The second science fiction novel I ever read (just after Philip K. Dick's Do Androids Dream of Electric Sheep?, from which I almost copied the title of this text), Ender’s Game, was a book that deeply impressed me. First, because the author uses 7-year-old children as the protagonists who must save the Earth from a space threat known as the Buggers. These child heroes are portrayed as cold and fearsome statesmen and military strategists, trained to the limit in the art of war, and at their young age they are already experts in advanced disciplines such as astrophysics, mathematics and computer science. Among all these geniuses, of course, Ender Wiggin stands out: the leader of the clan, charged with commanding the extermination of the Buggers when the time for the final battle comes. The book (the first of a saga) contains several extremely interesting characters and situations that lead us to think about possible scenarios humanity might face in the future, like space travel and contact with alien beings.

Second, there is a character from this saga who has powerfully caught my attention ever since she was introduced in the story, and she is the reason and main point of this text. Her name is Jane, and she first appears in the second book of the saga, Speaker for the Dead, as an inseparable companion of Ender, the protagonist. Jane is an advanced computational entity that exists in a sort of Internet, known in the book as the “ansible network”; she is an extremely powerful and complex program, capable of performing trillions of tasks simultaneously, with millions of levels of perception and attention. She can provide information of any kind in the universe instantaneously, and she communicates with Ender through a small jewel set in his ear, although she can also present herself holographically, adopting the form of a woman with a youthful face. She reveals herself only to Ender and cannot do so to the rest of mankind, because she knows she represents the greatest threat that the creation of such an advanced artificial intelligence (AI) can pose: her intelligence is far superior to that of many planets combined, and she can't be controlled or defeated.

While this is a science fiction book, this character of Orson Scott Card's offers us a small glimpse of what the future, perhaps no more than 10 years from now, may hold.

Is it possible for us to create something similar to Jane?

There are many types of Artificial Intelligence, but to keep things basic and understandable for this text, we will talk mainly about two, since all other types of Artificial Intelligence are subsets of them. These two types are:

  • Narrow Artificial Intelligence
  • General Artificial Intelligence

Normally, when the topic of Artificial Intelligence comes up, we think only of some narrow applications of intelligence, such as:

  • Playing FIFA video games
  • Controlling traffic signals
  • Self-driving vehicles

and many other equally specific ones. In all these applications, the intelligence is given a specific direction: the system acquires knowledge related only to that field, so it has no way of thinking outside that specific framework.
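
To make that "fixed framework" idea concrete, here is a minimal sketch of a narrow intelligence in the spirit of the traffic-signal example above. Every name and threshold in it is a hypothetical illustration, not a real system:

```python
# Minimal sketch of Narrow AI: a rule-based traffic signal controller.
# All names and thresholds are hypothetical, chosen only for illustration.

def choose_green_axis(queue_ns: int, queue_ew: int, current: str, elapsed_s: int) -> str:
    """Decide which axis ("NS" or "EW") gets the green light.

    The system's whole "world" is two queue lengths and a timer; it has
    no way to reason about anything outside that narrow framework.
    """
    MIN_GREEN_S = 10  # hold a green at least this long before switching
    if elapsed_s < MIN_GREEN_S:
        return current                     # too soon to switch
    if queue_ns > queue_ew:
        return "NS"                        # favor the longer queue
    if queue_ew > queue_ns:
        return "EW"
    return current                         # tie: no reason to change

# 12 cars waiting north-south vs. 3 east-west; green has been EW for 15 s.
print(choose_green_axis(12, 3, current="EW", elapsed_s=15))  # -> "NS"
```

However cleverly tuned, a controller like this will never decide to, say, clear a path for an ambulance; its framework simply contains no such concept.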

The problem comes with the other type, because it is completely open to learning new things by itself. General Artificial Intelligence can also learn new things through trial and error. Catastrophic failures can happen, like the following four risks:

  1. Malicious hostile risk – The only two feasible scenarios in which a maliciously hostile AI might arise are if it is deliberately programmed to be hostile (e.g. by a military or a terrorist group), or if humanity’s existence or behavior actively and deliberately contradicts one of the AI’s goals so effectively that the only way to achieve that goal is to wage war on humanity until either its will or its capability to resist is destroyed.
  2. Apathetic risk – There is effectively no risk of apathetic danger from an AI with a friendliness super-goal, but it is almost unavoidable from an AI without one. An apathetic AI is dangerous simply because it does not take human safety into account, as all humans naturally do. For example, without friendliness goals, an AI in charge of dusting crops with pesticide will dust a field even if it knows that the farmer is standing in it, inspecting his plants at that moment (a toy sketch of exactly this failure appears after this list).
  3. Accidental risk – An artificial intelligence working with incomplete data is capable of misjudging, just like a human. Mistakes of this sort are almost inevitable, since it is impossible to know everything there is to know about the world, but they are also the least dangerous of the four risks. Since AIs can learn from experience, each accident actually decreases the chance that the same mistake will happen again, improving the AI and making it safer.
  4. Unknowable risk – The real danger of well-designed artificial intelligence lies in its ability to reprogram and upgrade itself. Any AI capable of self-improvement is likely to eventually surpass the constraints of human intelligence. Once an artificial intelligence exists that is smarter than any human, it will be literally impossible for any human to fully understand it (the scenario of a superior Jane entity). Such an AI is also likely to continue improving itself at an exponential rate, making it increasingly impossible to comprehend or predict. At some point, the AI may even discover laws of causality or logic far beyond the comprehension of human minds, and the possibilities of what it can do become effectively infinite.
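
The apathetic risk in item 2 is easy to illustrate in code. Below is a toy sketch of the crop-dusting planner, with and without a friendliness term in its objective; all names and reward values are my own assumptions, not taken from any real system:

```python
# Toy sketch of the "apathetic risk": the same planner with and without a
# friendliness (human-safety) term in its objective. All names and reward
# values are illustrative assumptions.

def plan_dusting(fields, human_present, friendliness_goal: bool):
    """Return the fields the AI decides to dust.

    Dusting any field earns +1 coverage reward. Dusting a field with a
    human in it costs -100, but only if the objective includes that
    penalty; an apathetic AI never sees the cost at all.
    """
    plan = []
    for field in fields:
        reward = 1.0  # coverage reward for dusting this field
        if friendliness_goal and human_present.get(field, False):
            reward -= 100.0  # human safety outweighs any crop gain
        if reward > 0:
            plan.append(field)
    return plan

fields = ["north", "south", "east"]
humans = {"south": True}  # the farmer is inspecting the south field

print(plan_dusting(fields, humans, friendliness_goal=False))  # dusts all three
print(plan_dusting(fields, humans, friendliness_goal=True))   # skips "south"
```

Note that the apathetic planner is not hostile; the farmer simply never appears anywhere in its objective.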

So, poor design of AI systems, which in the end can be reduced to software, could lead to such chaos.
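
For a rough sense of scale on the fourth risk, consider this back-of-the-envelope sketch of recursive self-improvement. The 10% gain per cycle is an arbitrary assumption chosen only to show the shape of exponential growth, not a prediction:

```python
# Back-of-the-envelope sketch of recursive self-improvement (risk 4).
# The 10% gain per cycle is an arbitrary assumption, not a prediction.

capability = 1.0   # 1.0 = roughly human-level, in arbitrary units
GAIN = 0.10        # assumed fractional self-improvement per cycle

for cycle in range(1, 101):
    capability *= 1 + GAIN
    if capability >= 100:  # two orders of magnitude past the baseline
        print(f"cycle {cycle}: ~{capability:.0f}x the starting capability")
        break
```

At a constant 10% per cycle, the hundredfold threshold falls before cycle 50; whatever the real rate turns out to be, it is the compounding that makes the trajectory so hard to predict.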

Humanizing AI

Another important situation happening nowadays is the huge progress AI is making everywhere, gaining control of massive amounts of data and taking over human jobs in every cognitive task possible. AI is taking human form and shape, learning to occupy the roles in society that, until today, were founded on compassion, empathy and human understanding.

It seems fairly reasonable to me that AI is needed where precision and efficiency are key and where automation is necessary. Mass-scale production, computer programming and online web services make use of artificial intelligence in a constructive manner.

I strongly disagree, however, with the use of AI in public service and human welfare industries, which depend on the authenticity of human relationships. I can’t imagine a world in which a child suffering from depression is forced to talk to a robot about his feelings, or where people marry digitally created avatars because they are unable to maintain a human relationship (both of which are already happening in Japan).

In The Last Job on Earth, an animated short film in which a worker named Alice grows deeply frustrated when a machine refuses to give her her medicine, we see machines replacing all human labor, from domestic tasks to medical care. Alice goes through her days holding the last human job left on the planet, walking with desolation and bitterness through the streets of her city, full of unemployed people, dehumanized spaces and defective machines.

The message is clear: no future is bright if the human component is not considered in this evolution. Many of us work (and live) with a single portable device containing a large part of our externalized “self”, beyond our contacts, emails, photos and so on. We can contemplate our own fragmentation, our conversion into “multithread” entities, as communications and social networks invade intimate spaces such as the dining room, the bedroom or even the toilet. The Last Job on Earth focuses on the human aspect of this new industrial revolution, which isolates and confuses everyone who experiences it today.

"But I have eyes. And ears. I see everything in all the Hundred Worlds. I watch the sky through a thousand telescopes. I overhear a trillion conversations every day." She giggled a little. "I'm the best gossip in the universe."

(Speaker for the Dead, Chapter 18)