Since the 1940s, developing a machine that matches humans in general intelligence has been a major challenge. Beginning with the work of Alan Turing, who first asked whether a machine could think, it has now become a hot topic given the phenomenal potential of artificial intelligence (AI). Two days ago, new progress was made: two machines succeeded in beating humans on Stanford University's reading test (the Stanford Question Answering Dataset, or SQuAD), showing that the dream of creating "superintelligence" comes closer day after day.
Nick Bostrom's "Superintelligence" studies the possibility of machines becoming the smartest minds, the process by which this could happen, and the threat it would pose. The prospect of superintelligence existing within a few decades raises worries about the fate of humanity. So the question is: is the prospect of achieving the type of superintelligence described in the book realistic?
We will first define the notion of superintelligence, then discuss that definition and the probability that a superintelligence will exist one day. We will finish by presenting the most interesting ideas resulting from the reflection on superintelligence.
Theoretically, superintelligence is defined as a level of intelligence vastly greater than contemporary humanity's combined intellectual wherewithal. In other words, it means possessing common sense and an effective ability to learn, reason, and plan to meet complex information processing challenges across a wide range of natural and abstract domains. First, if we had to judge whether a superintelligence as thus defined could exist, we would certainly conclude that this type of agent is an ideal. Indeed, "wide range" suggests an infinity of domains in which the agent would have to succeed in order to count as a superintelligence. So the definition itself raises the question of how many domains must be tested before declaring an agent a superintelligence. It also raises the question of the quantity and quality of the criteria that we necessarily have to study to define an agent as a superintelligence. This is the essence of the problem, and the book tries to give some elements in order to be able to distinguish a superintelligence in practice.
Different kinds of superintelligence are described. In practice, there is first human-level machine intelligence (HLMI), which is described as probably the first step before creating a superintelligence. There is also "engineering superintelligence", an intellect that vastly outperforms the best current human minds in the domain of engineering.
Qualitatively, there are three forms of superintelligence: speed, collective, and quality. Speed superintelligence is an intellect that is just like a human mind but faster. Collective superintelligence corresponds to a system composed of a large number of smaller intellects such that the system's overall performance across many very general domains vastly outstrips
that of any current cognitive system. Quality superintelligence is a system that is at least as fast as a human mind and vastly qualitatively smarter.
Let us examine how realistic each definition of superintelligence is. Concerning speed superintelligence: how could we possibly measure an intellect faster than the human mind? If it is faster, we might not even be able to measure it. Concerning collective superintelligence: in how many domains would we have to test the system in order to claim the superiority of the agent? What would the experimental conditions have to be to qualify a system as superior? What if a system can be qualified as superior under some specific experimental conditions but not under others? And what about the degree of precision of the measurements?
All these questions bear on the exactness of the definition of superintelligence. A more realistic definition would include the notion of probability, because that is the only way we have to be as exact as possible. A superintelligence would then be, for example, an agent which on average obtains better results than humans.
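To make this probabilistic reading concrete, here is a minimal sketch (my own illustration, not the book's): score a candidate agent and a human baseline over a sample of domains and compare the averages. The domain list and scoring functions are placeholders, not real evaluations.

```python
import random

random.seed(0)

# Hypothetical sample of task domains; real evaluations would use benchmarks.
DOMAINS = ["engineering", "mathematics", "strategy", "language", "science"]

def average_score(score_fn, domains):
    """Mean performance of an evaluator over the sampled domains."""
    return sum(score_fn(d) for d in domains) / len(domains)

# Placeholder evaluators (assumed score distributions, not real data).
def human_score(domain):
    return random.gauss(0.70, 0.05)

def agent_score(domain):
    return random.gauss(0.85, 0.05)

if average_score(agent_score, DOMAINS) > average_score(human_score, DOMAINS):
    print("Agent outperforms the human baseline on average over this sample.")
```

On this reading, "superintelligence" becomes a statistical claim about a sample of domains rather than a claim about all possible domains, which sidesteps the infinity problem raised above.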
Let us now discuss the feasibility of this kind of superintelligence. Its feasibility is first justified by the fact that, since evolution produced intelligence, human engineering should soon be able to do the same. In fact, in view of the real progress of AI, this argument seems justified, and the prediction of superintelligence appearing within a few decades seems realistic (by 2075 for 90% of surveyed scientists). The process of creating a superintelligence is described in two phases: the first phase begins when the system reaches the human baseline for individual intelligence, and the second when the system becomes able to improve itself through its own capabilities. Is this possible? There is no reason to suppose that Homo sapiens has reached the apex of cognitive effectiveness attainable in a biological system. And as there is no such reason, we cannot deny the possibility.
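As a rough illustration of these two phases, here is a toy model (my own assumption, not Bostrom's): capability grows slowly and additively until it reaches the human baseline, after which part of each step's progress is reinvested, so growth compounds.

```python
# Toy model of the two-phase process; all numbers are illustrative assumptions.
HUMAN_BASELINE = 1.0   # normalized human-level capability
REINVESTMENT = 0.5     # assumed fraction of capability applied to self-improvement

capability = 0.25      # starts below the human baseline
for step in range(10):
    if capability < HUMAN_BASELINE:
        capability += 0.25                 # phase 1: slow, external progress
    else:
        capability *= 1 + REINVESTMENT     # phase 2: self-improvement compounds
    print(f"step {step}: capability = {capability:.2f}")
```

Run it and the trajectory is linear until the baseline is crossed, then exponential: a crude picture of the "explosion" the book returns to later.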
Now, let us talk about the sustainability of artificial intelligence. Surprisingly, surveys about the overall long-term impact of HLMI, for example, show that around 50% of respondents think it will be on balance good. Only 10% see superintelligence as a threat… perhaps wrongly.
This leads us to the two most interesting thoughts to remember from the book, in my opinion. Superintelligence raises the question of human control over machines: how would we be able to control and fight smarter minds if they became dangerous? The book sounds a real alarm by trying to set ethical limits on the creation of superintelligent machines. This is understandable: even without superintelligence, we have already succeeded in creating nuclear weapons, able to destroy the whole of humanity, so what would happen if superintelligence existed? The whole world would be redefined by this new kind of being, and we do not know whether that would be to our advantage. In my opinion, it would have dramatic consequences.
The use of AI has already caused trouble: the example of the 2010 Flash Crash shows how AI can become uncontrollable within a few seconds. Since we want to create a process that can improve itself through its own capacities, the phenomenon of explosion described in the book is unavoidable. So it is pretty obvious that the first superhuman AI that we build has to be a safe one.
Similarly, threats also come from cognitive transformations. In fact, weak forms of superintelligence are achievable by means of biotechnological enhancements: for example, iterated embryo selection. But this raises ethical questions too: on which criteria will embryos be selected, and what about the embryos that are not selected? Will they be killed? But are they not human lives, after all? Here lies a danger too, because the most horrible crimes have been committed on the basis of genetic selection criteria, as was the case during the Second World War.
To conclude, superintelligence is a momentous prospect, with outcomes that could be incredibly good as well as incredibly bad, and frightening for humans. Limits have to be set on multiple points: prioritizing risk-reducing intellectual progress over risk-increasing intellectual progress, controlling AI by creating an inescapable box from which the AI could not persuade us to free it, creating an AI that is friendly or without motivations… In the end, influential figures on this topic are unanimous, such as Eliezer Yudkowsky of MIRI (the Machine Intelligence Research Institute) or Brian Tomasik of the Foundational Research Institute. We have to remain altruists to avoid a catastrophe.