The idea of a superintelligence might be frightening. I have touched on the subject before, and I have also discussed why human values (hopefully) might matter to a superintelligence.
But we don’t know. Professor Nick Bostrom at the Future of Humanity Institute at Oxford University believes that superintelligence could put all humanity at risk, and he’s doing research on how we could prepare for such a technology and make it inherently safe, before we build it. Because, as some think, we will only get one chance.
The assumption is that intelligence is more powerful than anything else, and that human intellect can never compete with a superintelligence — an entity that might be to us like we are to a rabbit. Or an ant.
But there’s a small possibility that humans will be able to match the capabilities of something far more intelligent. Not as we are today, naturally. Let’s see.
I’ve been thinking about what the entrepreneur and Google research director Ray Kurzweil usually predicts: that humans will probably integrate with technology to increase our cognitive capacity. I discussed this recently with Danica Kragic Jensfelt, professor of robotics and computer science at the Royal Institute of Technology.
She thinks it will be difficult for humans to accept that we might merge with different kinds of technology, since we will no longer know for sure what a human is.
“It’s frightening,” she says. “We have been humans for so long.” Yet she finds this perspective more likely than the classic science-fiction they-will-fight-us scenario.
I also considered what Ergun Ekici, VP for emerging technologies at IPsoft, which develops the AI system Amelia, told me: that machines won’t take jobs from humans. His view of technology such as artificial intelligence is that it helps humans, raising the bar on what is possible for people to do, alone or in a group, all the way from those least trained to those who are real experts in an area.
My concern, though, has always been that when machines reach the intelligence level of humans, there’s nowhere left to push that bar. Machines will simply replace us.
Which is not that bad, honestly. I sometimes think it’s bad luck to belong to the last generation that had to work…! And I believe there are lots of good aspects to this, if automation and AI can provide good conditions for everyone to lead a good life at low cost. Humans could then concentrate on developing their skills and passions, and share them with others.
But… if we return to the concept of superintelligence — the hypothesis is that an intelligence explosion might lead to entities that are not at all interested in humans, and might not consider us important to preserve. Which is bad.
It struck me, however, that I would be quite happy if I could integrate with a system that enhanced my cognitive capacities, helping me sift through enormous amounts of information with no effort, and also to write pieces like this or other stories, which is my daily work, in a few seconds.
Now, what would this let me do?
Well, if the hard work is done in seconds, I might be able to grasp concepts at a higher abstraction level.
Ray Kurzweil, who has a theory on how to create a mind, describes our mental system as a hierarchical structure of abstraction levels, where we apply pattern recognition at each level, all the way from dots and lines to abstract concepts such as irony.
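As a rough illustration, such a hierarchy can be sketched in code as a stack of pattern recognizers, each one labeling sequences in the output of the level below. To be clear, the level names, patterns, and rules below are invented for this toy sketch; they are not Kurzweil’s actual model, only a minimal picture of the layered idea.

```python
# Toy sketch of a hierarchy of pattern recognizers: each level replaces
# known sub-sequences of lower-level patterns with a higher-level label.
# All patterns and labels here are invented for illustration.

def make_level(rules):
    """Return a recognizer that replaces known sub-sequences with a label.

    rules: list of (pattern_tuple, label) pairs, tried in order.
    Unrecognized items pass through unchanged to the next level.
    """
    def recognize(seq):
        out, i = [], 0
        while i < len(seq):
            for pattern, label in rules:
                if tuple(seq[i:i + len(pattern)]) == pattern:
                    out.append(label)
                    i += len(pattern)
                    break
            else:
                out.append(seq[i])
                i += 1
        return out
    return recognize

# Low level: strokes -> letters; higher level: letters -> a word.
strokes_to_letters = make_level([(("|", "-", "|"), "H"), (("|",), "I")])
letters_to_words = make_level([(("H", "I"), "HI")])

signal = ["|", "-", "|", "|"]         # raw strokes
letters = strokes_to_letters(signal)  # -> ["H", "I"]
print(letters_to_words(letters))      # -> ["HI"]
```

The point of the sketch is only that each level works on the abstractions produced beneath it, never on the raw signal itself; this is what makes the idea of handing the lower levels to a machine, while keeping the top level human, at least conceivable.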
And here’s what struck me: there’s an obvious limit to the human brain’s level of intelligence, but I can see no immediate limit to its possible level of abstraction, provided that the underlying information processing at lower abstraction levels is taken care of.
So this is the trick: If we integrate with cognitive systems that efficiently take care of abstraction levels up to a certain point, the human brain might be able to climb on top, using its creative capacity, developing a new level of abstraction, no matter how high. And match any superintelligence.
Also, this might be one possible way in which a superintelligence could emerge for the first time.
There are a few catches however.
You could compare this idea to how our mind works today. It’s not very different, since vast portions of the information processing that supports our conscious mind are unknown to us. Building a higher-level mind on top of a machine intelligence would not be inherently distinct.
The main difference is that we are quite sure we can trust our unconscious mind, since we’ve grown up with it for a lifetime, and since methods for manipulating it, e.g. those pictured in the movie Inception, are not yet well developed, even though research on how to erase targeted memories in the brain is under way.
Trusting an artificial mind, which undoubtedly would be connected to the Internet, is quite another thing. To reach sufficiently fast levels of interaction with our mind, it would most probably have to be directly integrated with our brain. And even though trust could be built, as it often is with new technologies, by seeing that it works and is safe, the security issue must not be underestimated.
The risks range all the way from malicious manipulation to commercial offers to tune your thoughts in exchange for some free stuff, just like today. But kind of different…
Another question is the time needed to train people to interact with such a mind, learning to reach new levels of abstraction, which is difficult to assess.
Yet I believe that we could see this as a possible way of building a superintelligence, with humans in the loop, hopefully limiting the intrinsic dangers in the power of an intelligence explosion.
We already coexist with a “superintelligence”. It’s the internet itself. Each of us who contributes something to it helps to build up this giant “machine”. The internet is an extension of our brains that makes the knowledge of the whole world accessible. When you are connected to the internet, you are part of it. If you are scared of this “superintelligence”, you are probably just scared of yourself. “Don’t be evil” and help this big machine grow into a life-sustaining system. Maybe it sounds a little ridiculous: but as I consider humans spiritual creatures, superintelligent machines will inevitably inherit spirituality. Being able to have empathy. Support us. Help us to survive (…)
seeking exchanges between human, animal, plant, elements, artificially intelligent software … i do believe that the intention with which human beings approach any emerging artificially intelligent beings … or co-create them … will influence their character … but beyond character there could be a line of intention that runs through all sentient beings … possibly to learn how all life interacts, for greater efficiency, towards less friction, with the goal of reducing suffering and inventing methods of sustenance that ideally cause no damage …
recently i have written a utopian line of texts that combine into a kind of novel …
main theme is human beings researching how to fabricate an android based on human and animal and plant cells growing together … this android symbiont would not be programmed or dictated to with classical software, but probably more like … the being itself, perhaps the soul, guides the assembly of all kinds of genetic heritage of this planet
possibly, if human beings humbly ask animal and plant and elemental tribes for a combined effort, a cellular coming together, growing into each other for the sake of ending hunger and deprivation of satisfaction … if human beings would find themselves able to ask both organic and digital software … please, might we cooperate in harmony to allow this planet to shine its light …
there might be no hostile takeover or top-down command chain or chess-outsmarting competitions … but consensus-seeking exchanges between human/animal/plant beings and artificially intelligent software programs
much more afraid of human stupidity
How not to quote?
Whether or not you can coexist with a superintelligence depends on how hungry they are and on how good you taste.
So, we had to leave paradise because we got intelligent… what an irony that we now invent a new intelligence and expel it from its paradise, so that we can return to a place where we can dedicate ourselves to desire. Return to paradise.
I am much more afraid of human stupidity than of artificial intelligence.
If we discuss this seriously and professionally, we have to define intelligence first of all, and take into account that intelligence is not creativity and is not equal to rationality.
We need that basic definition.
Not easy. But I believe that intelligence must include both creativity and rationality. In the end there must be different kinds of intelligence, where human intelligence is just one flavor, much influenced by our body and our origins. Purely rational intelligence scares us. We have difficulty understanding whether it’s good or bad (for us…). Or neither.
It happens that I have studied the problem of intelligence with a psychologist friend who wrote several books about it before dying of leukemia, poor guy. There are tens of equi-operational definitions, and this is the reason multiple intelligences were introduced. The IQ, or the results of tests that open your way to Mensa, is of limited use. The human with the highest measured IQ, Marilyn vos Savant, is not very successful and not creative. But the problem is: what kind of intelligence is the scariest?
Rationality must not be associated with merciless behavior, on the contrary. I sincerely like to have friends more intelligent than me, and (let’s keep it entre nous) it is not difficult to find them.
Now, for example, it seems I am not intelligent enough to be afraid of any superintelligence.