But we don’t know. Professor Nick Bostrom at the Future of Humanity Institute at Oxford University believes that superintelligence could put all humanity at risk, and he’s doing research on how we could prepare for such a technology and make it inherently safe, before we build it. Because, as some think, we will only get one chance.
The assumption is that intelligence is more powerful than anything else, and that human intellect can never compete with a superintelligence — an entity that might be to us like we are to a rabbit. Or an ant.
But there’s a small possibility that humans will be able to match the capabilities of something far more intelligent. Not as we are today, naturally. Let’s see.
I’ve been thinking about what Ray Kurzweil, entrepreneur and research director at Google, usually predicts — that humans will probably integrate with technology to increase our cognitive capacity. I recently discussed this with Danica Kragic Jensfelt, professor of robotics and computer science at the Royal Institute of Technology.
She thinks it will be difficult for humans to accept merging with different kinds of technology, since we will no longer know for sure what a human is.
“It’s frightening,” she says. “We have been humans for so long.” Yet she finds this perspective more likely than the classic science fiction they-will-fight-us scenario.
I also considered what Ergun Ekici, VP for emerging technologies at IPsoft, which develops the AI system Amelia, told me — that machines won’t take jobs from humans. His view of technology such as artificial intelligence is that it helps humans, raising the bar on what is possible for people to do, alone or in a group, all the way from those least trained to real experts in a field.
My concern, though, has always been that when machines reach the intelligence level of humans, there is nowhere left to push that bar. Machines will simply replace us.
Which, honestly, is not that bad. I sometimes think it’s bad luck to belong to the last generation that had to work…! And I believe there are lots of good aspects to this, if automation and AI can provide conditions for everyone to lead a good life at low cost. Humans could then concentrate on developing their skills and passions, and share them with others.
But… if we return to the concept of superintelligence — the hypothesis is that an intelligence explosion might lead to entities that are not at all interested in humans, and might not consider us important to preserve. Which is bad.
It struck me, however, that I would be quite happy to integrate with a system that enhanced my cognitive capacities, helping me sift through enormous amounts of information effortlessly and write pieces like this one, or other stories (my daily work), in a few seconds.
Now, what would this let me do?
Well, if the hard work is done in seconds, I might be able to grasp concepts at a higher abstraction level.
Ray Kurzweil, who has a theory on how to create a mind, describes our mental system as a hierarchical structure of abstraction levels, where we apply pattern recognition at each level, all the way from dots and lines to abstract concepts such as irony.
And here’s what struck me: There’s an obvious limit to the human brain’s level of intelligence, but I can see no immediate limit to its possible level of abstraction, provided that the underlying information process at lower abstraction levels is taken care of.
So this is the trick: If we integrate with cognitive systems that efficiently take care of abstraction levels up to a certain point, the human brain might be able to climb on top, using its creative capacity to develop a new level of abstraction, however high. And match any superintelligence.
Also, this might be one possible way in which a superintelligence could emerge for the first time.
There are a few catches however.
You could compare this idea to how our mind works today. It’s not very different, since vast portions of the information processing that supports our conscious mind are unknown to us. Building a higher-level mind on top of a machine intelligence would not be inherently distinct.
The main difference is that we are quite sure we can trust our unconscious mind, since we’ve grown up with it for a lifetime, and since methods for manipulating it, such as those pictured in the movie Inception, are not yet well developed, even though research on erasing targeted memories in the brain is under way.
Trusting an artificial mind, which would undoubtedly be connected to the Internet, is quite another thing. To interact with our mind fast enough, it will most probably have to be directly integrated with our brain. And even though trust could be built, as it often is with new technologies, by seeing that it works and is safe, the security issue cannot be overstated.
The risks range all the way from malicious manipulation to commercial offerings that tune your thoughts in exchange for some free stuff, just like today. But kind of different…
Another question, difficult to assess, is the time needed to train people to interact with such a mind and learn to reach new levels of abstraction.
Yet I believe we could see this as one possible way of building a superintelligence, with humans in the loop, hopefully limiting the dangers intrinsic to the power of an intelligence explosion.