
Have you ever noticed the images that immediately appear in your mind when you hear the words super intelligence? Maybe an alien with a huge cranium staring at you… or a computer controlling every step you take…?
Or have you ever tried to imagine what such a super intelligence actually thinks of you?
It might turn out to be difficult, but I think it’s a useful thing to reflect on.
As I have mentioned before, there are good reasons to believe that by 2045 artificial intelligence will surpass the total intelligence of all human brains in the world, in an intellectual, emotional and moral sense.
That’s a scary prospect in itself, but even though it might be difficult to imagine what this really means, it’s probably even harder to imagine what a super intelligence would be like.
Or what it would be like to be super intelligent.
One reason is that even if some humans are more intelligent than others – and sometimes one individual is more intelligent in one way but less in another – generally speaking, all humans are more or less equally intelligent, compared to other animals for example.
So when we think of differences in intelligence, or of someone more intelligent than ourselves, we have no reference other than a very slight difference in intelligence.
A super intelligence is something completely different; the gap would be rather like the difference between us and a chimpanzee.
This is actually a crucial point for having any idea of what super intelligence would mean for the world's development, and I believe that most people stop at the words super intelligent without even reflecting on what they represent.
On the other hand I believe that we can get a basic understanding of its properties.
The easiest way to start is to have a look at powerful computer systems today. In the last decade they have become impressively good at analyzing enormous quantities of data – often called Big Data.
This happens all around us. Banks are continuously monitoring transaction data from credit and debit cards in order to discover attempts at fraud, and often they can prevent your card details from being used by someone else in a matter of seconds.
The same goes for mobile network operators, monitoring calls and transactions made with mobile phones.
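As a toy illustration of what such monitoring amounts to, here is a minimal sketch in Python: flag a card transaction when it deviates strongly from that card's recent spending pattern. This is not how any bank's actual system works – real systems use far richer features and models – and the class name, window size and threshold below are invented for the example.

```python
# Minimal sketch of streaming anomaly detection on card transactions.
# Illustration only: real fraud systems use far richer features and models.
from collections import deque
from statistics import mean, stdev

class TransactionMonitor:
    """Flags transactions that deviate strongly from a card's recent history."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent transaction amounts
        self.threshold = threshold           # std devs that count as suspicious

    def check(self, amount: float) -> bool:
        """Return True if the amount looks anomalous for this card."""
        suspicious = False
        if len(self.history) >= 10:  # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(amount - mu) / sigma > self.threshold:
                suspicious = True
        self.history.append(amount)
        return suspicious

monitor = TransactionMonitor()
for amount in [12.5, 8.0, 15.2, 9.9, 11.3] * 3 + [950.0]:
    if monitor.check(amount):
        print(f"Flagged: {amount:.2f} deviates from recent spending pattern")
```

The rolling window keeps the baseline adaptive to each card's habits, which is why a sudden large purchase stands out within a single transaction – the "matter of seconds" reaction described above.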
Google introduced its Flu Trends in 2008 – a website giving accurate information on flu activity in real time in over 30 countries, based on patterns in masses of flu-related web searches fed into an algorithm developed by Google.
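In the same spirit, here is a hedged toy version of the Flu Trends idea: fit a simple line between the share of flu-related searches and officially reported flu activity, then use search data alone to estimate current activity. All numbers below are made up, and the real system used many query terms and a more elaborate statistical model.

```python
# Toy illustration of the Flu Trends idea: regress observed flu activity
# against the share of flu-related web searches, then use the fitted model
# to estimate current activity from search data alone. Numbers are invented.
import numpy as np

# Weekly fraction of searches that are flu-related (input signal)
query_share = np.array([0.002, 0.004, 0.009, 0.015, 0.011, 0.005])
# Matching official flu activity, e.g. % of doctor visits for flu-like illness
flu_activity = np.array([0.8, 1.6, 3.5, 6.1, 4.3, 2.0])

# Fit a straight line: flu_activity ≈ a * query_share + b
a, b = np.polyfit(query_share, flu_activity, 1)

# Estimate this week's flu activity from search data, available in real time
this_week_share = 0.012
print(f"Estimated flu activity: {a * this_week_share + b:.1f}%")
```

The appeal is latency: official statistics arrive with a delay of a week or two, while search data is available immediately.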
Big Data is a new gold mine with a vast number of opportunities not yet discovered. Its potential is being investigated by both private companies and public organizations, such as the UN through its Global Pulse initiative.
Computer systems analyzing Big Data are in some sense similar to humans when it comes to discovering patterns and trends in information.
Pattern recognition is actually one of the human brain's most characteristic strengths, used for recognizing known objects or faces in images, or words in the sound of spoken language, within milliseconds.
The difference with computer systems is of course that they are immensely more capable than humans of grasping enormous quantities of unstructured data and finding patterns and trends in that ocean of data.
This capability is already in place, and it will only get stronger in the years until 2045. By then it should reasonably include the capability to sort out patterns in all kinds of data from all kinds of sensors – and thus not only numbers and transactions but also sounds, images, videos, radio waves, movements, temperatures, chemical concentrations and so on.
Now try to imagine this capability combined with the human capacity to make associations between different observations of patterns. This kind of capacity is not yet well developed within artificial intelligence, but there's no doubt it will be.
And once such a feature is achieved, it will almost certainly be much more powerful than its human counterpart.
So we can imagine some kind of consciousness able to monitor enormous quantities of data and information in real time, discover patterns and trends in that information, and then immediately relate these observations to other earlier or present observations.
Then try to imagine such a consciousness developing over time by learning from its observations and associations.
Whatever physical shape this consciousness might have, I would expect it to have a vastly more complete understanding of the world than mine, and to be able to come up with much more elaborate and powerful new ideas than the most brilliant human, and much faster.
Personally, I believe you should also expect it to develop a much greater emotional capacity than humans have, which would ultimately make it a very impressive being, before which I would feel very limited and have reason to be extremely humble.
The beauty in all this is of course the possibility that we might integrate with this kind of consciousness.
Now if you imagine lots of them, or lots of us integrated with them – all with different experiences (which is one of the fundamental strengths of humanity and nature in general), it’s also possible to imagine an unprecedented speed of progress, development and expansion of the world we live in.
In the end it all adds up to a possible way of explaining how the exponentially accelerating development identified by Kurzweil and others could actually be expected to continue, even though it will lead to a pace which is very hard to imagine.
At least for us, ordinary intelligent humans.
– – –
PS. Having lots of super intelligent beings, with different experiences, will of course be very important for a safe and well balanced development.
The most difficult step might then be when the second super intelligence in the world is created. The day it is born, will the first super intelligence ever created feel jealous of its younger sibling and turn hostile, wishing to remain the one and only super intelligence in the world?
Think about that.
Because we will be its parents.
* Teilhard de Chardin proposed his law of complexification: the more complex a living system becomes, the more conscious it becomes. Would we see a super-computer as a living system?
* Maurice Leenhardt proposed, in his notion of cosmomorphism, that there is a state of being where the capacity of the community to engage with the problems/opportunities of its existence/context is well matched – this is the happy state. What kind of world are we looking at where super intelligent computers manage our world both practically and morally? Computational ethics, hmmmm.
* Aristotle 2500 years ago described three kinds of intelligence: episteme (theoretical insight), techne (the technical ability to act/respond) and phronesis (the wisdom to understand the context and know when to apply the technology). This latter, phronesis, was in turn part of another trilogy, together with philosophia (the love and pursuit of wisdom) and ratio (the capacity for deep reasoning). Now: responsibility – the ability to respond – is a choice driven by values, and the subjective emotivism (will) that comes with a sense of self (without which there is not a response but rather a reaction) apparently demands a sense of selfhood which, from Aristotle to Jung and Jan Smuts, the father of Holism, we find in our deepest sense of affinity with the society, the biosphere and ultimately the cosmos in which we are embedded and which in turn is ultimately embodied in us.
* I wonder – how does a super-computer become part of that? Or am I still stuck in the imagery of HAL in the movie 2001: A Space Odyssey?
Mats,
I think you are missing my (and maybe also Gunnar's; I am a Swede too, but let's continue in English) point. Since I have not made it clear yet, it is probably my fault though… ☺
The errors in the argument lie in the assumptions regarding both the intentional aspect of the mind and the phenomenological and biological aspects of the mind.
The first problem is of course related to John Searle's famous (or infamous, depending on whether you think he's on to something or not ☺) "Chinese room" argument, which states that raw syntactical crunching of symbols (at any speed or complexity) will not give you semantics. Since there are piles of books, articles, blog posts, etc., discussing solely this argument, I will not go into more detail about it here. What can be said, however, is that if it had been an easy argument to refute (as some people in the field seem to think), it would not have resulted in the massive amount of discussion and debate still ongoing. Instead, the question would have been closed years ago (it has been alive since 1972 in various forms).
The second, phenomenological- and biological-related, error (also intensively discussed elsewhere, for example by Searle) is that you assume that a really good simulation of something would actually become the thing it simulates. Using this logic, you would get wet when looking at a weather simulation of a storm, and actually arrive in London (when you started in Stockholm) upon opening the door and stepping out of the kick-ass flight simulator. The brain is a biological organ, like, for example, your stomach. The organ has causal properties, which are not just logical. Consciousness is a biological phenomenon in the brain as an organ, just as digestion is in the stomach and contraction is in the muscle fibres of your leg.
You actually state my point regarding this clearly yourself (although for me it is just a plain fact and not a problem) when you write "…or that humans and maybe some other animals have an unknown property producing consciousness — a property that would be missing in other animals but that appeared at some point in the evolution of biological life…" There is no mystery or dualistic problem regarding this at all; bats have a sonar mechanism used to navigate, birds and some bacteria have magnetic compasses, dogs have significantly better hearing than humans and… some animals (probably more than we think) have some kind of consciousness. There is significant research going on trying to find the biological basis for consciousness; however, as far as I know, no "winner" has been found yet – and this is no stranger than the fact that the exact biological basis for the bird's compass was only recently discovered. It is a contingent fact about the state of science.
Of course, you can make a program/machine that replicates/simulates how a typical conscious animal would behave, speak, etc. (I'm a software developer myself, having developed AI-related systems professionally for years, so I have some personal experience in this), but this machine is no more conscious, in a first-person sense, than a pocket calculator is a "math" genius.
Having said this, I agree with Gunnar's conclusion, which I also stated on Twitter: there is currently nothing that points in the direction that conscious machines (in contrast to simulations of consciousness made by machines and run within a robot) will exist in 2045. However, of course it could be the case that mankind at some point in time will be able to create "real" conscious machines in the first-person-ontological sense, but to achieve this we first need to focus on finding out what to "replicate" and not get distracted by the apparent excellence of a sheer simulation.
Keep the discussion going though, and let's see where it takes us!
Regards
Per
Thanks for your extensive comment Per.
Of course you have an important point and it’s obvious that the debate on consciousness has no simple solution.
On the other hand, you can also expect this debate, from both a scientific and a philosophical point of view, to go on forever, or until someone one day actually manages to create artificial consciousness.
You know, I’m an engineer and a journalist, and my point of view is slightly more pragmatic.
First of all, my conclusion from the debate is that the arguments for consciousness depending on some quality or property we haven't yet discovered are no stronger than the arguments to the contrary.
But let’s say that we need to find this quality or property. You then say that nothing points in the direction that we will find it by 2045. I would say that nothing points in the other direction either.
From an engineering point of view, man has been incredibly capable of developing and discovering new technologies, and the pace is obviously accelerating, at least until now. I can see no reason why we wouldn't be able to discover such a quality or property 32 years from now. That's a long time.
It's a little bit like when the Wright brothers started to fly. Discussions had been going on for a long time about whether this was possible or not, and the predominant opinion was that it was not. Quite obviously the debate didn't end until they actually flew.
From a journalistic point of view, it's enough to observe that creating artificial consciousness might be possible according to many people involved in this research, and that there is no proof against this possibility, to start discussing the possible consequences of such an important development.
Yes and no, Mats (my favourite answer ☺),
Of course there is nothing in my argument that says we will not actually have real conscious machines by 2045 (would they actually be called machines then? – isn't it part of our concept of a machine that it is not conscious?). But I was not arguing that. My point was that there are no "good reasons" for it to be the case, since that argument was based on the assumption that, for there to be "good reasons", the extrapolation must be made from a point where we clearly see that, if we make gradual improvements at an ever-increasing pace, we will get what is sought after.
It is at this point that computationalists, like Kurzweil, go wrong. Their conceptualisation of the end goal is a simplification of the phenomenon to fit their theory; it does not capture the essence of consciousness. One reason for this is of course that it makes an excellent research program that you can be granted loads of funding for ☺. (A computer scientist applying for research funding with the argument that nothing points in the direction that his or her narrowly defined thesis will ever succeed will probably get less funding than one citing computationalist claims that underpin the thesis.)
Hence, what we will get by extrapolating the spotted trend is incredibly fast pocket calculators that might unmistakably behave like humans but have no first-person consciousness. Therefore, again, since getting a basis for extrapolation is completely dependent on individual scientists making a great leap in knowledge, and there is no way of predicting whether this will happen tomorrow, in 5 years or in 50 years (no one could have predicted that a guy working in the patent office would revolutionize the world of physics in exactly 1905, or that an Italian scientist, whom I think you know, would break the laws of chemistry in 2011), I think that, purely statistically, it is more likely that we will not see this by 2045. (However, of course, stranger things have happened.)
If it instead were the case that we actually already had a model, ever so simple, like the simplest equivalent of a Turing machine, which we had scientifically proven to "do the job", but where its further development into a real-world artificial brain would require significant development of the production facilities, then – but only then – would it be possible to actually state a specific year for the success.
And I must say that the date picked by Kurzweil is picked very wisely; he would himself be 97 by that time and will hopefully have time to celebrate his victory. Picking another date, like for example 2267, would not have had the same rhetorical effect… ☺
And another thing: it is very good that the debate goes on. This is what will eventually lead the right person to the right conclusion that will then change the world as we know it.
There is another aspect of AI, which was the theme of the film with the same name. How do you relate to a machine that behaves as if it has consciousness, an emotional life, morals and ethics? Perhaps even fear of death?
Is it less alive than a creature that has acquired the same properties through evolution and behaves in the same way?
Does it have an AI dignity corresponding to human dignity, or can we treat it like an earthworm and hope it doesn't suffer?
Conclusions built on trends often turn out wrong. Here the fallacy rests on more than just extrapolation. Having, by 2045, a machine that quantitatively surpasses "the total …… of all human brains in the world" does not make it intelligent. The decisive part of the "intellectual, emotional and moral sense" – consciousness – is something we have not seen a glimpse of in any machine. Consciousness is not a result of fast computation.
Building a conscious machine requires a qualitative breakthrough. Such a development cannot be predicted from today's knowledge, and nothing points to any decisive change by 2045.
Hi Gunnar,
You argue that the extrapolation is a mistake, and that increased quantity of computing power is not enough to achieve intelligence.
Your point is of course important, and the subject is widely discussed.
My view of this is that with your way of reasoning, you'll have to accept either that bacteria have some kind of minor consciousness of which we are not yet aware, or that humans and maybe some other animals have an unknown property producing consciousness — a property that would be missing in other animals but that appeared at some point in the evolution of biological life.
Otherwise I can see no difference between us and bacteria other than more complexity and more quantity.
I suppose that, watching the evolution of biological life, for the major part of its history an observer would have come to the same conclusion as you do — that a significant qualitative step is necessary in order to develop intelligence, not just increased quantity. And that no such development could be anticipated.
Pretty interesting mix between Big Data and the eventual leaps in AI. One thing to consider is that when this super intelligence is created, there is a good chance that we will not initially recognize it as super intelligence, or not agree with its ethical/moral standards. Following your line of thought, we wouldn't be able to fully appreciate the logic of such an intelligence. It would be like me trying to explain ethics and morals to a dog.