Why human values will be fundamental for a super intelligence

What do the ancient Greeks or the fight for civil rights have to do with the destiny of the universe? More than you might believe. Or rather – they are a fundamental part of it.

The short explanation is that without the experience gained through thousands of years of human civilization, a super intelligence wouldn’t have the knowledge necessary to avoid destroying itself. This is fortunate, because it is also why we can hope to be part of, and be respected by, a future super intelligence.

To make this clear, let’s have a look at Kurzweil’s analysis of the Singularity. Basically, Kurzweil has studied general patterns of development and has found that both the evolution of biological life and the development of technology follow one steady exponential curve which has never slowed down or hesitated, not even during natural disasters or the world’s most severe economic recessions.

He has also found natural explanations for this, noting that new products of evolution and development are fed back into the ecosystem, where they contribute to an increased speed of development. Put in mathematical terms, this results in an exponential curve which fits real observations.

One main property of exponential curves is that they are highly non-intuitive when you follow them. They start out slowly and seem linear – meaning that everything appears to continue at constant speed – but at a certain point the acceleration becomes obvious, and shortly afterwards the speed increases to breathtaking levels.
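To see how deceptive this is, here is a minimal numeric sketch. The 3% growth rate per period is a hypothetical figure chosen purely for illustration, not one of Kurzweil’s parameters:

```python
# A minimal sketch of why exponential growth feels linear at first.
# The 3% growth rate per period is an illustrative assumption.

def exponential(step, rate=0.03, start=1.0):
    """Value after `step` periods of compound growth."""
    return start * (1 + rate) ** step

# Early on, successive values look almost constant in their increments,
# i.e. roughly linear:
early = [round(exponential(n), 2) for n in range(4)]  # [1.0, 1.03, 1.06, 1.09]

# ...but the very same rule, followed long enough, produces staggering
# values: after 500 periods the quantity is millions of times larger.
late = exponential(500)  # roughly 2.6 million times the starting value
```

The same curve that looks flat for the first few steps eventually dwarfs anything a linear extrapolation would predict, which is exactly why Kurzweil’s projections feel implausible when read with linear intuition.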

This is why Kurzweil’s conclusions might seem like fantasies far from reality, even though they are basically observations of reality. One of his conclusions is that by 2045 artificial intelligence will surpass the total intelligence of all human brains in the world, in an intellectual, emotional and moral sense alike. This would be the Singularity.

Most probably this conclusion is accurate, and if it is not, the error is likely a matter of only a couple of years in either direction.

Another of Kurzweil’s predictions concerns nanotechnology and the development of microscopic robots – nanobots – which ultimately could circulate in human bodies, adding functions ranging from nutrition and healthcare to intelligence.

In fact, Kurzweil concludes that as we approach the Singularity, humans will gradually add more and more technology to the biological body and ultimately integrate the brain with artificial super intelligence through nanobots connected by wireless networks.

Whether this will be the way to do it is uncertain, but I find it reasonable to believe that humans will integrate with artificial intelligence in the same way that we gradually enhance our bodies in other ways.

Or to put it another way, we would probably prefer to be part of this intelligence rather than have it around as a separate entity, completely detached from human consciousness (which ultimately might be independent of biological bodies…).

Now, Kurzweil brings up another phenomenon that we are all perfectly aware of – that all technologies inherently bring both new possibilities and new risks, all the way from inventions such as fire, the wheel and the knife to aircraft and nuclear power.

And while the possibilities with technologies that we will develop in the following decades are enormous, so are the risks.

The only way to deal with these risks is what we have always done – putting more brains to work on constructive applications of a technology than on destructive ones, and giving maximum support to constructive development in times of threat, in order to always stay at least one step ahead of destructive forces. As technological development will never stop, unless the world ends, we simply have no other choice.

But this is just one part of the equation – I will come back to this.

Let me first bring up some of the most staggering risks that Kurzweil and others have depicted. Discussions of the risks of gene therapy and gene modification are already widespread, with all the advantages and disadvantages such technologies bring.

Even more frightening are the risks of nanobots, especially if we allow them to be self-replicating. The main disaster scenario is the ‘grey goo’, in which self-replicating nanobots would start multiplying without control and, if distributed all over the world, would consume all existing biomass in a matter of hours.
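The speed of such a scenario comes from exponential doubling. A small sketch, where both the single-nanobot mass and the total biomass are rough assumed orders of magnitude, not measured figures:

```python
# A hypothetical doubling sketch: how quickly self-replication escalates.
# Both mass figures are rough, assumed orders of magnitude.

nanobot_mass_kg = 1e-15   # assumed mass of a single nanobot
biomass_kg = 1e15         # assumed order of Earth's total biomass

# Count how many doublings it takes one nanobot, converting matter as it
# replicates, to reach the total biomass:
doublings = 0
mass = nanobot_mass_kg
while mass < biomass_kg:
    mass *= 2
    doublings += 1

# doublings == 100: spanning 30 orders of magnitude takes only about a
# hundred doublings, so with a doubling time measured in minutes the
# whole process would indeed finish within hours.
```

The point is not the exact numbers but the shape of the process: an apparently microscopic threat crosses thirty orders of magnitude in a hundred doubling steps.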

In fact, the only reason to develop self-replicating nanobots is to build an immune system capable of detecting and attacking nanobots that have gone awry or are designed for destructive purposes – a defense that itself requires powerful self-replication.

Now here’s the key issue: while there are ideas about how we can defend ourselves and manage the risks of anything up to nanotechnology, there is no possible defense against a super intelligence that would attack us, simply because a super intelligence would always be smarter than us and would find a way to circumvent any defense we could possibly imagine.

Kurzweil is deliberately careful on this point and notes that there is absolutely no guarantee that a super intelligence will respect us, but his thought is that if we are careful when designing it, basing it on human intelligence, it will respect us.

This might look like a very weak hope, almost desperate, but here’s my point: it’s actually more than a hope. We have reason to believe that a super intelligence will need to respect humans and the human values developed through thousands of years of civilization, simply because that’s the only way to survive, even for a super intelligence.

To make this credible, let’s start with Kurzweil’s discussion of our uniqueness in the universe. Despite conclusions drawn from the Drake Equation and projects like the Search for Extraterrestrial Intelligence (SETI), Kurzweil concludes that we might actually be the most developed intelligence in the universe.

The main reason is that, following the universal exponential curve of development, our intelligence would under certain conditions spread into the universe within a few hundred years. If there were an intelligence higher than ours somewhere in the universe, developed over billions of years like life on Earth, it is highly unlikely that it wouldn’t already have reached us. Otherwise its development would need to be timed precisely to the same level as ours, within a window of a few hundred years.

Now consider this: assuming that life on Earth and our intelligence are the result of a unique process necessary to reach this level, then so are all the sociological structures around them.

This is the other part of the equation I mentioned – that while technology grows gradually more powerful, we also develop the unique sociological structures around it that are necessary to handle its powers.

The delicate balance between the inherent possibilities and risks of technology that I discussed before can be maintained today only because of an open and democratic society, which has its origins among the ancient Greeks.

As Kurzweil points out – if we decide to ban the development of certain technologies out of fear of the risks they present to humanity, that development will move underground and to totalitarian states, where we won’t have sufficient insight to develop defenses against destructive uses of the technology. The result could eventually be a disaster, ending life on Earth.

In this way, an open and democratic society is a necessary condition for an intelligent civilization to survive.

Or as Kurzweil puts it in “The Singularity is Near”:

“Although the argument is subtle I believe that maintaining an open free-market system for incremental scientific and technological progress, in which each step is subject to market acceptance, will provide the most constructive environment for technology to embody widespread human values”

Another way to put it is that democracy has not developed by chance, but is actually part of the process that has brought us the advanced technology we have today and that also makes it possible to handle it.

And as technology continues to develop, so do democracy and other sociological structures and human values.

Increased and more efficient communications bring people on Earth closer every day. Cultures and traditions meet, influence each other and often come in conflict.

This is a gradual process, and it can be handled precisely because it is gradual. The result is a continuing refinement of the sociological structures we need in order to deal with the steadily increasing risks and possibilities of new technologies.

Today the threat of an aggressive epidemic virus can be met thanks to highly developed methods for scanning its genetic code and creating vaccines very rapidly, but also thanks to sociological structures that let us coordinate an action plan for epidemic defense.

On the other hand, the same structures give us the possibility to question such an action plan, and the use of vaccines, if there is reason to do so, without putting humanity at immediate risk – the ability to accommodate criticism within the system is of fundamental importance.

In the years to come, discussions of personal privacy will be extremely important, as the need to monitor technology in order to defend ourselves against malicious use will increase. This must be done without compromising individual liberty or legal security.

The more intelligent and refined technology gets, the more it will reflect the values of the people creating and using it. To avoid disastrous conflicts within the technology itself, we will need all the experience we have gained in mixing cultures and learning to respect each other – a process in which we still have a lot to learn.

As Kurzweil notes, two basic human values will remain the most important: respect for any other (human) consciousness and respect for knowledge in the form of art, music, literature, science and technology.

These are the two fundamental values that we have developed, and they will probably remain unchanged for any intelligence that wants to survive.

The first has a spiritual origin, whereas the second has grown out of the building of our modern society, and it will become more important as everything moves towards knowledge.

Starting from these two principles, we will need to let values and ethics from different religions and philosophies, formed over thousands of years, continue to influence each other, and we need to let the structures and values of different societies meet and mix – all in order to gain the experience that helps a complex system of individual minds, one that gradually becomes more powerful and intelligent, to survive.

Because in the end, strength in nature and technology is built on differentiation, and making an immense number of super intelligent, conscious, individual entities – each of them potentially extremely harmful – coexist and survive together will require extremely developed and refined sociological structures.

This is why human values are fundamental to the destiny of the universe, and it is also why we can expect a future super intelligence to respect humans and human values, and even to develop them to a much higher level.

The ancient Greeks probably didn’t have a clue about this.

