Autonomous cars highlight fundamental questions
I like the recent call from Molly Wood at CNET News (where I used to work in 2009): “Self-driving cars: Yes, please! Now, please!”.
She notes the obvious advantages of autonomous cars – safety, efficiency and environmental improvements – and observes that the forces working against adoption are fear and the love of driving, emotions so strong that Alan Mulally, CEO of Ford Motor Company, recently insisted that Ford would not be developing self-driving cars, or even introducing a self-driving mode in its vehicles.
I suspect that Mulally’s insistence at this point has more to do with not wanting to unsettle passionate drivers and customers – or Ford’s shareholders – than with anything else.
Other unanswered questions – whether a police officer should have the right to pull over an autonomous vehicle, or what insurance such vehicles would need – were brought up at the symposium.
These are all important issues that need to be resolved. Molly Wood’s proposal for pushing consumer and manufacturer adoption – one I find valid – is to introduce mandatory auto-mode zones or drive times.
Another interesting and hands-on approach is the EU project SARTRE, which aims to realize a system of road trains, or ‘vehicle platoons’, in which a number of cars are automatically guided at controlled distances behind a lead vehicle with a professional driver.
Volvo Car Corporation, which participates in the project, recently released a video showing three cars automatically guided at six meters (about 20 feet) from each other at 90 km/h (56 mph) behind a lead truck.
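As a back-of-the-envelope check (a sketch of my own, not part of the SARTRE system), the six-meter gap at 90 km/h corresponds to a time gap far below typical human reaction time – which is exactly why this kind of spacing requires computer control:

```python
# Illustrative arithmetic only: the gap and speed are from the demo
# description; everything else is simple unit conversion.
gap_m = 6.0                   # distance between vehicles in the platoon
speed_kmh = 90.0              # platoon speed
speed_ms = speed_kmh / 3.6    # convert km/h to m/s -> 25.0 m/s

time_gap_s = gap_m / speed_ms # time separation between vehicles
print(f"speed: {speed_ms:.1f} m/s, time gap: {time_gap_s:.2f} s")
```

At 0.24 seconds, the following distance is several times shorter than a typical human driver’s reaction time of roughly a second or more.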
These efforts and approaches are effective and important, as are Google’s autonomous vehicle research program, which has achieved 200,000 miles of driving without an accident, the Chinese car Hongqi HQ3, DARPA’s Grand Challenge races, the world’s first legislation on autonomous vehicles, passed in the state of Nevada in June 2011, and several other initiatives.
Yet the discussion on autonomous driving is only a precursor of a series of other more fundamental questions regarding capable and autonomous machines, questions which will be much more delicate to answer.
At the heart lies a fact that we all know and that Molly Wood points out – that computers are better at certain things than humans are.
But we know this only up to a point. It’s easy to admit that computers are better at solving differential equations, but harder to accept that they could beat humans in a quiz show like Jeopardy (IBM Watson, February 2011), or drive cars more safely and efficiently than we can.
And even if we accept this, it will all get really sensitive the day we decide that humans are no longer allowed to do certain things that computers do better.
I’m perfectly convinced that we will have roads where humans aren’t allowed to drive, because they will not be able to handle the highly efficient, high-speed environment managed by the computer systems on those roads.
That will hurt.
In a sense we are entering a time of transition when these issues will be very difficult. At what point are we sure that the computers are better? And what if they are much better but fail from time to time, with fatal consequences? How should we compare failures of computers to those of humans? Counting lives?
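One crude way to “count lives” – sketched here with openly invented numbers, since none of these figures come from the text – is to compare fatal failures per distance driven rather than individual incidents:

```python
# Toy comparison with hypothetical rates. The point is the method
# (normalizing failures by exposure), not the numbers themselves.
human_fatalities_per_bn_miles = 12.0  # invented figure
ai_fatalities_per_bn_miles = 2.0      # invented figure
miles_driven_bn = 3.0                 # invented annual mileage, in billions

human_deaths = human_fatalities_per_bn_miles * miles_driven_bn
ai_deaths = ai_fatalities_per_bn_miles * miles_driven_bn
print(f"human drivers: {human_deaths:.0f} deaths, "
      f"autonomous: {ai_deaths:.0f} deaths")
```

Even under such a comparison, the discomfort remains: a statistically safer system can still fail in ways that feel less acceptable than human error.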
A good example can be found in aviation. While automated systems are making air transport gradually safer, they also create frustration among pilots.
When Air France 447 crashed into the Atlantic in June 2009, killing all 228 people on board, several automated systems appear to have failed and disconnected themselves, leaving the pilots to handle an almost impossible emergency situation manually.
Part of this problem is that automated systems are still not intelligent enough to invent and attempt new solutions to unforeseen situations.
Part is the essential issue that when we let computers take over difficult tasks, humans that used to handle those tasks get less training and gradually lose their skills.
This could become a significant problem the day a large share of roads are reserved for autonomous vehicles only, while poorly trained humans are still required to drive on the few smaller roads that remain.
But if it will hurt some people to no longer be allowed to do certain things that computers do better, the real concern will come the day we have to decide whether to give intelligent computers the same rights as humans, in the name of equality.
This might seem distant, and today it’s a hypothetical question that comes down to the debate over whether human consciousness is a mysterious entity, separate from the matter of the body, or an emergent property of the complex interaction between the 100 billion neurons in the human brain – and, in the latter case, whether the same emergent property could also arise in a non-biological system.
Among those who have debated this intensively are Ray Kurzweil, who argues that consciousness is an emergent property of any system as complex as the human brain, and the philosopher John Searle of the University of California, Berkeley, who calls Kurzweil a materialist and is convinced that consciousness requires biology.
Personally I find Kurzweil’s view more likely to be true, and I expect artificial intelligence to become conscious at some point, forcing us to decide on the rights of intelligent machines.
But I also find it interesting to reflect upon when consciousness first emerged. Psychologist Julian Jaynes suggested in his 1976 book “The Origin of Consciousness in the Breakdown of the Bicameral Mind” that consciousness as we understand it – an introspective, self-aware way of reasoning – might be as young as 3,000 years.
Before becoming conscious, humans would simply have functioned well anyway, behaving more or less intuitively and obeying internal “voices” as commands telling them what to do – a hypothesis called bicameralism.
Although Jaynes’s ideas find little support today, it’s interesting to note how much we are able to do without being conscious of it – walking, or driving all the way from home to work, for example. Or how well some people act or play football, while reporting that they perform worse when they become conscious of what they are doing.
Jaynes’s thesis was that consciousness was a culturally evolved solution, but we might just as well imagine it having been supported by subtle biological changes in brain structure that made this immeasurable property possible.
In that case, finding the key to consciousness in artificial intelligence might turn out to be a real challenge. Even though modern research on brain simulation aims at self-organizing, chaotic systems that imitate the human brain, nature might have had to search long and hard to find the particular solution that opened the way to a conscious mind.
Yet I believe we will succeed in creating conscious AI with emotional capabilities, and that one day the discussions on autonomous vehicles will seem as minor as the question of whether to permit washing machines.
On the other hand – getting there slowly, step by step, is absolutely essential.