Here’s a really good piece on the difficulty, and the importance, of ethics for machines — robots, autonomous cars, weapons, and similar systems powered by artificial intelligence: Moral Machines by Gary Marcus, Professor of Psychology at N.Y.U.
Prof. Marcus argues that once autonomous cars become so much better and safer than human drivers that we prefer them on moral grounds, that moment
…will signal the beginning of another [era]: the era in which it will no longer be optional for machines to have ethical systems.
He then points out how difficult it will be to build ethical systems — clearly not as easy as implementing the three famous laws of robotics formulated by Isaac Asimov. On the other hand, as philosopher Colin Allen puts it: “We don’t want to get to the point where we should have had this discussion twenty years ago.”
Prof. Marcus’s elegant conclusion:
What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.
His piece highlights something I’ve tried to emphasize in an earlier post: when looking at the impressive opportunities that accelerating technological development offers us, it is essential to also consider how culture and human values have evolved over time, and how fundamental they are to building something prosperous from our technology.
It’s easy to forget these values when dazzled by the power of technology, but tech in itself will take us only so far. Or indeed nowhere.