We have a choice each time we automate

(This article was originally published in Swedish in the daily Svenska Dagbladet on April 23, 2023.)

The development of AI is currently progressing at an unprecedented pace. In the colorful debate that swings between enthusiasm and alarm, however, the most relevant question is seldom asked: what do we want with AI and automation?

Perhaps not unexpectedly, it is in columns on culture and art that the question is sometimes addressed, and then from two perspectives: one more philosophical – whether humans have any influence over technological progress – and one more practical – what happens if we automate away all jobs and all knowledge.

The first perspective is perhaps the least explored, but two authors who have successfully analyzed it in depth are Kevin Kelly and Steven Johnson. In two separate books from 2010, they describe technological development as a kind of network in an inexorable movement where inventions do not occur by chance but, on the contrary, are often made in several places simultaneously, independently of each other.

Eight billion brains that solve problems daily, large and small, can be said to be the fuel in the machinery. And the movement cannot be stopped unless eight billion brains stop thinking, which will not happen.

It is difficult to predict where the sum of all solved problems will lead us. And even though shared problems such as the climate crisis in some ways provide a direction, development seems to have its own structure. At every moment, it appears to be most influenced by what is within reach – the adjacent possible. Even profit does not seem to have a decisive influence on what is invented.

Of course, nobody can know whether technological progress would have looked similar if we replayed history from the beginning, but Kelly’s and Johnson’s analyses suggest that it would.

In other words, there is much to suggest that we could not have refrained from inventing AI.

The second perspective – what happens if we automate away all jobs and all knowledge – is more tangible but, on the other hand, not new. A well-founded concern about how automation affects jobs has been with us since the industrial revolution in the late 18th century, with the Luddites in the early 19th century as the most famous example.

Oxford researcher Carl Benedikt Frey argues convincingly in his book The Technology Trap (2019) that we are in many ways experiencing a situation similar to that of the industrial revolution, in contrast to the more prosperous post-war period. He emphasizes that several active political measures will be required to balance the effects of automation if we want to reap its potential for increased productivity while avoiding a society in turmoil. But as we can all see, politicians are not even in the starting blocks for such a conversation.

When it comes to automating away all knowledge, on the other hand, there is a difference compared to 200 years ago. All automation causes us to lose competence, experience, and human structures alike. But while the consequences of automating well-defined manual tasks are manageable (think elevator operator), the risks of today’s automation are more complex.

Ask any writer or artist why they write or create. Somewhere, the answer will be about understanding. When we create something, we compile our knowledge and experience and let it take on a new form in a new human context.

Despite differing opinions among experts and researchers, there is very little evidence to suggest that today’s AI has any form of consciousness. What generative AI accomplishes is to imitate human behavior – not just human knowledge, but also our way of reasoning about knowledge and of combining existing knowledge in new ways. It is impressive, and at the same time difficult to grasp, because it is unclear to us what can be achieved with the principles generative AI is based on – pattern recognition and statistical correlation – when the data sets are very large.

As Arthur C. Clarke put it: “Any sufficiently advanced technology is indistinguishable from magic.”

But without consciousness, there is no subjective experience, and no ability to build reasoning on a unified self over time. Thus there is no genuine understanding of cause and effect, and no ability to reason about issues such as relationships, ethics, and morality. Nor is there any ability to reason about knowledge in a new human context that has not been described before, or to value human contact.

When we automate knowledge with generative AI, we therefore make ourselves very vulnerable. And it is not just the discomfort of people becoming redundant; there is also a tangible risk for companies that prioritize a crude profitability calculation.

Fewer people means a reduced ability to imagine how knowledge needs to take a new form when the world and the people in it change. This poses a significant risk of losing quality and strength in everything from innovation to user experience and branding. In short, a business risk.

When used correctly, AI can instead complement us humans. We can find this balance by focusing not on what we do – an AI will do that too – but on what and who we are, which an AI does not come close to.

Used correctly, AI can help us become more energy- and resource-efficient. Taken together, this is one of the most important things we can do for increased sustainability and reduced greenhouse gas emissions.

Used correctly, AI can help us become more productive and perform work that would otherwise not be done – because there is hardly a shortage of work that needs doing in the world. In software development, we already see this: there are enormous opportunities for increased productivity that can make software that is expensive, and sometimes poor, both cheaper and better. And software that we need but that costs too much could finally be developed at a reasonable cost.

All of this requires political action, increased transparency and openness, responsible, reliable and secure AI, and much more. But the important lesson is that every time we automate knowledge, we must assess what is lost. And even the most brutal capitalist who only thinks about efficiency and profitability will discover that it is a very expensive and painful mistake to overlook the need for people.

Only by understanding what AI is, and what we want with AI, can we find the nuances between warning signals and embellishments, between regulation and free rein.

Mats Lewan, journalist, author, engineer.

§

References:

Kevin Kelly, What Technology Wants, Viking Press, 2010.

Steven Johnson, Where Good Ideas Come From, Riverhead Books, 2010.

Carl Benedikt Frey, The Technology Trap: Capital, Labor, and Power in the Age of Automation, Princeton University Press, 2019.

