This summer’s reading has been Thinking, Fast and Slow by the Israeli-American psychologist and winner of the Nobel Memorial Prize in Economic Sciences, Daniel Kahneman.
A great read (although a little heavy to get through from start to finish in a short time).
The book describes the brain’s two ways of thinking: the faster, more intuitive and emotional ‘System 1’, as Kahneman calls it, which incessantly interprets impressions and makes associations, and the slower, consciously controllable and more rational ‘System 2’, which solves problems, allows us to focus and to control ourselves, but which also requires significant effort when activated.
The message of the book is that we tend to rely too much on human judgment, frequently based on intuitive conclusions served by System 1 — conclusions that System 2 often lazily accepts, instead of activating itself to assess them rationally.
For me, however, the book brought a couple of other thoughts. One was that System 1’s constant search for and recognition of patterns is reminiscent of one idea of what the brain’s basic algorithm might be. Author and entrepreneur Ray Kurzweil has suggested that pattern recognition is what the brain is in fact engaged in, at levels from dots and dashes all the way up to abstract phenomena like irony.
Kurzweil presents this idea in his book How to Create a Mind (2012), where he calls it the Pattern Recognition Theory of Mind (read more in this earlier post, where I also note that Kurzweil is now Director of Engineering at Google, working on machine learning).
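To make the hierarchy idea a little more concrete, here is a deliberately tiny Python sketch. It is my own illustration, not Kurzweil’s actual model, and the Morse-code example and the names in it are invented for the purpose: recognizers at one level fire on dots and dashes, and a recognizer one level up fires on the pattern of their outputs.

```python
# A toy illustration (mine, not Kurzweil's model) of hierarchical pattern
# recognition: low-level recognizers label raw symbols, and a higher-level
# recognizer fires on the pattern of their outputs.

LOW_LEVEL = {'.': 'dot', '-': 'dash'}        # level 1: dots and dashes
LETTERS = {                                  # level 2: letters built from level-1 output
    ('dot', 'dash', 'dash', 'dot'): 'P',     # .--.
    ('dash', 'dash', 'dash'): 'O',           # ---
}

def recognize(signal):
    """Run a string like '.--.' through the two-level hierarchy."""
    level1 = tuple(LOW_LEVEL[s] for s in signal)   # recognize the primitives
    return LETTERS.get(level1, '?')                # recognize the higher-level pattern

print(recognize('.--.'))   # P
print(recognize('---'))    # O
```

Kurzweil’s point, as I read it, is that the same recognize-and-pass-upwards step, stacked many levels deep, is what takes you from strokes to letters to words and eventually to something as abstract as irony.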
I was also struck by the idea that the image of the two systems could help when trying to imagine what super-intelligence might be like (which I discussed in this post). Supposing that machines will one day, in the not too distant future, achieve human intelligence and consciousness, which I believe is reasonable (by the way, have a look at this research in which an AI system was IQ-tested and judged to have the intelligence of a four-year-old), then they will soon afterwards become super-intelligent, although that might be difficult to comprehend.
But try to imagine the associative power of System 1, constantly tapping into years of experience of different patterns, phenomena, objects, behaviors, emotions etc., and then imagine the same kind of system tapping into a much larger quantity of stored data, performing associations at significantly higher speed.
Then imagine a System 2 able to assess the input from such a System 1 on steroids, and able, for example, to perform multi-dimensional analysis: the same kind of classic sorting we do when we picture a phenomenon in four quadrants, with two axes defining two different variables (like this one), except that a super-intelligent System 2 would do the same thing with a thousand variables.
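To make that contrast concrete, here is a minimal Python sketch of my own; the feature names, values and thresholds are invented for illustration. Sorting something into a quadrant just means asking, for each axis, which side of a cut-off it falls on, and exactly the same operation with a thousand axes produces 2^1000 regions, far beyond anything we can picture.

```python
# A minimal sketch of the "four quadrants" sorting generalised to n variables.
# The example values and thresholds are invented for illustration.

def region(point, thresholds):
    """Label which side of each threshold the point falls on.
    With 2 variables this names one of 4 quadrants; with n variables,
    one of 2**n regions."""
    return tuple('+' if value >= cut else '-' for value, cut in zip(point, thresholds))

# Two variables, say importance and urgency, each split at 0.5:
print(region((0.8, 0.2), (0.5, 0.5)))   # ('+', '-'): one of the four quadrants

# Mechanically identical with a thousand variables, but the number of
# distinct regions explodes to 2**1000:
n = 1000
print(f"regions with {n} variables: {2**n:.3e}")
```

The mechanics do not change as the number of variables grows; what changes is that the number of regions grows exponentially, which is why we humans stop at two axes.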
Such an intelligence would probably show forbearance with our limited capacity to see the whole picture, but hopefully it would also have sympathy for our capacity to enjoy life despite our limitations.
Mats, thanks for the book reference, I’ve added it to my list. You no doubt read “Blink” by Gladwell. I wonder where Sys 1 and 2 fit in with Blink’s premise. Could what Gladwell explores be a System 0?
I’ve been doing a fair amount of machine learning work recently and so far, from what I’ve seen, ML “brains” or AI lack what I’ll call the “serendipity effect.” There’s something about a biochemical computing engine that allows an almost chaotic or random mishmash of information streams, such that new, never-before-imagined thoughts spontaneously arise from the neuronal stew that is our minds. Creativity: where does it come from? Will an AI mind ever really be creative, or is it stuck being a parody of human intellect? It may become the ultimate pattern-matching engine, able to identify the faintest of patterns in a nanosecond. But that still sounds like a machine, not a mind.