The rise of the machines

There is a great deal of commentary about the growing importance of artificial intelligence, or AI, especially in business circles. To some extent this is a self-fulfilling prophecy — if people think something will have a seminal effect then it probably will. But if the supposed commercial benefits are significant, the dangers are potentially enormous.

Elon Musk, a person intimately familiar with AI, frets that it could become ‘an immortal dictator from which we would never escape’, and suggests that it will overtake human intelligence within five years. He, and many others, envisage a ‘technological singularity’: a point when machine intelligence surpasses human intelligence and machines improve themselves at an incomprehensible rate.

At one level these claims are nonsense that reveals just how degraded our understanding of ourselves has become. The first and most obvious problem is that AI can never replicate the complexity and range of human intelligence. At best, it can improve on a small part of our thinking: computation. But computation is only one part of our cognition, and cognition is only one slice of the range and depth of human thought.

There are other errors, which perhaps suggests that a little more intelligence is required when thinking about AI. Computers do not have intentionality (will), which is self-evidently necessary for thinking. They have no sense of their own mortality. Anything that involves our understanding of qualities rather than quantities, such as the beauty of a painting or a piece of music, is outside the range of AI, or any computer. Computers cannot think, and to call what they do ‘intelligence’ is only to confirm how narrow our measurements of thinking are (IQ measures, basically).

Then there is the problem of consciousness: humans’ ability to be aware of their own thoughts and of themselves. It is possible to program software that can continuously produce new software configurations in response to the computer’s interaction with its environment. That is what using AI to get a computer to ‘learn’ means. But no machine will ever be aware of the experience of having learned. It is a machine. It is not merely lacking in self-consciousness, it is inanimate.
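
To see how modest that kind of ‘learning’ is, consider a minimal sketch in Python (the environment, hidden target and numbers are invented purely for illustration, not drawn from any real system): a program that nudges a single setting up or down according to feedback.

    import random

    # Toy illustration only: the program repeatedly adjusts one numeric
    # setting in response to feedback from a made-up environment. In this
    # narrow sense it 'learns', yet nothing in it is aware of having
    # learned anything.

    def reward(action, hidden_target=0.7):
        """Hypothetical environment: scores an action, with a little noise."""
        return -(action - hidden_target) ** 2 + random.gauss(0, 0.001)

    def learn(steps=2000, step_size=0.01):
        action = 0.0
        for _ in range(steps):
            # Probe both directions and move whichever way scored better.
            if reward(action + step_size) > reward(action - step_size):
                action += step_size
            else:
                action -= step_size
        return action

    print(round(learn(), 2))  # settles near the hidden target (about 0.7)

Run it and the number drifts towards the target; that is the full extent of the ‘experience’ involved.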

 

"The danger is that it will lead to a massive degradation of our humanity, reduce us to nothing but industrial outputs, transactions and binary behaviours."

 

Human self-awareness is impossible to deal with in mathematical terms because it is an infinite regress. There will never be an algorithm that plots self-consciousness because it could never include the awareness of the algorithm itself, which must always lie outside.

Despite all these obvious absurdities, there is no doubt that AI will become far more intrusive because it can be applied to repetitive industrial production. As bored workers throughout the ages will attest, self-awareness is often a disadvantage in the workplace, not an asset. AI machines do not have that problem.

AI can be readily applied to market behaviour, which works off a simple binary: buy/not buy; sell/not sell. That is what turned the social media companies, which surveil our every move, into global behemoths. Dubbed surveillance capitalism, it works because the human behaviour involved is binary. AI is also being applied to war, another binary: kill/not kill (an effort appallingly called ‘human augmentation’).

Yet apply AI to something more complex, like writing a poem, and the outcome will be very different. It would take a legion of good poets, doing the programming, just to get an AI computer to generate bad poetry. You may as well hire a poet instead; they should be cheap.

Proponents of AI like to claim that it will improve humanity. It is more likely that the opposite is true. The danger is that it will lead to a massive degradation of our humanity, reducing us to nothing but industrial outputs, transactions and binary behaviours. Such computer technology may help us produce more stuff to consume, kill our enemies more efficiently, or create more financial activity, but it will come with a terrible price.

The enormity of the threat was described with startling prescience by CS Lewis in his book The Abolition of Man (the abolition of man is exactly the risk). Lewis said that human nature would be the ‘last part of Nature to surrender to Man’. That is the very thing that AI proponents are aiming at in their efforts to create what they call human 2.0. He wrote: ‘The battle will then be won ... but who, precisely, will have won it? For the power of Man to make himself what he pleases means, as we have seen, the power of some men to make other men what they please.’ As Lewis prophetically explained, technology will not liberate humans, it will enslave and diminish them, except for the select few.

Ignoring the human will also lead to catastrophic breakdown of human systems at some point. Witness the fate of Long Term Capital Management (LTCM), a hedge fund in the 1990s that used an algorithm for pricing risk, the Black-Scholes option pricing model, to make large investments. One of LTCM’s directors, Myron Scholes, won a Nobel Prize for it.
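
For readers curious about the mathematics, the standard Black-Scholes formula for a European call option fits in a few lines of Python. This is only a sketch of the textbook formula with illustrative numbers; it says nothing about how LTCM actually deployed the model.

    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        """Cumulative distribution function of the standard normal."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def black_scholes_call(spot, strike, years, rate, volatility):
        """Black-Scholes price of a European call option.

        Assumes constant volatility and smoothly behaved markets,
        assumptions that can break down when people behave unpredictably.
        """
        d1 = (log(spot / strike) + (rate + 0.5 * volatility ** 2) * years) \
            / (volatility * sqrt(years))
        d2 = d1 - volatility * sqrt(years)
        return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

    # Illustrative values only: a one-year at-the-money call, 20% volatility.
    print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.2), 2))  # about 10.45

The neatness of the formula is part of the seduction: a few symbols appear to tame risk, until the market refuses to behave like the model.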

The model’s mathematics were brilliant, but at one point it went so badly wrong that the losses were enough to almost bring down the entire Western banking system. That is what happens when you try to model inherently unpredictable human behaviour. The catastrophe required the then chairman of the US Federal Reserve, Alan Greenspan, to organise a massive bailout, and demonstrated how dangerous self-impelling computers can be. Musk is right. AI represents perhaps the biggest danger humankind has ever faced.

 

 

 

David James is the managing editor of personalsuperinvestor.com.au. He has a PhD in English literature and is author of the musical comedy The Bard Bites Back, which is about Shakespeare's ghost.

Main image: 2001: A Space Odyssey, 1968. An astronaut looks at his reflection in a camera. (Photo by Metro-Goldwyn-Mayer/Getty Images)

Topic tags: David James, climate change, COP26, economic growth, finance

 

 


Existing comments

...reminds me of the Little Britain comedy skit where the girl answers the clients' questions every time with "...computer says No." All too frequently we trust on-line content as resources and recite it verbatim because "it's on the internet..." which somehow makes it an authority, however anonymous or vague it may be. I have observed "Doctors" make statements relying on fact(oid)s with little or no citation and most likely written in a manner to persuade confidence in the reader; a Wiki-ocracy: knowledge is power and selecting the knowledge we prefer is empowering - even if delusional; is that not artificial intelligence of a sort? Possibly the greatest danger in the trust in AI is the assumption that the base data on which the intelligence is based is either correct in the integrity of its logic or that history cannot be altered. Every day "cancel-culture" and social correctness attempts to re-write history or eliminate uncomfortable truth; this isn't so much "unpredictable human behavior" as unpredictable machine outcomes when the computer has to adjust its thinking to potentially illogical inputs; the fallibility of human forethought and deliberate corruption of hindsight. Maybe the greatest fear we have to cope with in relation to AI is knowing we can't mislead it.


ray | 02 February 2022  

I'll consider believing "AI represents perhaps the biggest danger humankind has ever faced" when my version of Windows 11 works properly.


Daniel O'CONNELL | 02 February 2022  

Daniel, perhaps fun to consider with the AI topic, but in the late 1980s Windows (even before V 3.11) had a little assist program within itself called Dr Watson. It was used to help diagnose buggy operation, and the user selected when to run diagnostics. Later versions still have a Dr Watson embedded but it checks itself automatically; if you go to DOS you might find it lurking in the shadows...dunno about Win11. The initial algorithm to make it automatic was time-based on the PC clock, logical and simple BUT the foresight was lacking when it faced the Y2K bug.
I'm not telling you this to trouble-shoot your PC problems but to demonstrate how much we unknowingly rely on programmers having intuition to ensure machine learning is robust over a long period of time; they won't always get it right. Dumb stuff still happens; you'd think something as reliable as clockwork based purely on time would be bullet-proof, but the human element is we manipulate "time" at least twice a year with daylight saving, skipping an hour. We also travel across time zones and may skip multiple hours...and if your time-based algorithm is related to some human life support system it might just go wrong. But blame AI.


ray | 02 February 2022  

I was just thinking. Was the Victorian government's response to the COVID outbreak, supposedly based on sound medical advice, actually just based on a computer program devised by some IT expert?


Brian Leeming | 02 February 2022  

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


Grant Castillou | 06 February 2022