The great AI misdirection

How we define and use, or misuse, words can have cataclysmic, long-lasting effects. In the 1980s, most Western governments adopted policies of ‘financial deregulation’, apparently failing to notice that the phrase itself is a contradiction, a logical error. Finance is rules, so it cannot be deregulated. Scroll forward four decades after a vast number of financial players invented their own forms of money – that is what ‘deregulation’ turned out to really mean – and we are locked in endless confusion about what money actually is. Once you start in the wrong place, you can never find your way back. It is reminiscent of being lost in Dante’s dark forest at the beginning of The Divine Comedy.

A similar semantic error is happening with Artificial Intelligence (AI). Unless corrected, it, too, will have dire long-term consequences. Computers cannot create human thinking and intelligence, artificial or otherwise. Neither will they become ‘smarter’ than us. They can only scan data and identify patterns, which allows for faster and better deductions by humans. It would be more accurate to rename AI ‘Artificial Deduction Simulator’ (ADS).

We are being repeatedly warned that AI could end the human race unless controlled, and it certainly sounds impressive when former Google scientist Geoffrey Hinton says he ‘suddenly switched’ his views on whether AI ‘is going to be more intelligent than us’, resigning his position as a consequence. Sam Altman, chief executive of OpenAI, is also sounding the alarm, saying his work at OpenAI will lead to most people being worse off, and sooner than ‘most people believe’. Elon Musk is calling for a six-month moratorium because of the dangers; he co-signed an open letter with Apple co-founder Steve Wozniak to that effect.

It is true that AI technology will be economically and socially disruptive, as many new technologies are. It will especially affect some areas of repetitive work that require accuracy rather than thoughtfulness. But ask the question: ‘What does “more intelligent” mean?’ The response from technologists will likely be that AI will be able to process information billions of times faster than us. But that is not intelligence. It does not involve the creation of meaning, which, amongst other things, requires self-awareness.

The central problem is that AI proponents, like financiers with ‘deregulation’, have been captured by their own metaphors. Unfortunately, because they are respected as ‘experts’, they go on to capture the wider population with those same metaphors. 

If AI represents a great peril to the human race, it will not be because AI will become smarter than us. It will be because we aren’t sensible enough to realise that a computer cannot emulate the full range of human thinking. If we are foolish enough to let AI control physical infrastructure, or worse, weapons systems, we could do ourselves great harm. But it will be because of our own foolishness, not an inevitable consequence of the march of technology.

The mistake can be seen by asking a few basic questions. Can an AI computer doubt itself? What with? Yet humans are capable of self-doubt; indeed, it is crucial to intellectual rigour. Can AI come up with a postulate from sketchy or incomplete data? No: it is always a case of garbage in, garbage out. But humans can, and do, routinely. Unlike AI, which can only scan data sets for the purposes of deduction, humans can induce: they come up with ideas or theories by making connections from very incomplete information.

 

Humans have imaginations, which allow them, for example, to have empathy for the situation of others. What is the software code for imagining? Humans can have an understanding of truth, in part because that usually involves some sort of moral position, notably that dishonesty is wrong. Computers have no such constraints: they can spit out false information as easily as truths.

There is another deception. The purpose of thinking is to create meaning. It is not to demonstrate that you are ‘intelligent’, which is something you might assess after the fact, such as with an IQ test. So even the use of the word ‘Intelligence’ in AI is something of a misdirection. 

The whole thing is an exercise in ‘personification’, a literary term describing how writers invest non-sentient objects with human qualities they cannot have (such as ‘depressed clouds’ or a ‘cheerful sun’). The AI experts are personifiers, investing a machine with human capabilities.

It is hard to be optimistic that these errors will be corrected. It is too easy to adopt the latest metaphors and run down a false path. Neither is the quasi-scientific nonsense confined to thinking about AI. Witness, for example, the sloppy thinking behind the current efforts by the Federal government to legislate against ‘harmful misinformation and disinformation in Australia’. Information is passive: it cannot meaningfully be the subject of legislation because it is only acted upon; it does not do anything. What the government really wants to do is prevent people from conveying meanings that it does not approve of. But in order to conceal that, it characterises consumers of online material as robots having data fed into them, not as humans capable of vastly different responses and interpretations. That this language subterfuge seems to go largely unnoticed does not augur well.

We seem to be in a headlong rush to rob ourselves of our own natures. The more that happens, the more likely it is that we will end up becoming slaves of what we have created.


David James is the managing editor of personalsuperinvestor.com.au. He has a PhD in English literature and is author of the musical comedy The Bard Bites Back, which is about Shakespeare's ghost.


Topic tags: David James, AI, Finance, Humanity


Existing comments

How different is simulation from stimulation?

How different is an artificial deduction simulator from a human deduction stimulator? As different as between two humans, one of whom is a psychologically integrated emotion stimulator and the other a psycho- or socio-pathic emotion simulator? Cannot a psycho- or socio-path learn to understand the logic of an emotion and copy it so as to hide in plain sight?

Sociology holds that an individual only knows how to be human from exposure to society, feral children and adults being examples of what happens when the exposure is delayed by only a few years from birth. The only difference between psychological integration and psycho- or socio-pathic imitation is habituating to some rules and heuristics.

Within a domain of knowledge and action which is founded upon strict rules, an AI which is properly fed with the relevant rules and case histories of their application can beat a chess grandmaster.

Within a domain of knowledge and action which is founded upon loose rules and heuristics such that humans can disagree upon the outcome, how can you prove that the AI did not generate a valid outcome in deciding whether or not to vote for the Voice?


s martin | 18 August 2023  

What makes us so human that we can look down on (currently primitive, but soon to get better, versions of) AI as lesser intelligence?

Faith was reckoned to Abraham. He had no independent ability to know he had faith.

Atheistic evolutionists won't have this problem but Christians, whether evolutionist or not, might want to wonder why apes exist. Is it a dig from God that "there, but for the grace of God...."? Upright, bipedal, binocular, prehensile, the package is almost ready to go build a civilisation. All that's missing is a bit of brain, except what about those so-called humans who are born without a little bit of brain and have less agency than the so-called lower primates?

It must be the soul which reckons us as human. Without it, we'd only be amorally intelligent, like an AI, able to scan and aggregate information but compose only prudential action. We might be able to simulate morality, but we could not stimulate an affective understanding of morality because that requires the notion of offence against God.

The next time you go to a zoo, enjoy the humour of the ape as a finger of God pointing at you.


s martin | 18 August 2023  

Next time you go to the zoo, ponder the morality of your own soul-possessing species in its arrogant and widespread disregard for all other life forms, including members of its own species, except insofar as they can be made to serve H. sapiens. 'Morality' is nothing but the rules of currently socially acceptable behaviour, and from where I sit I see little evidence that our species has anything worth crowing about.


Ginger Meggs | 20 August 2023  

It's a bit late to criticise 'serve H. sapiens'. In fact, it's impossible to criticise 'serve H. sapiens'. Everything on the planet serves the human species physically, intellectually or emotionally.


Beauty is said to serve humanity. If you leave a species or a location alone because it has a uniqueness which calls to be preserved unchanged, Beauty is being called upon by humans to serve themselves. As a concept in the eye of the beholder, beauty is whatever the human brain perceives to be beauty. As for arrogant disregard, most extinctions were caused by Evolution's disregard of what humans would almost certainly have considered beautiful.

What zookeeper would have wanted dinosaurs to become extinct? Compared to them, the staple drawcards of gorillas, big cats or Komodo dragons are piffle. Whale-watching? Piffle. Try plesiosaurus watching on Loch Ness. All those tourists would be good for the Scottish rupee after independence (but that's another story).


s martin | 21 August 2023  

A highly astute analysis that directs responsibility for technological development and usage to where it belongs: with its inventors, we humans.


JRD | 18 August 2023  

On that, John, we can be in furious agreement!


Ginger Meggs | 18 August 2023  

Careful, Ginger, only fanatics agree furiously!


John RD | 25 August 2023  

A very valuable analysis which I've passed on to others for their interest.


Len Puglisi | 18 August 2023  
