The future of work is transhuman


The push to introduce Artificial Intelligence (AI) into many areas of modern life is an existential threat to the human race. Not because computers will replace human intelligence, which can never happen, but because the aim is to convince us that human intelligence does not exist. To avoid being tricked into technocratic servitude, it is vital to assert how wondrous, mysterious and polyvalent the human mind is.

AI and its offshoots, such as transhumanism, will continue to have a significant impact on the job market by automating mechanistic activities in the service sector, including in highly paid areas like law, education, journalism, graphic design, even music.

Tertiary industries that rely on deduction will be particularly affected, leading to the kind of transformation that has already occurred in primary and secondary industries, where efficiency improvements have radically changed the living standards of the world’s population. But any job that involves interaction between self-aware humans will not be threatened, because computers do not have, and never will have, consciousness or the ability to relate to people that consciousness brings.

AI is self-generating software that is capable of continuously adapting by interacting with the data it receives. The deception is that this self-referential quality is the same as human awareness, which is also self-referring (‘I am aware of myself: I can see me’). Thus technocrats talk about a machine ‘learning’. To state the obvious — and the obvious needs to be continually repeated to counter this dangerous sophistry — no machine can apprehend its own existence. Machines are not even organic. When a human learns something, they are aware of themselves having found something out. A machine can only be created, by self-aware humans, to be a simulacrum of that.
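
For readers unfamiliar with how such systems work, the following sketch is a much-simplified, hypothetical illustration (not drawn from any particular AI product) of what machine ‘learning’ amounts to: a handful of numbers nudged repeatedly until an error score shrinks. The data and learning rate are invented for the example; the point is that nothing in the loop apprehends anything.

```python
# Illustrative sketch only: 'learning' as the repeated adjustment of numbers.
weights = [0.0, 0.0]  # the entire 'model': just two adjustable numbers
data = [((1.0, 2.0), 5.0), ((2.0, 1.0), 4.0)]  # made-up (inputs, target) pairs

for _ in range(1000):  # 'training' is nothing more than repeated adjustment
    for (x1, x2), target in data:
        prediction = weights[0] * x1 + weights[1] * x2
        error = prediction - target      # how far off the numbers currently are
        weights[0] -= 0.01 * error * x1  # nudge each number to shrink the error
        weights[1] -= 0.01 * error * x2

print(weights)  # ends up near values that fit the invented data;
                # at no point is anything aware of anything
```

Scaled up to billions of such numbers, the process remains the adjustment of quantities, not an awareness of them.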

To say, as many proponents of AI claim, that it is just a matter of increasing computational power and it will become possible to give computers a subjective, internal life is to profoundly misunderstand what a subjective internal life is. As one analyst commented, he would only be concerned about conscious machines if ‘these machines start worrying that their parts might be wearing out’. These repeated assertions by AI proponents, who see themselves as being at the vanguard of science, are actually deeply unscientific — immature thinking untroubled by empirical or philosophical rigour. Perhaps that is why they so often dress like teenagers.

Consider the logic of it. When someone, say Elon Musk, has a clever, high-IQ thought, then he will be aware of having had that clever, high-IQ thought. So who is watching that thought? It is Elon Musk’s consciousness, or core being. Can a machine, made up of inert components, ever watch its own workings? What with?

That is only the start of the problems. The mind is experienced by us as a single thing. The philosopher Descartes said: ‘I am only a thinking thing; I cannot distinguish in myself any parts.’ Our minds have a unity. The mind has no components, whereas computers consist of nothing but components. Moreover, those components mostly operate serially, one function after another. The human brain tends to operate with parallel networks, everything at once. The two are not interchangeable.

Another issue is that humans can think effectively with loose or incomplete information. The computer scientist John von Neumann noted that the human nervous system is very imprecise, and ‘no known computing machine can operate reliably and significantly on such a low precision level’. Yet humans can form ideas from vague or imprecise elements; that is largely what intuitive or inductive reasoning is. With computers it is always a case of rubbish in, rubbish out, which is worth remembering when computer modelling is preferred to good sense. 

Another problem is that maths, from which computer algorithms are derived, cannot model consciousness, because consciousness is an infinite regress: ‘I am aware/I am aware of being aware/I am aware of being aware of being aware’ and so on. To quote Hamlet, we are ‘infinite in faculty’.

As the historian of science Stanley Jaki noted, a ‘conscious man is a unity in which the potentially infinite sequence of acts of self-reflection do not signify distinct parts of a thinking apparatus … Consciousness is the perceiving field of qualitative differences as opposed to the quantitative structure of external things. Consciousness is also the matrix of experiences about the self, about the purpose in action, and about the meaningfulness of judgments … only man can abstract and rise to the level of universal concepts.’ 

Of most concern are the moral implications. Morality depends on having a conscience, which requires self-awareness. To believe that AI can replace human thinking is to take the ability to distinguish between good and evil off the table. Coding ethical precepts into computer algorithms is not a substitute. It is like saying that the Ten Commandments are really a still-living Moses instructing us on how to make moral decisions.

Physicalists, who believe that humans are nothing but matter and that the mind and the brain are the same thing, face unsolvable problems when it comes to AI. The human brain loses about 85,000 neurons each day yet continues to function; a computer will not work if a single part is lost. With computers, size equates with capacity, yet analyses of the brains of geniuses show no correlation between brain size and intelligence. These physical differences between computers and people do not augur well for attempts to create a computer/brain interface (such as Musk’s Neuralink).

AI and transhumanism will continue to transform economic life on the planet. Rather than trying to stop them, which will fail, the counterattack should instead be to repeatedly insist on the obvious: that the ‘I’ in AI is not human intelligence, and that the ‘humanism’ in transhumanism is not human. These aberrant ideas have a long history: similar claims were advanced 700 years ago by Raymundus Lullus in his Ars Magna. But opposing them has become crucial. It is not just a matter of logic or clear thinking; it is about defending our humanity against those who would degrade it.

The ultimate irony is that because our minds are so remarkable we are able to imagine impossible, science-fiction notions like AI and self-conscious cyborgs. But it is essential to remember that it is all fiction. Jaki writes: ‘The crucial issue in the man-machine relationship lies not in what machines can do but in the concepts that man forms about machines and about himself. As long as this is done with proper concern for man's uniqueness, there is no cause for alarm.’

David James is the managing editor of personalsuperinvestor.com.au. He has a PhD in English literature and is author of the musical comedy The Bard Bites Back, which is about Shakespeare's ghost.

Main image: Human hand reaching for robotic hand. (Getty Images)

Topic tags: David James, AI, Transhumanism, Future


Existing comments

Thanks David for that forthright piece. I find your assertions intuitively attractive and believe they may be right, but I am not sure you have made a convincing case. AI has come a long way since Jaki and the Bard, and, on the human side, we still have little understanding of what human consciousness is. Is it possible that machines will one day replicate consciousness? (I don't think so ... but I am not sure how to make the case without introducing theological arguments.)

On a lesser matter, perhaps you elide AI and transhumanism a little too much? Would it be helpful to clarify that AI is about pushing the limits of what machines can do, while transhumanism is about pushing the limits of what humans can do? But yes, the latter will be very dependent on the former.


Chris Mulherin | 16 February 2023  

"As long as this is done with proper concern for man's uniqueness . . ." (S Jaki)
Aye, there's the rub in a time when the very uniqueness that distinguishes humans
and its celebration by Sophocles and Shakespeare in the west's humanist tradition is radically contested.
In the same encomium from Hamlet quoted by David, we read in reference to man: " . . . in apprehension, how like a god."
Several Shakespearean commentators have observed the similarity of this exalted estimate to the Psalmist's: "You have made him little less than a god" (8:5), a line that influenced Michelangelo's daring depiction of Adam in his Sistine Chapel masterpiece.
Genesis's ". . . made in the image and likeness of God" sums up the uniqueness of human dignity:
we are beings with the attributes of intelligence and freedom, possessing creativity and the ability to love.
Where is the AI product of which this can validly be said?
Thank you for a very timely, stimulating and well-argued reflection, David.


John RD | 16 February 2023  

From my perspective nothing gets changed until it affects the educational class. For those of us who have been unfortunate enough to suffer long periods of unemployment, we can only look on this potential threat of AI overtaking jobs, and the fear of redundancies, with perhaps indifference or even disinterest. There is a sense of déjà vu around the issue: the actors change, the situation remains the same. The solutions always seem to be of a quick-fix nature, without addressing the ethics and deeper human questions of what it means to be human, and the role of meaningful work. If you've known the wound of rejection from being unemployed through illness or disability, you've 'been there and not done that' (work, that is). In the 80s and 90s, nurses who fought for a better career structure and better wages, and who became sick as a result, were often sidelined, and, as well as the recession we were meant to have, it was a case of bringing in overseas nurses to fill in the gaps. Nothing has changed post-Covid: there is a shortage of nurses because nurses have left because of enormous stresses, and the answer is to bring in nurses from Pacific nations rather than address issues of fairness and illness. So what is the difference with AI replacing workers? Not much.
From my perspective there may be a bigger outcry, because unfortunately not much gets changed unless it affects the middle class or educational classes. We have tolerated the existence of families where there are now a few generations of family members not working, and the social cost is enormous, but there are no marches in the streets.
We are all being conned into accepting high levels of unemployment, and unfortunately we are accepting that it's a case of being quick or dead.


Roz | 17 February 2023  

‘The very uniqueness that distinguishes humans…’? Aye, there’s the rub. Sure, unique in some aspects, but no ‘more unique’ than other forms of life in their own wondrous ways. ‘Little less than a god’? One would be hard-pressed to see evidence of that in the daily news. ‘Made in the image and likeness of God’? More likely the other way round given the age-old human propensity to create gods to fill gaps in knowledge and understanding. ‘Beings with the attributes of intelligence and freedom, possessing creativity and the ability to love’? Yes, perhaps, but so also are other species.

Time perhaps that we focused not on our ‘uniqueness’ but on what we share with the rest of Earth’s biota, and that is life itself. This is not just an academic game. The adverse effect of the hubris that inevitably goes with self-assigning some special ‘higher’ or ‘superior’ place to the species Homo sapiens or even some part of it is everywhere apparent.


Ginger Meggs | 26 February 2023