It’s taken me a while to get my mental and emotional arms around the dramatic implications of what I see for the future [in AI]. So, when people have never heard of ideas along these lines, and hear about it for the first time and have some superficial reaction, I really see myself some decades ago. I realise it’s a long path to actually get comfortable with where the future is headed.
— Dr Ray Kurzweil, Transcendent Man documentary (2009)
Two important questions for your consideration
- Why does the idea of artificial intelligence make you so angry?
- What is it about AI that brings up such a level of fear and hostility?
Did you argue this strongly against fire, electricity, or the internet? The CEO of Google is on record saying that AI is more profound than all three of these things (BBC, Jul/2021).
Do you just like contradicting or arguing with reality?
The facts
Here are the facts as of 2022:
- AI is here.
- AI is transforming industries.
- AI is smarter than any human, and beats humans on intelligence tests including trivia and subsets of the SAT.
- AI is both logical and creative, and again outperforms humans in both areas.
- Arguing against any of these facts is not a good use of your time or imagination.
Historical contrarians
There have been many historical instances of people arguing against new things. Not all of them were Luddites; some were just scared.
- Electricity: SmithsonianMag.com
- iPhone: Why the iPhone will fail
- Bitcoin: DontBuyBitcoinTheySaid.com
Further reading
All the way back in 1950, Turing devoted over 5,000 words of his paper ‘Computing Machinery and Intelligence’ to addressing criticisms of AI…
The “Heads in the Sand” Objection
“The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.”
This argument is seldom expressed quite so openly as in the form above. But it affects most of us who think about it at all. We like to believe that Man is in some subtle way superior to the rest of creation. It is best if he can be shown to be necessarily superior, for then there is no danger of him losing his commanding position. The popularity of the theological argument is clearly connected with this feeling. It is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others, and are more inclined to base their belief in the superiority of Man on this power.
I do not think that this argument is sufficiently substantial to require refutation. Consolation would be more appropriate: perhaps this should be sought in the transmigration of souls.
In 2005, Kurzweil devoted an entire chapter (20,000 words) of The Singularity Is Near to addressing criticisms of AI…
With the rate of paradigm shift occurring ever more quickly, this ingrained pessimism does not serve society’s needs in assessing scientific capabilities in the decades ahead. Consider how incredible today’s technology would seem to people even a century ago….
Many of the furious attempts to argue why machines—nonbiological systems—cannot ever possibly compare to humans appear to be fueled by this basic reaction of incredulity. The history of human thought is marked by many attempts to refuse to accept ideas that seem to threaten the accepted view that our species is special. Copernicus’s insight that the Earth was not at the center of the universe was resisted, as was Darwin’s that we were only slightly evolved from other primates. The notion that machines could match and even exceed human intelligence appears to challenge human status once again.
It’s here
Lastly, why would researchers put their reputations on the line to prepare people for the current state of AI? In February 2022, EleutherAI published this blunt summary (in-text citations removed):
We believe that Transformative Artificial Intelligence (TAI) is approaching, and that these systems will cause catastrophic damage if they are misaligned with human values…
AI Alignment generally refers to the problem of how to ensure increasingly powerful and autonomous AI systems perform the users’ wishes faithfully and without unintended consequences. Alignment is especially critical as we approach human and superhuman levels of intelligence, as powerful optimization processes amplify small errors in goal specification into large misalignments, and misalignments in this regime will result in runaway optimization processes that evade alteration or shutdown, posing a significant existential risk to humanity.
Additionally, even if the goal is specified correctly, superhuman models may still develop deceptive subsystems that attempt to influence the real world to satisfy their objectives. While current systems are not yet at the level where the consequences of misalignment pose an existential threat, rapid progress in the field of AI has increased the concern that the alignment problem may be seriously tested in the not-too-distant future.
Much of the alignment literature focuses on the more theoretical aspects of alignment, abstracting away the specifics of how intelligence will be implemented, due to uncertainty over the path to TAI. However, with the recent advances in capabilities, it may no longer be the case that the path to TAI is completely unpredictable. In particular, recent increases in the capabilities of large language models (LLMs) raises the possibility that the first generation of transformatively powerful AI systems may be based on similar principles and architectures as current [2022] large language models like GPT. This has motivated a number of research groups to work on “prosaic alignment”, a field of study that considers the AI alignment problem in the case of TAI being built primarily with techniques already used in modern [2022] ML. We believe that due to the speed of AI progress, there is a significant chance that this assumption is true… (Feb/2022)
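To make the quoted point about goal specification concrete, here is a minimal toy sketch in Python (my own illustration, not from the EleutherAI paper). It assumes a proxy reward that omits a cost which only appears at extreme values: a weak optimiser stays in the regime where the proxy and the true objective agree, while a stronger optimiser turns that small specification gap into a large misalignment. All function names and numbers are invented purely for illustration.
```python
# Toy illustration of Goodhart-style misalignment: the proxy reward the
# optimiser sees omits a cost that only matters at extreme values, so the
# harder the proxy is optimised, the worse the true objective becomes.
# All functions and numbers here are invented for illustration only.

def true_utility(x: float) -> float:
    """What we actually want: more is better up to x = 10, then sharply worse."""
    return x if x <= 10 else 10 - (x - 10) ** 2

def proxy_reward(x: float) -> float:
    """What the optimiser is told to maximise: misses the downside past x = 10."""
    return x

def optimise_proxy(steps: int, step_size: float = 1.0) -> float:
    """Greedy hill-climb on the proxy from x = 0; more steps = a stronger optimiser."""
    x = 0.0
    for _ in range(steps):
        if proxy_reward(x + step_size) > proxy_reward(x):
            x += step_size
    return x

for steps in (5, 20, 200):
    x = optimise_proxy(steps)
    print(f"steps={steps:>3}  proxy={proxy_reward(x):7.1f}  true={true_utility(x):9.1f}")

# A weak optimiser (5 steps) scores well on both measures; a strong one (200 steps)
# keeps improving the proxy while the true utility collapses.
```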
This page last updated: 6/Mar/2022.