Daily (Non) Inspiration: “AI. Now I am become destruction. A destroyer of reality at scale.”


“AI. Now I am become destruction. A destroyer of reality at scale.” – Futurist Jim Carroll

J. Robert Oppenheimer, the ‘father’ of the atomic bomb, famously recalled that upon witnessing the first test of the weapon, a phrase from the Hindu sacred text the Bhagavad-Gita ran through his mind:

“Now I am become Death, the destroyer of worlds.”

He realized that he and his colleagues had unleashed a horrific weapon and technology onto the world, and could only hope that society had the guardrails in place to prevent its misuse. Fortunately, experience has shown that since then we’ve been able to avoid the worst – the societal guardrails have held. So far.

No such guardrails exist today as the sophisticated tools of rapidly maturing AI technology are unleashed into our world.

There is nothing inspirational whatsoever about today’s post, and I thought long and hard about whether I should go down this path with the topic – I have been thinking about it for many months now. I finally decided to mock up an image and put it out to my followers on Mastodon for a vote.

Well, a 78% “No” vote made the decision for me.

Society is not ready for what is coming.

Today, we can barely manage the flood of false information generated by humans – how are we ever going to deal with it when it is generated at scale? Like many others, I have been watching with increasing alarm the sudden, rapid arrival of all these new A.I. technologies. It’s everywhere – and every tech company is rushing to get involved. The result will be a massive rush to push products out the door, with little regard given to safety, ethics, and the potential for destructive misuse.

Over on Mastodon, @jacqueline@chaos.social (whoever she might be) summed up the situation perfectly.

Think about her comment:

society: damn misinfo at scale is getting a bit out of hand lately. seems like a problem.

tech guys: i have invented a machine that generates misinformation. is that helpful?

People are worried – rightfully so – about how information networks like Facebook, TikTok, Twitter, and others have been weaponized by various factions on the left and the right; by political parties and politicians; by sophisticated public relations campaigns and companies. Information is coming at us so fast and so furious that many people have lost the simple ability to judge what is real and what is not.

And the rush to capitalize on the newest iterations of AI technology – barely months old in terms of use – is already seeing some awful results. Take, for example, this one.

When Arena Group, the publisher of Sports Illustrated and multiple other magazines, announced—less than a week ago—that it would lean into artificial intelligence to help spawn articles and story ideas, its chief executive promised that it planned to use generative power only for good.

Then, in a wild twist, an AI-generated article it published less than 24 hours later turned out to be riddled with errors.

The article in question, published in Arena Group’s Men’s Journal under the dubious byline of “Men’s Fitness Editors,” purported to tell readers “What All Men Should Know About Low Testosterone.” Its opening paragraph breathlessly noted that the article had been “reviewed and fact-checked” by a presumably flesh-and-blood editorial team. But on Thursday, a real fact-check on the piece came courtesy of Futurism, the science and tech outlet known for catching CNET with its AI-generated pants down just a few weeks ago.

The outlet unleashed Bradley Anawalt, the University of Washington Medical Center’s chief of medicine, on the 700-word article, with the good doctor digging up at least 18 “inaccuracies and falsehoods.” The story contained “just enough proximity to the scientific evidence and literature to have the ring of truth,” Anawalt added, “but there are many false and misleading notes.”

And there we have it. That’s our future.

The thing with a lot of this ‘artificial intelligence’ stuff is that it’s not actually intelligent. These are predictive language models, trained to construct sophisticated sentences based on massive data sets. Those data sets can be manipulated, changing the predictive outcome of the model. Many of these early releases have made modest ethical efforts to ensure the A.I. data set does not include hate speech, racism, and other such things. That probably won’t last long; A.I. will be weaponized before we know it. The result? A.I. language tools will soon be able to generate a ridiculous amount of harmful content that will flood our world.
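To make that point concrete, here is a deliberately tiny sketch of my own – a toy word-count “language model,” nothing like how any production system is actually built – showing that the output is simply a statistical echo of the training text, so whoever shapes the training data shapes the prediction. The corpora and names below are purely hypothetical.

    # Toy illustration only: a bigram "language model" that predicts the next
    # word purely from counts in its training text. Real systems are vastly
    # larger, but the principle holds: the output echoes the training data.
    from collections import Counter, defaultdict

    def train(corpus: str):
        """Count which word follows which in the training text."""
        words = corpus.lower().split()
        model = defaultdict(Counter)
        for current, nxt in zip(words, words[1:]):
            model[current][nxt] += 1
        return model

    def predict_next(model, word: str) -> str:
        """Return the continuation seen most often in training."""
        followers = model.get(word.lower())
        return followers.most_common(1)[0][0] if followers else "<unknown>"

    # Two hypothetical training sets: one accurate, one deliberately skewed.
    clean_corpus = "the telescope captured stunning images of distant galaxies"
    skewed_corpus = "the telescope captured the very first exoplanet image " * 5

    print(predict_next(train(clean_corpus), "captured"))   # -> stunning
    print(predict_next(train(skewed_corpus), "captured"))  # -> the

Run it and the “clean” model completes the sentence one way, while the “skewed” model – trained on a repeated falsehood – confidently completes it another. Scale that up by a few billion parameters and you have the problem.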

But it’s not just that – it’s the fact that the old truism GIGO applies – garbage in, garbage out. We are already seeing the lack of guardrails as companies rush to cash in. The Men’s Journal situation is but one small example. There are dozens, hundreds, soon to be thousands, millions. A.I. is soon to become an engine of misinformation, a factory of falsehoods, and a deployer of dishonesty.

Gosh, I am depressing myself as I write this.

And here’s the problem – there is so much money rushing into A.I., and so fast, that mere moments after the hype of crypto imploded, the hype of AI began. Months in, and we’re already in a bubble! 2023? All A.I., all the time!

There are too many potential problems to mention, but the fundamental one is this – the tech industry has shown that it cannot be trusted. The ‘tech bros’ (as people refer to Zuckerberg, Musk, and Thiel) have proven themselves to be far more interested in the generation of cash than in the betterment of society. Now expand that into this new world.

Case in point. Just this week, Google rushed its A.I. into our world, and it was critically wrong right off the bat.

The type of factual error that blighted the launch of Google’s artificial intelligence-powered chatbot will carry on troubling companies using the technology, experts say, as the market value of its parent company continues to plunge.

Investors in Alphabet marked down its shares by a further 4.4% to $95 on Thursday, representing a loss of market value of about $163bn (£140bn) since Wednesday when shareholders wiped around $106bn off the stock.

Shareholders were rattled after it emerged that a video demo of Google’s rival to the Microsoft-backed ChatGPT chatbot contained a flawed response to a question about Nasa’s James Webb space telescope. The animation showed a response from the program, called Bard, stating that the JWST “took the very first pictures of a planet outside of our own solar system”, prompting astronomers to point out this was untrue.

Google said the error underlined the need for the “rigorous testing” that Bard is undergoing before a wider release to the public, which had been scheduled for the coming weeks. A presentation of Google’s AI-backed search plans on Wednesday also failed to reassure shareholders.

Trust me – the rigorous testing that Google promises won’t be happening, because it’s a race – for profit, market dominance, and supremacy.

I’m sorry to be so thoroughly depressing with this post, but though I am a big proponent of technology, I’m not excited about what is unfolding here at all. And so I simply want to go on the record so that when, months and years from now, we realize that the destructive potential of AI has been fully weaponized, I can sit back and say, ‘I told you so.’ Our future will be defined by the production of misinformation and false realities at scale, and society is ill-prepared to deal with it; the technology and venture capital industries are all too eager to ignore the problems while chasing the potential for profit; and politicians are more eager to exploit it for their own selfish interests than to put in place any sort of protective legislation and regulation.

And it is happening so fast. Just under 4 months ago, I spoke in Switzerland at a global risk summit on the acceleration of risk, and my Daily Inspiration at the time said this: “The biggest risks aren’t just those we don’t yet know about – it’s the speed at which they are coming at us!”

Combine that with the other one I wrote at that time, based on my response to someone asking about the future and risk. My response? “Every new technology is ultimately used for a nefarious purpose, accelerating societal risk.”

Little did I know it would happen so fast.

“AI. Now I am become destruction. A destroyer of reality at scale.”

Sorry.
