“To survive the future you have to trust the algorithm but so far, the algorithm hasn’t done a lot to earn our trust!” – Futurist Jim Carroll
Can you trust the algorithm?
Particularly when you know that Rosie the Robot cheated?
She did! Programmed to clean the floors, she swept the dirt under the rug when the Jetsons weren’t looking!
If you can’t trust Rosie, can you really trust anything AI related?
Over the many years that I’ve been talking about A.I., I’ve focused on both the potential upside and the downside. Here’s a clip from a recent post-keynote Q&A:
I thought it might be helpful to share the whole transcript.
I’d like to know: what do you feel about AI? And what’s beyond AI, in your opinion?
A bloody mess.
I’ve been talking about AI for a long time, obviously, as a futurist.
A couple of observations.
Number one, it is going to be the speaking topic of 2023. I’ve got about 40 bureaus and agents around the world who book me and represent me, and the number of inquiries coming in on that topic is staggering, and we’re all seeing that because of what has happened with ChatGPT.
ChatGPT is this thing where you type in something and it comes back with some information, and I think the first time we saw that, it was like the first time we saw the World Wide Web and we went, ‘whoah, look at that, there’s something magical happening here.’
I had a situation with ChatGPT. I’m a geek, I run my own servers, and I needed some server code written. After asking a whole bunch of people online, “how do I write this code?” – and nobody could give me a response – I went into ChatGPT and asked, “how do I write this code?” It gave me a perfect answer.
But the problem that is unfolding here – and I put a post out about this on my website three days ago – is that, as a society, it’s become painfully obvious we don’t know how to deal with misinformation…
Look at what has happened in the US with the election, you look at what has happened with MAGA supporters and all of that.
And here’s the thing about AI: number one, it does generate errors – it’s not perfect information, yet people are going to trust it – and number two, it’s going to allow us to generate misinformation at scale in ways that are simply profound.
If you’re on Instagram, there’s somebody who’s generating… I can’t remember her name, but she’s an AI-generated image of a young girl named Alice or something like that, and every day her creator asks his social networks, “which picture should I use today?” and people vote on which one goes on Instagram. And if you look at her, you would never know she was generated by an AI.
We are mere months away from the ability to generate full-on video content that you wouldn’t know is not real.
There’s a guy who, just two days ago, put up a video of how he cloned himself using technology. He made an almost perfect image of himself saying something, using an image generator, an AI to generate the text… an AI voice generator to do his voice.
I’m an optimist about technology, I’m an optimist about the future, I can’t come out here on stage and say, “Guess what? Your future sucks!” I wouldn’t get a lot of repeat business.
And I’m terrified by what’s going on with AI. It’s the first technology that’s got me absolutely freaked out, and I think we’re gonna do a lot of really stupid things with it as well.
There’s a lot going on right now to take our support desks and call centers and put an AI engine on them, and that’s going to come at us with staggering speed. If we thought we were dealing with customer support hell right now, it’s gonna get unbelievably bad, because the implementations will go wrong and the technology will go bad.
So I’m actually quite a pessimist on it. I think there are a lot of opportunities – if we look at the world of medicine, the ability for an AI to go through 100 x-rays and interpret them far more intelligently than a human can to identify conditions, it’s magical.
But on the other side, it’s scary.
I think it’s inevitable that your customer workgroups and other groups will get involved with AI and start looking at it. I mean, this is the topic of 2023, and I think you have to go at it with caution.
The other side of it is, last year the whole focus was on Meta and artificial reality and cryptocurrency and blockchain, and that all sort of collapsed and went nowhere, and now all of Silicon Valley is going, “whoah, a shiny new toy, AI!” The amount of venture capital, of money going in, is simply unreal, so we’re in for an interesting ride.
AI. The future. Algorithms. Should you trust them? What’s your plan?
That’s a question you’ll have to ask yourself on an increasing basis as you go about living your daily life, working in your profession, and leading day by day.
I’ll admit that I’m asking myself that question on a regular basis as I let the algorithm take over some of the driving in my Tesla – when you are barrelling down a highway at 100 km/h and a computer is doing the driving, it certainly makes you think! (I’ve also, as you might have seen, become quite skeptical that Tesla or any other car company will be able to deliver anything resembling fully reliable, wide-scale self-driving technology anytime soon – which tells me that algorithms hold a lot of promise but have a lot yet to deliver.)
Here’s the thing – we already know we have significant problems. Long before ChatGPT arrived on the scene and accelerated the risks of AI, we knew there were issues involving discrimination, errors, and more. Watch this video: it’s chilling.
Or this one:
And yet, we seem to be willing to trust the algorithm more than we trust our fellow humans:
Our daily lives are run by algorithms. Whether we’re shopping online, deciding what to watch, booking a flight, or just trying to get across town, artificial intelligence is involved. It’s safe to say we rely on algorithms, but do we actually trust them?
Up front: Yes. We do. A trio of researchers from the University of Georgia recently conducted a study to determine whether humans are more likely to trust an answer they believe was generated by an algorithm or crowd-sourced from humans.
The results indicated that humans were more likely to trust algorithms when problems become too complex for them to trust their own answers.
Background: We all know that, to some degree or another, we’re beholden to the algorithm. We tend to trust that Spotify and Netflix know how to entertain us. So it’s not surprising that humans would choose answers based on the sole distinction that they’ve been labeled as being computer-generated.
In three preregistered online experiments, we found that people rely more on algorithmic advice relative to social influence as tasks become more difficult. All three experiments focused on an intellective task with a correct answer and found that subjects relied more on algorithmic advice as difficulty increased. This effect persisted even after controlling for the quality of the advice, the numeracy and accuracy of the subjects, and whether subjects were exposed to only one source of advice, or both sources.
The problem here is that AI isn’t very well suited for a task such as counting the number of humans in an image. It may sound like a problem built for a computer – it’s math-based, after all – but the fact of the matter is that AI often struggles to identify objects in images, especially when there aren’t clear lines of separation between objects of the same type.
Quick take: The research indicates the general public is probably a little confused about what AI can do. Algorithms are getting stronger and AI has become an important facet of our everyday lives, but it’s never a good sign when the average person seems to believe a given answer is better just because they think it was generated by an algorithm.
Today, the biggest barrier to widespread AI is not the technology, Kande said – that part is available and ready to implement. It’s the psychological baggage that’s holding it back.
“Trust in AI is a big thing, the same way you trust the manufacturing process that delivers a high-quality car,” he said. “So if it’s your favorite car, manufactured by that car company, we trust their manufacturing process and we trust the car that comes out of it.”
But a car is something we can see, touch, and feel. Even if we don’t know exactly how the engine works or how its parts are put together, we generally have a good sense of what it’s supposed to do: take us, safely, from point A to B. What Kande soon realized is that technologies like AI, despite their business-transforming potential, are a different story altogether. For one, they are invisible. This leads to many justifiable questions, even for folks who are generally well versed in the technology.
“Can you trust somebody’s AI?” asked Kande. “Can you trust the algorithm that they have? Can you trust that it won’t be biased? Can you trust that it won’t be creating collateral damage?”
Study: People trust the algorithm more than each other; People are more likely to pick an answer when they believe it was generated by an algorithm
13 April 2021, The Next Web
AI is all so seductive right now – it’s the new shiny toy! And yet, it takes us into an unknown and complex world in which we have to trust the algorithm.
And against this backdrop, most of the major tech companies have fired most of their A.I. ethics professionals.
Last year, Amazon-owned streaming platform Twitch acknowledged it had a problem.
For much of the company’s 12-year history, women and people of color had argued the platform was biased. Sexist and racist harassment were endemic, and critics said the company’s all-important recommendation algorithms, which use artificial intelligence to decide which streamers to promote to viewers, were amplifying the problem.
As part of its response, the company set up a responsible AI team to look specifically at the algorithms. At its semiannual conference, TwitchCon, the team’s principal product manager told Twitch streamers, “We are committed to being a leader in this area of responsible and fair recommendations.” He urged them to fill out demographic surveys to track potential discrimination.
But last week, the handful of people who made up the responsible AI team were laid off, part of a broader round of cuts that hit about 400 of the company’s 2,500 employees. Others who worked on the issue as part of their current jobs have been moved to other topics, according to a former member of the responsible AI team, who spoke on the condition of anonymity to discuss internal company matters.
As AI booms, tech firms are laying off their ethicists
Twitch, Microsoft and Twitter are among firms that have laid off workers who studied the negative sides of AI
March 30, 2023, Washington Post
Crazy, isn’t it?