
Just because AI recommends a cliff doesn't mean you have to jump

Mar 30, 2024 | Hi-network.com
Crispin la valiente/Getty Images

When an exciting technology hurtles toward us, we tend to get wrapped up in the excitement.

Especially when it comes to something as dramatic as artificial intelligence. 

Also: The White House plans to regulate the government's use of AI

AI can write our exam papers. AI can write ads. AI can even make movies. Yet there's still the nagging thought that AI isn't exactly perfect, especially when it comes to hallucinations -- those pesky moments when the AI simply makes things up.

Yet the impression is that companies like Google and Microsoft are boldly intent on injecting AI into every aspect of society.

Where, then, can we find a sense -- a true sense -- of what still needs to be done to AI in order to make it trustworthy?

I confess I've been on that search for some time, so I was moved to repeated readings of a soul-baring, life-affirming expression of honesty from Ayanna Howard, AI researcher and dean of the College of Engineering at The Ohio State University.

Writing in the MIT Sloan Management Review, Howard offered the most succinct summation of the gap between technologists and, well, everyone else.

She offered this simple thought: "Technologists aren't trained to be social scientists or historians. We're in this field because we love it, and we're typically positive about technology because it's our field."

Also: The best AI chatbots: ChatGPT isn't the only one worth trying

But this, said Howard presciently, is precisely the problem: "We're not good at building bridges with others who can translate what we see as positives and what we know are some of the negatives as well."

There is, indeed, a desperate need for translation, a desperate need for technologists to have a little more emotional intelligence as they create the tech of the future.

"The first [need] -- and this probably requires regulation -- is that technology companies, particularly those in artificial intelligence and generative AI, need to figure out ways to blend human emotional quotient (EQ) with technology to give people cues on when to second-guess such tools," said Howard.

Think back to the early days of the internet. We were left to our own devices to work out what was true, what was exaggerated and what was total bunkum.

Also: Microsoft wants to stop you from using AI chatbots for evil

It's much the same with AI: we're extremely excited, but still treading our way toward some level of certainty.

Howard explained that as long as a piece of technology seems to work, humans will generally trust it. Even to the point where, in one experiment she was part of, people blindly followed a robot away from a fire escape -- yes, during a fire.

With AI, Howard suggests, the likes of ChatGPT should admit when they're uncertain.
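To make the idea concrete, here's a minimal sketch of what such a cue could look like, assuming a hypothetical generate_with_confidence call that returns both an answer and a confidence score -- no real chatbot API works exactly this way, and how such a score would actually be produced is an open problem.

```python
# A minimal sketch of Howard's suggestion: surface the model's uncertainty
# so users know when to second-guess it. `generate_with_confidence` is a
# hypothetical stand-in, not any real chatbot API.

def generate_with_confidence(prompt: str) -> tuple[str, float]:
    """Stand-in for a model call that also returns a confidence in [0, 1].

    In practice such a score might come from token log-probabilities or a
    separately trained calibration model; here it's just a placeholder.
    """
    return "Paris is the capital of France.", 0.97

def answer(prompt: str, threshold: float = 0.7) -> str:
    text, confidence = generate_with_confidence(prompt)
    if confidence < threshold:
        # The cue Howard is asking for: an explicit admission of uncertainty.
        return f"I'm not confident about this (confidence {confidence:.0%}): {text}"
    return text

print(answer("What is the capital of France?"))
```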

This doesn't absolve us of the need to be vigilant, but it would surely create the greater level of trust that's vital if AI is to be accepted, rather than feared or imposed.

Howard worries that, currently, anyone can create an AI product. "We have inventors who don't know what they're doing who are selling to companies and consumers who are too trusting," she said.

Also: Generative AI will change customer service forever. Here's how we get there

If her words seem cautionary, they're still an exceptionally positive unveiling of sheer truths, of the challenges involved in bringing a potentially revolutionary technology to the world and making it trustworthy.

Ultimately, if AI isn't trustworthy it can't be the technology it's hyped up to be.

