
8 ways to reduce AI burnout

November 7, 2022 | Hi-network.com

Responsible and ethical artificial intelligence has become the hot-button issue of our times, especially as AI seeps into every aspect of decision-making and automation. According to a recent survey, 35% of companies now report using AI in their businesses and 42% are exploring the technology. 


The same survey by IBM finds that trust is extremely important -- four in five respondents cite being able to explain how their AI arrived at a decision as important to their business.

However, AI is still code -- ones and zeros. It doesn't carry empathy, and it often misses context, as my co-author Andy Thurai, a strategist with Constellation Research, and I explained in a recent Harvard Business Review article.

It has the potential to deliver biased and harmful results. As AI moves up the decision chain -- from simple chatbots or predictive maintenance to assisting executive or medical decisions -- there needs to be a reckoning. 

Also: The people building artificial intelligence are the ones who need AI the most

That is, AI's developers, implementers, users, and proponents need to be able to show their work, explain how decisions are made, and be able to continually adapt to new scenarios.

Responsible AI, however, is not easy. It means pressure -- especially on AI teams. As Melissa Heikkilä points out in MIT Technology Review, "Burnout is becoming increasingly common in responsible AI teams." The largest organizations have "invested in teams that evaluate how our lives, societies, and political systems are affected by the way these systems are designed, developed, and deployed." For small-to-medium companies and startups, it means these responsibilities fall to developers, data engineers, and data scientists.

The result -- even at the largest companies -- is that "teams who work on responsible AI are often left to fend for themselves," Heikkilä finds. "The work can be just as psychologically draining as content moderation. Ultimately, this can leave people in these teams feeling undervalued, which can affect their mental health and lead to burnout."

Also: AI's true goal may no longer be intelligence 

The speed of AI adoption in recent years has ratcheted up the pressure to intense levels. AI has moved from the lab to the production level "faster than expected in the last few years," says Thurai, who has been a vocal advocate for responsible AI. Managing responsible AI "could be particularly draining if [practitioners] are forced to moderate content, decisions, and data that are biased against their beliefs, viewpoint, opinions, and culture, while trying to maintain a fine line between neutrality and their beliefs. Given the fact AI works 24x7x365 and the decisions made by AI sometimes are life-changing events, the humans in the loop in those areas are expected to keep up with that, which can lead to burnout and exhaustion, which can lead to error-prone judgments and decisions."

Laws and governance "haven't caught up with AI," he adds, and many enterprises "don't have proper procedures and guidelines for ethical AI and AI governance," making this process even more complicated.

Add to this the potential challenges to AI outputs from courts and legal systems, "which start to impose hefty penalties and force corporations to reverse their decisions," he says. "This is particularly stressful for the employees who are trying to enforce the rules on AI systems."

Also: Artificial intelligence: 5 innovative applications that could change everything 

Support from the top is also lacking, piling onto the stress. A study of 1,000 executives published by MIT Sloan Management Review and Boston Consulting Group confirms this. The study finds that while most executives agree that responsible AI is "instrumental to mitigating technology's risks -- including issues of safety, bias, fairness, and privacy," they acknowledged a failure to prioritize it.

So how do AI proponents, developers, and analysts address potential burnout -- that feeling of fighting the ocean's tides? Here are eight ways to mitigate AI-induced stress and burnout:

  • Keep business leaders aware of the consequences of irresponsible AI. Unfiltered AI decisions and outputs run the risk of lawsuits, regulations, and damaging decisions. "Executives need to see ethical and responsible AI spending as a means to improve their company's liability and risk posture, rather than a cost center," says Thurai. "While spending less money now can improve their bottom line, even one liability or court judgment would dwarf the savings that would come from these investments."
  • Push for appropriate resources. The stress induced by responsible AI reviews is a new phenomenon that requires a rethinking of support. "A lot of mental-health resources at tech companies center on time management and work-life balance, but more support is needed for people who work on emotionally and psychologically jarring topics," Heikkilä writes.
  • Work closely with the business to ensure that responsible AI is a business priority. "For every company that implements AI, there must be responsible AI as well," Thurai says. He cites the MIT-BCG study (mentioned above), which finds only 19% of the companies that have AI as their top strategic priority work on responsible AI programs. "It should be close to 100%," he says. Managers and employees need to be encouraged to employ holistic decision-making that incorporates ethics, morality, and fairness.
  • Proactively ask for help with responsible AI decisions. "Have experts make those ethical AI decisions instead of AI engineers or other technologists who are not educated enough to make such decisions," says Thurai.
  • Keep humans in the loop. Always provide off-ramps within the AI decision-making process (a minimal off-ramp sketch follows this list). Be flexible and open to systems redesigns. A survey conducted by SAS, Accenture Applied Intelligence, Intel, and Forbes finds one in four respondents admit they have had to rethink, redesign, or override an AI-based system due to questionable or unsatisfactory results.
  • Automate as much as possible. "AI is about very high-scale processing," says Thurai. "The manual process of validating the input bias, data quality, and validating results will not work. Companies should implement AI or other high-scale solutions to automate the process. Any exceptions or auditing can be done manually, but having humans perform the high-scale AI work will lead to disastrous results."
  • Keep the bias out of data up front. The data employed in training AI models may contain implicit bias, due to the limitations of datasets. Data flowing into AI systems should be well vetted.
  • Validate AI scenarios before they go into production. Data going into AI algorithms can change from day to day, and these algorithms need to be constantly tested (a minimal drift-check sketch follows below).
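To make the off-ramp idea concrete, here is a minimal sketch in Python of one common pattern: decisions the model is confident about are applied automatically, while low-confidence ones are diverted to a human review queue. The Decision fields, the 0.9 threshold, and the review-queue semantics are illustrative assumptions, not anything prescribed by Thurai or the surveys cited above.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Decision:
        input_id: str
        label: str
        confidence: float  # model's probability for its predicted label

    @dataclass
    class HumanInTheLoopRouter:
        # Routes low-confidence AI decisions to a human review queue.
        # Illustrative sketch: the threshold and queue are assumptions,
        # not a pattern mandated by this article.
        threshold: float = 0.90
        review_queue: List[Decision] = field(default_factory=list)

        def route(self, decision: Decision, apply: Callable[[Decision], None]) -> None:
            if decision.confidence >= self.threshold:
                apply(decision)                     # confident enough: automate
            else:
                self.review_queue.append(decision)  # off-ramp: a person decides

    # Usage: auto-apply confident decisions, queue the rest for review.
    router = HumanInTheLoopRouter(threshold=0.9)
    router.route(Decision("loan-123", "approve", 0.97), apply=print)
    router.route(Decision("loan-124", "deny", 0.62), apply=print)
    print("Needs human review:", [d.input_id for d in router.review_queue])

In production, the review queue would feed a real case-management tool, but the design point stands: the system defaults to human judgment whenever the model is unsure.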
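In the same spirit, up-front vetting and ongoing validation (the last two items above) can start with something as simple as comparing group representation between training data and each day's production inputs. The sketch below flags groups whose share has drifted beyond a tolerance; the "region" attribute, the 5% tolerance, and the data are hypothetical.

    from collections import Counter
    from typing import Dict, Iterable

    def group_shares(values: Iterable[str]) -> Dict[str, float]:
        # Fraction of rows belonging to each group value.
        counts = Counter(values)
        total = sum(counts.values())
        return {group: n / total for group, n in counts.items()}

    def representation_drift(reference: Iterable[str], current: Iterable[str],
                             tolerance: float = 0.05) -> Dict[str, float]:
        # Groups whose share moved more than `tolerance` between a
        # reference set (e.g., training data) and current inputs.
        ref, cur = group_shares(reference), group_shares(current)
        return {g: round(cur.get(g, 0.0) - ref.get(g, 0.0), 3)
                for g in set(ref) | set(cur)
                if abs(cur.get(g, 0.0) - ref.get(g, 0.0)) > tolerance}

    # Usage: a hypothetical 'region' attribute, vetted before training
    # and re-checked against each day's production inputs.
    training = ["north"] * 500 + ["south"] * 500
    today = ["north"] * 800 + ["south"] * 200
    flagged = representation_drift(training, today)
    if flagged:
        print("Skew detected -- re-test before trusting the model:", flagged)

Real vetting would also cover label quality and outcome disparities across groups, but even a check this small catches the day-to-day data changes the last bullet warns about.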

"It is easy in this bipolar-biased world to call AI-made ethical decisions to be fake by people who disagree with it," says Thurai. "Corporations should take extra care with both transparency in AI decisions and the ethics and governance that are applied. Explainable AI from top to bottom and transparency are two important elements. Combine with regular auditing to evaluate and course-correct actions and processes."  


