
Do companies have ethical guidelines for AI use? 56% of professionals are unsure, survey says

Dec. 20, 2023 Hi-network.com
Scale on a block (Parradee Kietsirikul/Getty Images)

Although AI has been around since the 1950s, it has seen tremendous growth within the past year. Tech giants have been implementing AI into their products and services, while individuals are using it to make their lives a little easier. 

Deloitte surveyed companies and professionals in its second edition of the "State of Ethics and Trust in Technology" report, led by its Technology Trust Ethics practice. According to the report, 74% of companies have already begun testing generative AI, while 65% have begun to use it internally. The increasing awareness of AI's new capabilities has led to the pressing question of how organizations can use this technology ethically. 

Also: The ethics of generative AI: How we can harness this powerful technology

Deloitte interviewed 26 specialists in various industries to gather information about how industry leaders are considering concerns about the ethical use of emerging technologies, including generative AI. 

The company then tested hypotheses and delivered a 64-question survey to more than 1,700 business and technical professionals to gain further insights. 

The report, authored by Beena Ammanath, managing director of Deloitte Consulting LLP and leader of Deloitte's Technology Trust Ethics practice, defines emerging technologies as the following: cognitive technologies (including general and generative AI and chatbots), digital reality, ambient experiences, autonomous vehicles, quantum computing, distributed ledger technology, and robotics. 

According to the survey, 39% of respondents, a group made up of business leaders and developers of emerging technologies, thought cognitive technologies had the most potential for social good, compared to 12% for digital reality and 12% for ambient experiences. 

Also: 5 essential traits that tomorrow's AI leader must have

However, 57% of survey respondents also thought that cognitive technologies had the greatest potential for serious ethical risk. 

The most concerning statistic is that over half of the respondents (56%) said their "company does not have or are unsure if they have ethical principles guiding the use of generative AI."

Chart about emerging tech (Deloitte Technology Trust Ethics Survey)

Compared to Deloitte's report in 2022 about ethics and trust in emerging technologies, this year's report reveals that "organizations find themselves wrestling with new ethical issues posed by wide-scale adoption of this once-again new technology." 

These issues are tied to concerns about how businesses and organizations are using these technologies. 

Despite the many benefits of AI, 22% of respondents were concerned about data privacy, while 14% cited transparency around how AI is trained on data to produce its outputs. 

Also: Does your business need a chief AI officer?

Data poisoning, as well as intellectual property and copyright, were concerns each cited by 12% of survey respondents. Data poisoning is the "pollution" of training data sets by bad actors, which can lead an AI model to produce inaccurate results, as the sketch below illustrates. 
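
To make the idea concrete, here is a minimal sketch of label-flipping poisoning on a hypothetical toy dataset, using scikit-learn. The dataset, model, and random flips are illustrative assumptions; real attacks are subtler and typically target specific model behaviors rather than overall accuracy:

```python
# Minimal data-poisoning sketch: flipping a fraction of training labels
# ("polluting" the training set) degrades a simple classifier's accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy binary-classification data standing in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def accuracy_after_poisoning(poison_fraction: float) -> float:
    """Flip labels on a random fraction of training rows, then evaluate."""
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # the "pollution" step
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)  # accuracy on clean test data

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} poisoned -> test accuracy {accuracy_after_poisoning(frac):.3f}")
```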

Graph of AI concerns (Deloitte Technology Trust Ethics Survey)

Deloitte's report also detailed the types of damage that survey respondents believe could arise when ethical violations are not taken seriously. 

Reputational damage was the greatest source of concern, cited by 38% of respondents, followed by human damage such as misdiagnoses or data privacy violations (27%), regulatory penalties such as those for copyright infringement (17%), financial damage (9%), and employee dissatisfaction (9%). 

These damages are already evident in the several lawsuits filed over privacy violations, copyright infringement, and other issues related to the unethical use of AI. 

Also: AI and automation: Business leaders adopt small-scale solutions for greater impact

So how can companies ensure they are using AI safely? Deloitte lists a multi-step approach: 

  • Exploration: Companies can begin by letting product owners, business leaders, and AI/ML practitioners explore generative AI through workshops to see how it could create value for their businesses. This way, companies can weigh the costs and benefits of incorporating AI into their businesses. 
  • Foundational: Companies could buy or build AI platforms to bring generative AI into their businesses. Among survey respondents, 30% said their companies chose to use existing capabilities from major AI platforms, 8% created their own in-house AI platforms, and 5% decided not to use generative AI. 
  • Governance: Creating standards and protocols for AI use could minimize the potentially harmful impacts of AI, so companies should determine what types of ethical principles they plan to uphold. 
  • Training and education: Companies could mandate training that outlines the ethical principles of using AI. In addition, technical training that educates employees about using a variety of LLMs could give companies more guidance on the ethical use of AI. 
  • Pilots: Engineers and product leaders could run experiments on a variety of use cases to test proofs of concept and pilot programs, and then eliminate aspects that are too risky. 
  • Implementation: Companies should draft a plan for introducing a newly enhanced product into the market and assign accountability for product implementation and ownership. The company should also have a team of experts prepared to address any issues that may arise. Transparency is also crucial at this step: companies should explain how user data is fed into the model, how the model reaches its output, and how likely the model is to hallucinate. 
  • Audit: According to one interviewee, companies will need to modify their policies depending on the risks of AI use. This could vary company by company, as not all organizations will incorporate AI for the same use case. (A minimal sketch of the kind of audit record this step implies follows this list.) 
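
Several of these steps, particularly governance, implementation transparency, and audit, presuppose a traceable record of what goes into and comes out of a model. As a minimal sketch of what such an audit record could look like, the snippet below wraps a generic model call and appends each request and response to a JSONL log; `call_model`, the log fields, and the file name are hypothetical stand-ins, not any vendor's actual API:

```python
# Minimal audit-trail sketch: wrap a generative-AI call so that every
# prompt and output is recorded with a traceable ID for later review.
import json
import time
import uuid
from typing import Callable

def audited_completion(
    call_model: Callable[[str], str],  # hypothetical stand-in for a real API
    prompt: str,
    model_name: str,
    log_path: str = "ai_audit_log.jsonl",
) -> str:
    """Run a model call and append a reviewable record of it."""
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model": model_name,
        "prompt": prompt,  # what user data went into the model
    }
    record["output"] = call_model(prompt)  # how the model responded
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["output"]

# Usage with a stub model, just to show the audit trail being written:
if __name__ == "__main__":
    stub_model = lambda p: f"(stub reply to: {p})"
    print(audited_completion(stub_model, "Summarize our leave policy.", "stub-llm"))
```

An append-only log like this is the raw material an audit step can act on; in practice, a company would also need retention policies and access controls for the log itself, since it contains user data.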

When respondents considered the impact of generative AI on human workers, issues such as transparency and data privacy ranked above job displacement. Nevertheless, the report also mentioned that "49% said workers at their organization displaced by AI moved to different roles and retrained and upskilled." Furthermore, 11% were terminated, 13% were put in different roles without being retrained or upskilled, and 27% did not experience any job displacement from AI at their organization, according to Deloitte. 

Also: Will AI hurt or help workers? It's complicated

"The sooner companies work together to identify the risks and establish governance up front, the better their ability may be to help generate stakeholder value, elevate their brands, create new markets, and contribute to building a more equitable world," said Ammanath. 
