
Twilio introduces AI nutrition labels to increase transparency and trust

24 August 2023, Hi-network.com

Twilio, a software company that helps businesses automate communications with their customers, announced that it would place 'nutrition labels' on its AI services.
These labels are modelled on the familiar nutrition facts labels found on food products. They will provide information about an AI service's performance, accuracy, and bias, along with other key factors such as data usage, risk assessment, and whether there is a 'human in the loop'.
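To make the idea concrete, here is a minimal sketch of what such a label might look like as structured data. The article does not describe Twilio's actual label format, so the class name, fields, and values below are purely illustrative assumptions based on the factors listed above (performance, data usage, risk assessment, human in the loop).

```python
# Hypothetical sketch of an "AI nutrition label" as structured data.
# Field names and values are illustrative assumptions, not Twilio's actual schema.
from dataclasses import dataclass, asdict
import json


@dataclass
class AINutritionLabel:
    service_name: str             # the AI feature being described
    model_type: str               # e.g. "generative" or "predictive"
    data_usage: str               # how customer data is used (assumed wording)
    human_in_the_loop: bool       # whether a human reviews the AI's output
    risk_level: str               # outcome of a risk assessment (assumed scale)
    known_limitations: list[str]  # caveats disclosed to the customer


label = AINutritionLabel(
    service_name="Example customer-service chatbot",
    model_type="generative",
    data_usage="Customer conversations are not used to train shared models",
    human_in_the_loop=True,
    risk_level="low",
    known_limitations=["May produce inaccurate answers", "English only"],
)

# A label like this could be rendered on a product page or returned by an API.
print(json.dumps(asdict(label), indent=2))
```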


The initiative's goal is to promote trust in AI by increasing transparency. The idea is that if consumers have more information about how AI works, they will be more likely to trust it. Twilio also offers other companies an online tool to help them design their own AI nutrition labels.


Why does it matter?

As generative AI technology becomes more prevalent, the need for transparency becomes paramount to establish trust. Various stakeholders, including regulators, customers, and the public, must be able to evaluate the reliability and potential consequences of AI systems across different sectors like customer service, finance, and healthcare.

Regulators have taken a keen interest in AI content labelling; so far, however, such labelling has mostly been adopted on a voluntary basis.

In the United States, the government has secured voluntary commitments from leading AI companies which involve the development of robust technical tools like watermarking systems to distinguish AI-generated content. This not only fosters creativity but also mitigates the risk of fraud and deception.

In the European Union (EU), tech companies are being encouraged to label AI-generated content as part of their efforts to combat misinformation. The EU is urging platforms to implement technology capable of identifying AI-generated content and clearly marking it for users. This focus extends to services that incorporate generative AI, such as Microsoft's new Bing and Google's Bard AI-augmented search services, with the aim of preventing malicious actors from using AI to disseminate false information.

Meanwhile, the Canadian government is working on a voluntary code of conduct for AI developers to prevent the creation of harmful or malicious content. The code seeks to ensure a clear distinction between AI-generated and human-made content while including provisions for user safety and the avoidance of biases.

China has released interim measures to regulate generative AI services. These measures provide guidelines for the development and use of AI technology, including content labelling and verification. China has also introduced new regulations that prohibit the creation of AI-generated media without clear labels, such as watermarks.

Hot tags: Artificial Intelligence, Content Policy, Consumer Protection
