
Can governments turn AI safety talk into action?

Jun 16, 2024 | Hi-network.com
AI depicted on screens (Image: Andriy Onufriyenko/Getty Images)

At the Asia Tech x Singapore 2024 summit, several speakers were keen for high-level discussions and heightened awareness about the importance of artificial intelligence (AI) safety to turn into action. Many are looking to equip everyone from organizations to individuals with the tools to deploy the technology properly. 

Also: How to use ChatGPT to analyze PDFs for free

"Pragmatic and practical move to action. That's what is missing," said Ieva Martinekaite, head of research and innovation at Telenor Group, who spoke to on the sidelines of the summit. Martinekaite is a board member of Norwegian Open AI Lab and a member of Singapore's Advisory Council on the Ethical Use of AI and Data. She also served as an Expert Member in the European Commission's High-Level Expert Group on AI from 2018 to 2020. 

Martinekaite noted that top officials are also starting to recognize this issue. 

Delegates at the conference, which included top government ministers from various nations, quipped that they were simply burning jet fuel by attending the string of high-level AI safety summits, most recently in South Korea and the UK, given that they have little to show yet in terms of concrete steps. 

Martinekaite said it is time for governments and international bodies to start rolling out playbooks, frameworks, and benchmarking tools to help businesses and users ensure they are deploying and consuming AI safely. She added that continued investments are also needed to facilitate such efforts.

AI-generated deepfakes, in particular, carry significant risks and can impact critical infrastructures, she cautioned. They are already a reality: deepfaked images and videos of politicians, public figures, and even Taylor Swift have surfaced.

Also: More political deepfakes exist than you think

Martinekaite added that the technology is now more sophisticated than it was a year ago, making it increasingly difficult to identify deepfakes. Cybercriminals can exploit this technology to help them steal credentials and illegally gain access to systems and data. 

"Hackers aren't hacking, they're logging in," she said. This is a critical issue in some sectors, such as telecommunications, where deepfakes can be used to penetrate critical infrastructures and amplify cyber attacks. Martinekaite noted that employee IDs can be faked and used to access data centers and IT systems, adding that if this inertia remains unaddressed, the world risks experiencing a potentially devastating attack. 

Users need to be equipped with the necessary training and tools to identify and combat such risks, she said. Technologies to detect and prevent such AI-generated content, including text and images, such as digital watermarking and media forensics, also need to be developed. Martinekaite believes these should be implemented alongside legislation and international collaboration.

However, she noted that legislative frameworks should not regulate the technology itself, or AI innovation could be stifled, holding back potential advancements in healthcare, for example. 

Instead, regulations should address where deepfake technology has the greatest impact, such as critical infrastructures and government services. Requirements such as watermarking, authenticating sources, and putting guardrails around data access and tracing can then be implemented for high-risk sectors and relevant technology providers, Martinekaite said. 

According to Microsoft's chief responsible AI officer, Natasha Crampton, the company has seen an uptick in deepfakes, non-consensual imagery, and cyberbullying. During a panel discussion at the summit, she said Microsoft is focusing on tracking deceptive online content around elections, especially with several elections taking place this year.

Stefan Schnorr, state secretary of Germany's Federal Ministry for Digital and Transport, said deepfakes can potentially spread false information and mislead voters, resulting in a loss of trust in democratic institutions. 

Also: What TikTok's Content Credentials mean for you

Protecting against this also involves a commitment to safeguarding personal data and privacy, Schnorr added. He underscored the need for international cooperation and technology companies to adhere to cyber laws put in place to drive AI safety, such as the EU's AI Act. 

If allowed to proliferate unchecked, deepfakes could affect decision-making, said Zeng Yi, director of the Brain-inspired Cognitive Intelligence Lab and the International Research Center for AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences. 

Also stressing the need for international cooperation, Zeng suggested that a deepfake "observatory" should be established worldwide to drive better understanding and information exchange on disinformation, in an effort to prevent such content from running rampant across countries. 

A global infrastructure for fact-checking and countering disinformation could also help inform the general public about deepfakes, he said.

Singapore updates gen AI governance framework 

Meanwhile, Singapore has released the final version of its governance framework for generative AI, which expands on its existing AI governance framework, first introduced in 2019 and last updated in 2020. 

The Model AI Governance Framework for GenAI sets out a "systematic and balanced" approach that Singapore says addresses GenAI concerns while continuing to drive innovation. It encompasses nine dimensions, including incident reporting, content provenance, security, and testing and assurance, and provides suggestions on initial steps to take. 

At a later stage, AI Verify, the group behind the framework, will add more detailed guidelines and resources under the nine dimensions. To support interoperability, it will also map the governance framework onto international AI guidelines, such as the G7 Hiroshima Principles.

Also: Apple's AI features and Nvidia's AI training speed top the Innovation Index

Good governance is as important as innovation in fulfilling Singapore's vision of AI for good, and can help enable sustained innovation, said Josephine Teo, Singapore's Minister for Communications and Information and Minister-in-charge of Smart Nation and Cybersecurity, during her speech at the summit. 

"We need to recognize that it's one thing to deal with the harmful effects of AI, but another to prevent them from happening in the first place...through proper design and upstream measures," Teo said. She added that risk mitigation measures are essential, and new regulations that are "grounded on evidence" can result in more meaningful and impactful AI governance.

Alongside establishing AI governance, Singapore is also looking to grow its governance capabilities, such as building a center for advanced technology in online safety that focuses on malicious AI-generated online content. 

Users, too, need to understand the risks. Teo noted that it is in the public interest for organizations that use AI to understand its advantages as well as its limitations. 

Teo believes businesses should then equip themselves with the right mindset, capabilities, and tools to do so. She added that Singapore's model AI governance framework offers practical guidelines on what should be implemented as safeguards. It also sets baseline requirements on AI deployments, regardless of the company's size or resources.

According to Martinekaite, for Telenor, AI governance also means monitoring its use of new AI tools and reassessing potential risks. The Norwegian telco is currently trialing Microsoft Copilot, which is built on OpenAI's technology, against Telenor's own ethical AI principles.

Asked if OpenAI's recent tussle involving its Voice Mode had impacted her trust in using the technology, Martinekaite said major enterprises that run critical infrastructures, such as Telenor, have the capacity and checks in place to ensure they are deploying trusted AI tools, including third-party platforms such as OpenAI. This also includes working with partners such as cloud providers and smaller solution providers to understand and learn about the tools they are using. 

Telenor created a task force last year to oversee its adoption of responsible AI. Martinekaite explained that this entails establishing principles its employees must observe, creating rulebooks and tools to guide its AI use, and setting standards its partners, including Microsoft, should observe.

These are meant to ensure the technology the company uses is lawful and secure, she added. Telenor also has an internal team reviewing its risk management and governance structures to take into consideration its GenAI use. It will assess tools and remedies required to ensure it has the right governance structure to manage its AI use in high-risk areas, Martinekaite noted. 

Also: Businesses' cloud security fails are 'concerning' as AI threats accelerate

As organizations use their own data to train and fine-tune large language models and smaller AI models, Martinekaite thinks businesses and AI developers will increasingly discuss how this data is used and managed. 

She also thinks the need to comply with new laws, such as the EU AI Act, will further fuel such conversations, as companies work to ensure they meet the additional requirements for high-risk AI deployments. For instance, they will need to know how their AI training data is curated and traced. 

There is a lot more scrutiny and concern from organizations, which will want to look closely at their contractual agreements with AI developers.

