
White House unveils AI rules to address safety and privacy

November 13, 2023 | Hi-network.com

The Biden administration today announced a new effort to address the risks around generative artificial intelligence (AI), which has been advancing at breakneck speeds and setting off alarm bells among industry experts.

Vice President Kamala Harris and other administration officials are scheduled to meet today with the CEOs of Google, Microsoft, OpenAI (creator of the popular ChatGPT chatbot), and AI startup Anthropic. Administration officials plan to discuss the "fundamental responsibility" those companies have to ensure their AI products are safe and protect the privacy of US citizens as the technology becomes more powerful and capable of independent decision-making.

"AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks," the White House said in an statement. "President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy."

This new effort builds on previous attempts by the Biden administration to promote some form of responsible innovation, but to date Congress has not advanced any laws that would rein in AI. In October, the administration unveiled a blueprint for a so-called "AI Bill of Rights" as well as an AI Risk Management Framework; more recently, it has pushed for a roadmap for standing up a National AI Research Resource.

The measures don't have any legal teeth; they are just more guidance, studies, and research, "and they're not what we need now," according to Avivah Litan, a vice president and distinguished analyst at Gartner Research.

"We need clear guidelines on development of safe, fair and responsible AI from the US regulators," she said. "We need meaningful regulations such as we see being developed in the EU with the AI Act. While they are not getting it all perfect at once, at least they are moving forward and are willing to iterate. US regulators need to step up their game and pace."

In March, Senate Majority Leader Chuck Schumer, D-NY, announced plans for rules around generative AI as ChatGPT surged in popularity. Schumer called for increased transparency and accountability involving AI technologies.

The United States has been a follower in pursuing AI rules. Earlier this week, the European Union unveiled the AI Act, a proposed set of rules that would, among other things, require makers of generative AI tools to publicize any copyrighted material used by the platforms to create content. China has led the world in rolling out several initiatives for AI governance, though most of those initiatives relate to citizen privacy and not necessarily safety.

Included in the White House initiatives is a plan for the National Science Foundation to spend $140 million on creating seven new research centers devoted to AI.

The administration also said it received an "independent commitment from leading AI developers," including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles - on an evaluation platform developed by Scale AI - at the AI Village at DEFCON 31.

"This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align with the principles and practices outlined in the Biden-Harris Administration's Blueprint for an AI Bill of Rights and AI Risk Management Framework," the White House said.

Tom Siebel, CEO of enterprise AI application vendor C3 AI and founder of CRM software provider Siebel Systems, said this week that there's a case to be made that AI vendors could regulate their own products, but that in a capitalist, competitive system they're unlikely to be willing to rein in the technology.

"I'm afraid we don't have a very good track record there; I mean, see Facebook for details," Siebel told an audience at MIT Technology Review's EmTech conference. "I'd like to believe self-regulation would work, but power corrupts and absolute power corrupts absolutely."

The White House announcement comes after tens of thousands of technologists, scientists, educators, and others put their names on a petition calling for OpenAI to pause further development of ChatGPT for six months; the chatbot currently runs on the GPT-4 large language model (LLM).

Technologists are alarmed by AI's rapid rise from improving tasks such as online search to being able to generate realistic prose and working code from simple prompts, and to create video and photos that are nearly indistinguishable from actual images.
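
For readers unfamiliar with what "simple prompts" look like in practice, here is a minimal sketch using the openai Python package as it existed in mid-2023 (the ChatCompletion interface has since been superseded); the prompt text and the environment-variable name are illustrative assumptions, not anything specified in the announcement.

```python
# A minimal sketch of prompt-driven generation, assuming the pre-1.0
# openai Python package (circa mid-2023). The prompt and the
# OPENAI_API_KEY variable name are illustrative assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# One short natural-language prompt is enough to get back working code.
response = openai.ChatCompletion.create(
    model="gpt-4",  # the LLM the article says ChatGPT currently runs on
    messages=[
        {"role": "user",
         "content": "Write a Python function that reverses a string."},
    ],
)

# The generated text (prose or code) arrives in the first choice.
print(response["choices"][0]["message"]["content"])
```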

Earlier this week, Geoffrey Hinton, known as "the godfather of AI" for his work in the field over the past 50 or so years, announced his resignation from Google, where he was an engineering fellow. In conjunction with his resignation, he spoke to The New York Times about the existential threats posed by AI.

Yesterday, Hinton spoke at the EmTech conference and expounded on just how dire the consequences are, and how little can be done because industries and governments are already competing to win the AI war.

"It's as if some genetic engineers said, we're going to improve grizzly bears; we've already improved them with an IQ of 65, and they can talk English now, and they're very useful for all sorts of things. But we think we can improve the IQ to 210," Hinton told an audience of about 400 at the school.

AI can be self-learning, and it becomes exponentially smarter over time. Eventually, instead of needing human prompting, it will begin thinking for itself. Once that happens, there's little that can be done to stop what Hinton believes is inevitable: the extinction of humans.

"These things will have learned from us by reading all the novels that ever where and everything Machiavelli ever wrote [about] how to manipulate people," he said. "And if they're much smarter than us, they'll be very good at manipulating us. You won't realize what's going on. You'll be like a two-year-old who's being asked, 'Do you want the peas or the cauliflower,' and doesn't realize you don't have to have either. And you'll be that easy to manipulate."

Hinton said his "one hope" is that competing governments, such as the US and China, can agree that giving AI unfettered rein is bad for everyone. "We're all in the same boat with respect to the existential threat, so we all ought to be able to cooperate on trying to stop it," Hinton said.

Others at the MIT event agreed. Siebel described AI as more powerful and dangerous than the invention of the steam engine, which brought about the industrial revolution.

AI, Siebel said, will soon be able to mimic without detection any kind of content already created by human beings - news reports, photos, videos - and when that happens, there'll be no easy way to determine what's real and what's fake.

"And, the deleterious consequences of this are just terrifying. It makes an Orwellian future look like the Garden of Eden compared to what is capable of happening here," Siebel said. "It might be very difficult to carry on a free and open democratic society. This does need to be discussed. It needs to be discussed in the academy. It needs to be discussed in government."

Margaret Mitchell, chief ethics scientist at machine learning app vendor Hugging Face, said generative AI applications such as ChatGPT can be developed for positive uses, but any powerful technology can also be used for malicious aims.

"That's called dual use," she said. "I don't know that there's a way to have any sort of guarantee any technology you put out won't have dual use."

Regina Sam Penti, a partner at international law firm Ropes & Gray LLP, told MIT conference attendees that both the companies creating generative AI and the organizations purchasing and using their products face legal liability. But most lawsuits to date have targeted LLM developers.

With generative AI, most of the issues center on data use, according to Penti, because LLMs consume massive amounts of data and information "gathered from all corners of the world."

"So, effectively, if you are creating these systems, you are likely to face some liability," Penti said. "Especially if you're using large amounts of data. And it doesn't matter whether you're using the data yourself or getting it from a provider."
