
All eyes on cyberdefense as elections enter the generative AI era

Apr 08, 2024 | Hi-network.com
Image: wildpixel/Getty Images

As countries prepare to hold major elections in a new era marked by generative artificial intelligence (AI), humans will be prime targets of hacktivists and nation-state actors.

Generative AI may not have changed how content spreads, but it has increased the volume of that content and undermined its accuracy.

Also: How OpenAI plans to help protect elections from AI-generated mischief

The technology has helped threat actors generate better phishing emails at scale to access information about a targeted candidate or election, according to Allie Mellen, principal analyst at Forrester Research. Mellen's research covers security operations and nation-state threats as well as the use of machine learning and AI in security tools. Her team is closely tracking the extent of misinformation and disinformation in 2024. 

Mellen noted the role social media companies play in safeguarding against the spread of misinformation and disinformation to avoid a repeat of the 2016 US elections.

Almost 79% of US voters said they are concerned about AI-generated content being used to impersonate a political candidate or create fraudulent content, according to a recent study released by Yubico and Defending Digital Campaigns. Another 43% said they believe such content will harm this year's election outcomes. Conducted by OnePoll, the survey polled 2,000 registered US voters to assess the impact of cybersecurity and AI on the 2024 election campaigns.

Also: How AI will fool voters in 2024 if we don't do something now

Respondents were played an audio clip recorded using an AI voice, and 41% said they believed the voice to be human. Some 52% said they had received an email or text message that appeared to be from a campaign but that they suspected was a phishing attempt.

"This year's election is particularly risky for cyberattacks directed at candidates, staffers, and anyone associated with a campaign," Defending Digital Campaigns president and CEO Michael Kaiser said in a press release. "Having the right cybersecurity in place is not an option -- it's essential for anyone running a political operation. Otherwise, campaigns risk not only losing valuable data but losing voters."

Noting that campaigns are built on trust, David Treece, Yubico's vice president of solutions architecture, added in the release that potential hacks, such as fraudulent emails or deepfakes that interact directly with audiences on social media, can undermine campaigns. Treece urged candidates to take proper steps to protect their campaigns and to adopt cybersecurity practices that build trust with voters.

Also: How Microsoft plans to protect elections from deepfakes

Increased public awareness of fake content is also key, since humans are the last line of defense, Mellen said.

She further underscored the need for tech companies to be aware that securing elections is not simply a government issue, but a broader national challenge that every organization in the industry must consider. 

Above all, governance is critical, she said. Not every deepfake or social-engineering attack can be identified, but organizations can mitigate the impact through proper gating and processes that prevent an employee from, say, sending money to an external source.

"Ultimately, it's about addressing the source of the problem, rather than the symptoms," Mellen said. "We should be most concerned about establishing proper governance and [layers of] validation to ensure transactions are legit." 

At the same time, she said we should continue to improve our capabilities in detecting deepfakes and generative AI-powered fraudulent content.
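As a rough illustration of the kind of gating and validation Mellen describes, the sketch below shows a payment request that must clear several independent checks before funds can move. The account list, review threshold, and field names are hypothetical assumptions for the example, not details from any specific product or from Forrester's guidance.

    # Minimal sketch of a payment-gating workflow (hypothetical policy values).
    # The goal: no single employee, however convincingly phished or deepfaked,
    # can send money to an external account on their own say-so.
    from dataclasses import dataclass
    from typing import Optional

    KNOWN_ACCOUNTS = {"ACME-001", "ACME-002"}   # vetted beneficiary accounts (assumed)
    REVIEW_THRESHOLD = 10_000                   # amount that triggers extra scrutiny (assumed)

    @dataclass
    class PaymentRequest:
        requester: str
        beneficiary_account: str
        amount: float
        callback_verified: bool            # confirmed via a known phone number, not by replying to email
        second_approver: Optional[str]     # an independent employee who signed off

    def approve(req: PaymentRequest) -> bool:
        """Return True only if the request clears every gate."""
        if req.beneficiary_account not in KNOWN_ACCOUNTS:
            return False                   # unknown external accounts are blocked by default
        if not req.callback_verified:
            return False                   # the request must be verified out of band
        if req.amount >= REVIEW_THRESHOLD and req.second_approver in (None, req.requester):
            return False                   # large transfers need a separate, independent approver
        return True

    # A phished request to a new account fails at the first gate.
    print(approve(PaymentRequest("alice", "EVIL-999", 50_000, False, None)))  # False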

Also: Google to require political ads to reveal if they're AI-generated 

Attackers that leverage generative AI technologies are mostly nation-state actors, with others mainly sticking to attack techniques that already work. Nation-state threat actors are more motivated to gain scale in their attacks, she said, and want to push forward with new technologies and ways to access systems they otherwise could not reach. If these actors can push out misinformation, it can erode public trust and tear societies apart from within, she cautioned.

Generative AI to exploit human weakness

Nathan Wenzler, chief security strategist at cybersecurity company Tenable, said he agreed with this sentiment, warning that there will probably be increased efforts from nation-state actors to abuse trust through misinformation and disinformation. 

While his team hasn't noticed any new types of security threats this year with the emergence of generative AI, Wenzler said the technology has enabled attackers to gain scale and scope.

This capability lets nation-state actors exploit the public's blind trust in what it sees online and its willingness to accept it as fact, and they will use generative AI to push content that serves their purpose, Wenzler said.

The technology's ability to generate convincing phishing emails and deepfakes has also cemented social engineering as a viable catalyst for launching attacks, Wenzler said.

Also: Facebook bans political campaigns from using its new AI-powered ad tools

Cyber-defense tools have become highly effective at plugging technical weaknesses, making IT systems harder to compromise. Threat actors realize this, he said, and are choosing an easier target.

"As the technology gets harder to break, humans [are proving] easier to break and GenAI is another step [to help hackers] in that process," he noted. "It'll make social engineering [attacks] more effective and allows attackers to generate content faster and be more efficient, with a good success rate."

If cybercriminals send out 10 million phishing emails, even a 1% improvement in how convincingly the content persuades targets to click yields an additional 100,000 victims, he said.

"Speed and scale is what it's about. GenAI is going to be a major tool for these groups to build social-engineering attacks," he added.

How concerned should governments be about generative AI-powered risks? 

"They should be very concerned," Wenzler said. "It goes back to an attack on trust. It's really playing into human psychology. People want to trust what they see and they want to believe each other. From a society standpoint, we don't do a good enough job questioning what we see and being vigilant. And it's getting harder now with GenAI. Deepfakes are getting incredibly good."

Also: AI boom will amplify social problems if we don't act now, says AI ethicist

"You want to create a healthy skepticism, but we're not there yet," he said, noting that it would be difficult to remediate after the fact since the damage is already done, and pockets of the population would have wrongly believed what they saw for some time.

Eventually, security companies will create tools, such as deepfake detection, that can address this challenge effectively as part of an automated defense infrastructure, he added.

Large language models need protection

Organizations should also be mindful of the data used to train AI models. 

Mellen said training data in large language models (LLMs) should be vetted and protected against malicious attacks, such as data poisoning. Tainted AI models can generate false outputs.
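One minimal way to operationalize that vetting, offered purely as an illustrative sketch (the expected checksum, file layout, and filtering rules are placeholder assumptions, not a description of any vendor's pipeline), is to pin an approved training set to a known checksum and drop records that fail basic sanity checks before any training run:

    # Illustrative sketch: verify training-data integrity before fine-tuning.
    # The expected hash, file layout, and filtering rules are hypothetical.
    import hashlib
    import json

    EXPECTED_SHA256 = "<checksum recorded when the dataset was approved>"

    def file_sha256(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def load_vetted_records(path: str) -> list:
        # Refuse to train on data that has changed since it was vetted.
        if file_sha256(path) != EXPECTED_SHA256:
            raise ValueError("dataset checksum mismatch: possible tampering or poisoning")
        records = []
        with open(path, encoding="utf-8") as f:
            for line in f:                       # one JSON record per line (assumed format)
                rec = json.loads(line)
                # Drop records that fail simple sanity checks on source and length.
                if rec.get("source") == "trusted-corpus" and 0 < len(rec.get("text", "")) < 10_000:
                    records.append(rec)
        return records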

Sergy Shykevich, Check Point Software's threat intelligence group manager, also highlighted the risks around LLMs, including the larger AI models that support major platforms, such as OpenAI's ChatGPT and Google's Gemini.

Nation-state actors can target these models to gain access to the underlying engines and manipulate the responses that generative AI platforms produce, Shykevich said. They can then influence public opinion and potentially change the course of elections.

With no regulation yet governing how LLMs should be secured, he stressed the need for transparency from the companies operating these platforms.

Also: Real-time deepfake detection: How Intel Labs uses AI to fight misinformation

Because generative AI is still relatively new, it can also be challenging for administrators to manage such systems and to understand why or how responses are generated, Mellen said.

Wenzler noted that organizations can mitigate risks using smaller, more focused, and purpose-built LLMs to manage and protect the data used to train their generative AI applications. 

While there are benefits to ingesting larger datasets, he recommended businesses look at their risk appetite and find the right balance.

Wenzler urged governments to move more quickly and establish the necessary mandates and rules to address the risks around generative AI. These rules will provide the direction to guide organizations in their adoption and deployment of generative AI applications, he said.

