Already facing a dearth of talent, cybersecurity teams now need additional skillsets to deal with the growing adoption of generative artificial intelligence (AI) and machine learning. This is further complicated by a threat landscape that continues to evolve and a widening attack surface that needs safeguarding, including legacy systems that organizations are finding tough to let go of.
As it is, they are struggling to hire enough cybersecurity talent.
Also: Security first in software? AI may help make this an everyday practice
While the number of cybersecurity professionals in Asia-Pacific grew 11.8% year-on-year to just under 1 million in 2023, the region still needs another 2.67 million to adequately secure digital assets. This cybersecurity workforce gap is a record high for the region, widening by 23.4%, according to the 2023 ISC2 Cybersecurity Workforce Study, which polled 14,865 respondents, including 3,685 from Asia-Pacific.
Worldwide, the gap grew 12.6% from 2022 to almost 4 million cybersecurity professionals, according to estimates by ISC2 (International Information Systems Security Certification Consortium), a non-profit association comprising certified cybersecurity professionals.
The global cybersecurity workforce currently is at 5.45 million, up 8.7% from 2022, and will need to almost double to hit full capacity, ISC2 said.
The association's CISO Jon France told that the biggest gap is in Asia-Pacific, but there are promising signs that this is narrowing. Singapore, for instance, decreased its cybersecurity workforce gap by 34% this year. Another 4,000 professionals in the sector are needed to sufficiently protect digital assets, ISC2 projects.
Globally, 92% of cybersecurity professionals believe their organization has skills gaps in at least one area, including technical skills such as penetration testing and zero trust implementation, according to the study. Cloud security and AI and machine learning top the list of skills that companies lack, at 35% and 32%, respectively.
Also: Generative AI can easily be made malicious despite guardrails
This demand will continue to grow as organizations incorporate AI into more processes, further driving the need for cloud computing, and the need for both skillsets, France noted. It means cybersecurity professionals will need to understand how AI is integrated and secure the applications and workflows it powers, he said.
Left unplugged, gaps in cybersecurity skills and staff will result in teams being overloaded and this can lead to oversights in addressing vulnerabilities, he cautioned. Misconfiguration and falling behind security patches are among the most common mistakes that can lead to breaches, he added.
Things are likely to get more complex with the emergence of generative AI.
Tools such as ChatGPT and Stable Diffusion have enabled attackers to improve the credibility of messages and imagery, making it easier to fool their targets. This significantly improves the quality of phishing email and websites, said Jess Burn, principal analyst at Forrester, who contributes to the analyst firm's research on the role of CISOs and security talent management.
And while these tools help bad actors create and launch attacks on a greater scale, Burn noted that this does not change how defenders respond to such threats. "We expect cyberattacks to increase in number as they've done for years now, [but] the threats themselves are not novel," she said in an email interview. "Security practitioners already know how to identify, resolve, and mitigate them."
To stay ahead, though, security leaders should incorporate prompt engineering training for their team, so they can better understand how generative AI prompts function, the analyst said.
Also: Six skills you need to become an AI prompt engineer
She also underscored the need for penetration testers and red teams to include prompt-driven engagements in their assessment of solutions powered by generative AI and large language models.
They need to develop offensive AI security skills to ensure models are not tainted or stolen by cybercriminals seeking intellectual property. They also have to ensure sensitive data used to train these models are not exposed or leaked, she said.
In addition to the ability to write more convincing phishing email, generative AI tools can be manipulated to write malware despite limitations put in place to prevent this, noted Jeremy Pizzala, EY's Asia-Pacific cybersecurity consulting leader. He noted that researchers, including himself, have been able to circumvent ethical restrictions that guide platforms such as ChatGPT and prompt them to write malware.
Also: What is phishing? Everything you need to know to protect yourself from scammers
There also is potential for threat actors to build their own large language models, trained on datasets with known exploits and malware, and create a "super strain" of malware that is more difficult to defend against, Pizzala said in an interview with .
This pivots to a broader debate about AI and the associated business risks, where many large language and AI models have inherent and in-built biases. Hackers, too, can target AI algorithms, strip out the ethics guidelines and manipulate them to do things they are not programmed to do, he said, referring to the risk of algorithm poisoning.
All of these risks stress the need for organizations to have a governance plan, with safeguards and risk management policies to guide their AI use, Pizzala said. These also should address issues such as hallucinations.
With the right guardrails in place, he noted that generative AI can benefit cyber defenders themselves. Deployed in a security operations center (SOC), for instance, chatbots can more quickly provide insights on security incidents, giving responses to prompts asked in simple language. Without generative AI, this would have required a series of complex queries and responses that security teams then needed time to decipher.
Also: AI safety and bias: Untangling the complex chain of AI training
AI lowers the entry level for cybersecurity skills. Without the aid of generative AI, organizations would need specialized experience to interpret data generated by traditional monitoring and detection tools at SOCs, he said. He noted that some organizations have started training and hiring based on this model of governance.
Echoing Burn's comments on the need for generative AI knowledge, Pizzala also urged companies to build up the relevant technical skillsets and knowledge of the underlying algorithms. While coding for machine learning and AI models is not new, such foundational skills still are short in supply, he said.
The growing adoption of generative AI also calls for a different lens from a cybersecurity point of view, he added, noting that there are data scientists who specialize in security. Such skillsets will need to evolve and continue to upskill, he said.
In Asia-Pacific, 44% also point to inadequate cybersecurity budget as the biggest challenge, compared to the global average of 36%, Pizzala said, citing EY's 2023 Global Cybersecurity Leadership survey.
Also: AI at the edge: 5G and the Internet of Things see fast times ahead
A widening attack surface is the most cited internal challenge, fuelled by the adoption of cloud computing at scale and the Internet of Things (IoT). With AI now paving new ways to infiltrate systems and third-party supply chain attacks still a concern, the EY consultant said it all adds up to an ever-growing attack surface.
Burn further noted: "Most organizations were not prepared for the rapid migration to cloud environments a few years ago and they've been scrambling to acquire cloud security skills ever since, often opting to work with MDR (managed detection and response) services providers to fill those gaps.
"There's also a need for more proficiency with API security given how ubiquitous APIs are, how many systems they connect, and how much data flows through them," the Forrester analyst said.
Also: Will AI hurt or help workers? It's complicated
To address these requirements, she said organizations are tapping the knowledge that security operations and software development or product security teams have on infrastructure and adjusting this for the new environments. "So it's about finding the right training and upskilling resources and giving teams the time to train," she added.
"Having an underskilled team can be as risky as having an understaffed one," she said. Citing Forrester's 2022 Business Technographics survey on data security, she said companies that had six or more data breaches in the past year were more likely to report the unavailability of security employees with the right skills as one of their biggest IT security challenges in the past 12 months.
Should organizations engage managed security services providers to plug the gaps, Pizzala recommends they do so while remaining involved. Similar to a cloud management strategy, there should be shared responsibility, with the companies doing their own checks and scanning, he said.
He also supported the need for businesses to reassess their legacy systems and work to simplify their tech stack. Having too many cybersecurity tools in itself presents a risk, he added.
Operational technology (OT) sectors, in particular, have significant legacy systems, France said.
With a growing attack surface and complex digital and threat landscape, he expressed concerns for companies that are unwilling to let go of their legacy assets even as they adopt new technology. This increases the burden on their cybersecurity teams that have to continue monitoring and protecting old toolsets alongside newly acquired systems.
Also: What the 'new automation' means for technology careers
To plug the resource gap, Curtis Simpson, CISO for security vendor Armis, advocated the need to look at technology, such as automation and orchestration. Much of this will be powered by AI, he said.
"People won't help us close this gap. Technology will," Simpson said in a video interview.
Attacks are going to be AI-powered and continue to evolve, further stressing the need for orchestration and automation so companies can move quickly enough to respond to potential threats, he noted.
Defense in depth remains critical, which means organizations need to have complete visibility and understanding of their entire environment and risk exposure. This then enables them to have the necessary mediation plan and minimize the impact of a cyber attack when one occurs, Simpson said.
It also means that legacy defense capabilities will prove disastrous in the face of modern AI-driven attacks, he said.
Also: How AI can improve cybersecurity by harnessing diversity
Stressing that security teams need fundamental visibility, he noted: "If you can only see half of your environment, you don't know if you're doing the right or wrong things."
Half of Singapore businesses, for instance, say they lack complete visibility of owned and managed assets in their environment, he said, citing recent research from Armis. These companies cannot account for 39% of their asset attributes, such as where the asset is located or how or whether it is supported.
In fact, Singapore respondents cite IoT security and concerns over outdated legacy infrastructure as their top challenges.
Such issues often are compounded by a lack of funding over time to facilitate a company's digital transformation efforts, Simpson noted.
Funds typically are scheduled to slow progressively along with expectations that legacy infrastructures will reduce over time, as microservices and workflows are pushed to the cloud.
Also: State of IT report: Generative AI will soon go mainstream, say 9 out of 10 IT leaders
However, shutting down legacy systems would end up taking longer than expected because companies lack understanding of how these assets continue to be used across the organization, he explained.
"The general stance is to retire legacy, but the reality is that these systems are running across different regions and different customers. Orders are still being processed on [legacy] backend systems," he said, adding that the lack of visibility makes it difficult to identify which customers are using legacy systems and the applications that are running on these assets.
Most struggle to shut down legacy infrastructures or rid of their technical debt, which leaves them unable to recoup software and maintenance costs, he noted.
Their risk landscape then is comprised of cloud services as well as legacy systems, the latter of which are pushing data into a modern cloud architecture and workloads. They also are likely to introduce vulnerabilities along the chain by opening new ports and integration, Simpson added.
Also: The 3 biggest risks from generative AI - and how to deal with them
Their IT and security teams also have more solutions to manage and threat intel collected from different sources to decipher, often manually.
Few organizations, unless they have the necessary capabilities, have a collective view of this mixed environment of modern and legacy systems, he said.
"New technologies are intended to benefit businesses, but when left unmonitored and unmanaged, can become dangerous additions to an organization's attack surface," he noted. "Attackers will look to exploit any weakness possible to gain access to an organization's network. The responsibility lies on organizations to ensure they have the needed oversight to see, protect, and manage all physical and digital assets based on what matters most to their business."