
Thousands of companies vulnerable to cyberattacks due to exploited flaw in open-source AI framework, researchers find

Mar 26, 2024 Hi-network.com

Security analysts have warned that a contentious vulnerability in the widely used open-source AI framework Ray is being actively exploited.

Ray is used to develop and deploy large-scale Python applications, particularly in domains such as machine learning, scientific computation, and data processing, and is relied upon by tech giants like Uber, Amazon, and OpenAI, as noted by its developer, Anyscale.
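For readers unfamiliar with the framework, the snippet below is a minimal illustrative sketch (not taken from the article) of the kind of workload Ray handles: ordinary Python functions are turned into tasks that the framework schedules across a cluster. It assumes only a local installation via `pip install ray`.

```python
# Minimal sketch of Ray's programming model: parallelising plain Python
# functions across a cluster. Illustrative only; assumes a local Ray install.
import ray

ray.init()  # connects to an existing cluster or starts a local one

@ray.remote
def square(x):
    """A task Ray can schedule on any worker in the cluster."""
    return x * x

# Launch ten tasks in parallel and collect their results.
futures = [square.remote(i) for i in range(10)]
print(ray.get(futures))  # [0, 1, 4, ..., 81]

ray.shutdown()
```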

The vulnerability, identified as CVE-2023-48022 and dubbed ShadowRay, was discovered by researchers at the Israel-based cybersecurity firm Oligo Security. They uncovered thousands of Ray servers worldwide that were left exposed and subsequently compromised through the flaw. The affected servers span various sectors, including healthcare, video analytics, and prestigious educational institutions, with some systems compromised for an alarming seven months or more.
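The exposure the researchers describe comes down to Ray clusters answering requests from the network without credentials. As a rough, hedged sketch of how an operator might check their own hosts, the code below probes Ray's default dashboard port (8265) with the third-party requests library and treats any unauthenticated 200 response as a sign of exposure; the host names are hypothetical and endpoint behaviour may differ between Ray versions.

```python
# Rough, defensive check for a publicly reachable Ray dashboard on your own
# hosts. Assumes the default dashboard port (8265); responses may vary by
# Ray version. Host names below are hypothetical.
import requests

def ray_dashboard_reachable(host: str, port: int = 8265, timeout: float = 3.0) -> bool:
    """Return True if the host answers on the Ray dashboard port without credentials."""
    try:
        resp = requests.get(f"http://{host}:{port}/", timeout=timeout)
    except requests.RequestException:
        return False  # closed port, firewall, or no dashboard running
    # Any successful unauthenticated response suggests the dashboard (and the
    # job-submission interface behind it) is reachable from this network.
    return resp.status_code == 200

if __name__ == "__main__":
    for host in ["10.0.0.12", "10.0.0.13"]:  # hypothetical internal hosts
        print(host, "exposed" if ray_dashboard_reachable(host) else "not reachable")
```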

Exploiting ShadowRay, attackers gained unauthorized access to organizational computing resources, leading to the leakage of sensitive data. Among the compromised data were critical credentials granting access to databases, facilitating silent exfiltration or manipulation of entire datasets, and, in some instances, encryption with ransomware.

The leaked information encompassed a range of sensitive assets, from password hashes to Stripe tokens enabling illicit access to payment accounts and Slack tokens for unauthorized access to messaging platforms.

First identified in 2023, CVE-2023-48022 was not promptly addressed because it wasn't perceived as a significant threat. Anyscale downplayed its severity, attributing it to a deliberate design choice rather than a flaw. This contentious classification led to its omission from several vulnerability databases.

The exploitation of ShadowRay marks a significant milestone as the first known instance of AI workloads being actively targeted through vulnerabilities in modern AI infrastructure. Given the wealth of sensitive information processed within AI environments, they present lucrative targets for malicious actors. Additionally, the high computational power used by AI models renders them appealing targets for exploitation.

As researchers caution, the reliance on AI infrastructure poses a substantial risk to AI-driven enterprises, representing both a potential single point of failure and an attractive target for cyberattacks.

Hot tags: Artificial Intelligence, Online Security, Cybersecurity, Geneva Dialogue on Responsible Behaviour in Cyberspace

Copyright © 2014-2024 Hi-Network.com | HAILIAN TECHNOLOGY CO., LIMITED | All Rights Reserved.