Зарегистрируйтесь сейчас для лучшей персонализированной цитаты!

Новости по теме

Adobe wants your help finding security flaws in Content Credentials and Firefly

May, 01, 2024 Hi-network.com
firefly-abstract-colorful-ai-depcition-38207-1
Generated via Adobe Firefly by Sabrina Ortiz/

Plugging security holes is vital to keeping generative artificial intelligence (AI) models safe from bad actors, harmful image generation, and other potential misuse. To ensure some of its latest and biggest AI projects are as safe as possible, Adobe on Wednesday expanded its bug bounty program, which rewards security researchers for finding and disclosing bugs, to encompass Content Credentials and Adobe Firefly.

Content Credentials are tamper-evident metadata attached to digital content that serve as a "nutrition label", letting users see the content's "ingredients," such as the creator's name, creation date, any tools used to create the image (including generative AI models), and the edits made.

Also: You should rethink using AI-generated images if you're in the trust-building business

In the era of AI-generated images, this provenance tool can help people determine synthetic from human-made content. This only works, however, if Content Credentials are tamper-proof and used as designed. Adobe is now crowdsourcing security efforts for Content Credentials via its bug bounty program to reinforce protections against potential abuses, such as incorrectly attaching credentials to the wrong content.

Some AI image generators, like Adobe Firefly, automatically attach Content Credentials to AI-generated content. Firefly is Adobe's group of generative AI models that can create images from prompts, other pictures, and more. This family of models is readily accessible to the public through a standalone web application and some of Adobe's most popular applications, including Photoshop. 

The release says Adobe wants security researchers to test Firefly against Open Worldwide Application Security Project (OWASP)'s top security risks in large language model (LLM) applications, such as prompt injection, sensitive information disclosure, and training data poisoning. Adobe will then use this feedback to focus its research and further efforts on addressing Firefly's weaknesses. 

"By proactively engaging with the security community, we hope to gain additional insights into the security posture of our generative AI technologies, which, in turn, will provide valuable feedback to our internal security program," Adobe said in its release. 

Also: Google paid out$10 million in bug bounties to security researchers in 2023

Adobe is inviting ethical hackers interested in participating in the bug bounty program to visit the Adobe HackerOne page and to apply via this form, which asks questions about the applicant's security research and expertise.

In addition to Content Credentials and Adobe Firefly, the bug bounty program is available for most Adobe web apps and desktop and mobile versions of its Creative Cloud apps. You can find the full list of included apps on the Adobe Bug Bounty Program webpage.
Oddly, while the HackerOne page lists rewards ranging from$100 to$10,000, Adobe's webpage says that "this program does not provide monetary rewards for bug submissions." It's unclear whether this refers only to Adobe's private bug bounty program. 

Separately, OpenAI also has a bug bounty program, through which security researchers can make anywhere from$200 to$20,000, depending on the type of the vulnerability.

tag-icon Горячие метки: 3. Инновации

Copyright © 2014-2024 Hi-Network.com | HAILIAN TECHNOLOGY CO., LIMITED | All Rights Reserved.