NurPhoto / Contributor / Getty Images
Follow ZDNET: Add us as a preferred source on Google.
ZDNET's key takeaways
The new program focuses on vulnerabilities related to AI products.
Rewards range from $500 to $30,000.
Aaims to tackle past confusion concerning in-scope bugs and problems.
Google has launched a new bug bounty program aimed at addressing security flaws and bugs in products related to artificial intelligence (AI).
Also: Google just gave older smart home devices a useful upgrade for free - including these Nest models
On Monday, Google security engineering managers Jason Parsons and Zak Bennett said in a blog post that the new program, an extension of the tech giant's existing Abuse Vulnerability Reward Program (VRP), will incentivize researchers and bug bounty hunters to focus on "high-impact abuse issues and security vulnerabilities" in Google products and services.
Researchers have earned more than $430,000 since 2023, when Google's bug bounties expanded to include AI-related issues. Now, it is hoped that a standalone program will encourage even more reports -- which could be crucial for the tech giant as it continues to integrate AI into its digital product suite.
What qualifies as an acceptable AI-related bug bounty?
Google has separated potentially acceptable reports into the following areas:
Rogue actions : Attacks that modify accounts or data with a security impact. For example, the use of an indirect prompt to force Google Home to unlock a door.
: Attacks that modify accounts or data with a security impact. For example, the use of an indirect prompt to force Google Home to unlock a door. Sensitive data theft : Attacks leading to the theft of sensitive user data. These could include indirect prompt injections that send email summaries to a threat actor without user consent.
: Attacks leading to the theft of sensitive user data. These could include indirect prompt injections that send email summaries to a threat actor without user consent. Phishing enablement : Phishing attack vectors on Google websites that include persistent, cross-user HTML injections.
: Phishing attack vectors on Google websites that include persistent, cross-user HTML injections. Model theft : Security problems that could allow attackers to steal complete, confidential model parameters, such as exposed Google APIs.
: Security problems that could allow attackers to steal complete, confidential model parameters, such as exposed Google APIs. Context manipulation : Issues leading to the persistent manipulation of an AI environment without significant user interaction.
: Issues leading to the persistent manipulation of an AI environment without significant user interaction. Access control bypass: Attacks leading to data exfiltration from resources that shouldn't be accessible.
In addition, Google will consider reports detailing AI-related issues such as unauthorized product usage, cross-user denial of service, and other forms of abuse.
Also: Google may shift to risk-based Android security patch rollouts - what that means for you
Products included in the new bug bounty program include Gemini, Google Search, AI Studio, and Google Workspace.
There are some caveats
The Google engineers have been careful to point out specific out-of-scope items. These include jailbreaks, content-based issues, and AI hallucinations. The team noted at the end of last year that while some of these areas are of great interest to researchers, there can be difficulties in replicating the findings. For example, a jailbreak may only impact a user's own session.
Also: This fundamental Android feature is 'absolutely not' going away, says Google - but it is changing
"The team is aware of the community interest and continues to reassess our program scope around these issues," Google said.
Furthermore, issues found in Vertex AI or other Google Cloud products are not in scope for this program and should be reported via the company's Google Cloud VRP.
Payouts
Reports accepted by Google provide different financial rewards and incentives, with payouts for most reports ranging from $500 to $20,000. For example, a bug bounty describing a severe rogue action could earn a researcher up to $10,000, whereas an access control bypass might pay out up to $2,500.
Also: Your Android phone's most powerful security feature is off by default and hidden - turn it on now
However, more cash may be on offer depending on the quality of reports and the "novelty" factor of reported vulnerabilities. The new program adopts the same approach as Google's wider VRP, and a bonus of up to $10,000 -- bringing the total to $30,000 -- for novel attacks is available.
"We're excited to be launching this new program, and we hope our valued researchers are too!" the engineers said.
Want more stories about AI? Check out AI Leaderboard, our weekly newsletter.