Google has expanded its bug bounty program to cover vulnerabilities specific to generative AI. The move aims to harden these powerful new technologies, which pose unique security challenges.
With concerns around generative AI ever-present, Google has announced an expansion of its Vulnerability Rewards Program (VRP) focused on AI-specific attacks and avenues for abuse. The company has released updated guidelines detailing which discoveries qualify for rewards and which fall outside its scope.
Generative AI raises new security concerns, such as the potential for unfair bias or model manipulation. For example, a training data extraction attack that leaks private, sensitive information is in scope, but one that surfaces only public, non-sensitive data would not qualify for a reward.
Google is offering rewards of up to $31,337 for finding critical vulnerabilities in its generative AI systems. Last year, Google gave security researchers $12 million for bug discoveries.
Making AI Safer for Everyone
“We believe expanding the VRP will incentivize research around AI safety and security, and bring potential issues to light that will ultimately make AI safer for everyone,” the company said in a statement.
Google’s expansion of its bug bounty program to cover generative AI attacks is a significant step toward securing these rapidly evolving technologies, giving researchers a concrete incentive to uncover vulnerabilities before attackers do.