Tech giant Google has recently announced an expansion of its Vulnerability Rewards Program (VRP), inviting external researchers to identify potential security flaws in its generative AI software. This move is in line with Google’s continued commitment to fortifying the security of its AI-based offerings amidst the ongoing expansion of AI applications across various products and services.
In a recent blog post, Google executives Laurie Richardson and Royal Hansen emphasized the critical need for robust security measures in the realm of generative AI, citing concerns about potential biases, model manipulation, and data misinterpretations associated with these advanced AI systems. Acknowledging the evolving nature of security threats, they underscored the significance of external researchers in identifying and addressing vulnerabilities, leading to the enhancement of their VRP program.
The VRP initiative encompasses a wide array of potential attack scenarios, spanning from adversarial prompt exploitation to unauthorized model access and behavioral manipulation. Google’s engineering team, including Eduardo Vela, Jan Keller, and Ryan Rinaldi, stressed the program’s comprehensive scope, with rewards varying based on the severity and type of identified security risks.
Google’s move follows a series of steps to bolster the security of its generative AI products, including the Bard chatbot and Lens image recognition technology, within its extensive product portfolio. In tandem with the AI bug bounty program, Google also announced its collaboration with the Open Source Security Foundation, leveraging initiatives such as Supply Chain Levels for Software Artifacts (SLSA) and Sigstore to strengthen the AI supply chain’s security framework.
As the technological landscape continues to evolve, Google remains dedicated to fostering a secure AI ecosystem, demonstrating its commitment to proactive measures to safeguard the integrity and reliability of its AI-based solutions.