In recent months governments around the world have begun to converge around one solution to managing the risks of generative AI: red teaming.
How to Red Team a Gen AI Model
The harms that generative AI systems create often differ from other forms of AI in both scope and scale.
January 04, 2024
Summary.
Red teaming, a structured testing effort to find flaws and vulnerabilities in an AI system, is an important means of discovering and managing the risks posed by generative AI. The core concept is trusted actors simulate how adversaries would attack any given system. The term was popularized during the Cold War when the U.S. Defense Department tasked “red teams” with acting as the Soviet adversary, while blue teams were tasked with acting as the United States or its allies. In this article, the author shares what his specialty law firm has discovered what works and what doesn’t in red teaming generative AI.