
Researchers Warn AI Can Design Zero-Day Biology Threats With Deadly Toxins
AI continues to permeate daily life, with many companies finding new ways to integrate the technology into their products. AI can be a double-edged sword, however, and nefarious actors continually find new ways to leverage it in troubling ways. A new study by researchers at Microsoft highlights another of the technology's potential dangers.
A “zero day” vulnerability is a cybersecurity term for an undiscovered, unpatched flaw in software that can lead to serious exploits. The Microsoft researchers are now using the term to describe a similar problem in the biosecurity systems used to screen orders for synthetic DNA.
The researchers used generative AI to design genetic sequences capable of producing dangerous toxins and pathogens. Biotech companies typically have screening systems in place to prevent someone from ordering the DNA for such harmful proteins. The sequences the researchers submitted, however, were structured in a way that bypassed those safeguards.
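The kind of bypass described above can be illustrated with a deliberately simplified sketch. Everything here is hypothetical (the blocklist fragment, the function name, and the exact-match approach are illustrative, not the actual screening tools the study tested): a screen that only matches known dangerous sequences verbatim will miss a variant that encodes the very same protein through different codons.

```python
# Toy illustration of a naive biosecurity screen (hypothetical, not the
# study's method): it flags an order only if it contains an exact copy
# of a blocklisted DNA fragment.

BLOCKLIST = {"ATGGCTTCT"}  # hypothetical flocked fragment encoding Met-Ala-Ser

def naive_screen(order: str) -> bool:
    """Return True if the order contains a blocklisted fragment verbatim."""
    return any(fragment in order for fragment in BLOCKLIST)

# The original fragment is caught by the exact match...
original = "ATGGCTTCT"   # ATG-GCT-TCT -> Met-Ala-Ser
print(naive_screen(original))   # True: blocked

# ...but synonymous codon substitutions encode the same amino acids
# (ATG=Met, GCA=Ala, TCA=Ser) while evading the string comparison.
variant = "ATGGCATCA"
print(naive_screen(variant))    # False: slips past the screen
```

This is the structural weakness the researchers probed: any screen keyed to known sequence patterns can, in principle, be sidestepped by a functionally equivalent sequence it has never seen.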
Microsoft has informed the US government and the affected companies, but there’s still the possibility that malicious sequences can sneak past even updated systems. Adam Clore, director of technology R&D at Integrated DNA Technologies and one of the report’s coauthors, says “this isn’t a one-and-done thing. It’s the start of even more testing,” and that “we’re in something of an arms race.”
While Microsoft points to weaknesses in the safety systems implemented by biotech companies that synthesize DNA, others think it’s the wrong approach to focus on this particular link in the chain. Michael Cohen, an AI safety researcher at the University of California, Berkeley, notes how tenuous the fix will be: “We’re going to have to retreat from this supposed choke point, so we should start looking around for ground that we can actually hold.” Instead, Cohen believes that safeguards should be built into generative AI models themselves to prevent harmful sequences from being generated in the first place. Hopefully experts can come together to find a solution to this issue.