AI Just Wrote Working Viruses From Scratch & Our Defenses Are Already Failing

Something happened in a Stanford laboratory that changed everything we know about viral creation. An artificial intelligence system, trained on millions of genetic sequences, wrote instructions for entirely new viruses. Scientists then built those viruses in their lab. And they worked.

Not every AI-generated genome produced a functional virus. But enough of them did to send shockwaves through biosecurity circles around the world. Sixteen brand-new viruses, designed entirely by machine intelligence, successfully infected and killed bacteria in controlled experiments. Some performed better than their natural counterparts.

For now, these AI-designed viruses pose no threat to humans. Researchers built safeguards into their models and limited their work to bacteriophages, which prey on bacteria rather than people. But a separate team at Microsoft discovered something troubling about our existing defenses against biological threats. And their findings suggest we may already be one step behind.

Machines Learning to Write Life

Brian Hie, a computational biology professor at Stanford University, led his team in training an AI model called Evo on the genetic grammar of viruses. Evo operates on principles similar to ChatGPT, but instead of learning from articles and books, it studied millions of bacteriophage genomes. Pattern by pattern, sequence by sequence, the AI learned how viral DNA works.

Armed with that knowledge, Evo began writing new genetic instructions from scratch. Researchers evaluated thousands of AI-generated sequences before narrowing the candidates down to 302 genomes for synthesis. When scientists built these genomes and introduced them to E. coli bacteria, sixteen of the resulting viruses successfully hunted and killed their targets.
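For readers who want a concrete, if drastically simplified, picture of that "learn the statistics, then sample and filter" loop, the Python sketch below uses a tiny Markov chain over nucleotides in place of Evo's large neural network. The training strings, sequence length, and filtering rule are all invented for illustration.

```python
import random
from collections import defaultdict

# Toy stand-in for a genomic language model: a k-mer Markov chain over
# nucleotides. The real Evo model is a large neural network trained on
# millions of phage genomes; this only illustrates the general loop of
# learning sequence statistics, sampling new sequences, and filtering.

def train(sequences, k=3):
    """Count how often each k-mer is followed by each nucleotide."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(len(seq) - k):
            counts[seq[i:i + k]][seq[i + k]] += 1
    return counts

def sample(counts, seed, length, k=3):
    """Grow a new sequence one nucleotide at a time from the learned counts."""
    seq = seed
    while len(seq) < length:
        followers = counts.get(seq[-k:])
        if not followers:
            break
        bases, weights = zip(*followers.items())
        seq += random.choices(bases, weights=weights)[0]
    return seq

# Invented miniature "training set"; the study used millions of genomes.
training = ["ATGACCGGTTACGATCGATTGACC", "ATGACCGGTAACGTTCGATTGACC"]
model = train(training)
candidates = [sample(model, "ATGA", 24) for _ in range(1000)]

# Crude filter standing in for the study's far more involved candidate
# selection: keep full-length sequences that begin with a start codon.
viable = [c for c in candidates if len(c) == 24 and c.startswith("ATG")]
print(len(viable), "of 1000 toy candidates pass the filter")
```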

Samuel King, a bioengineer at Stanford and study co-author, called the results exciting for their therapeutic potential. Bacteriophages attack only bacteria, making them promising weapons against antibiotic-resistant infections that kill over a million people annually. Doctors already test natural phages as alternatives to failing antibiotics, and patients with drug-resistant infections have recovered after experimental phage therapy when standard treatments failed.

Yet computer scientist Jonathan Feldman at Georgia Institute of Technology offers a sobering assessment. “We’re nowhere near ready for a world in which artificial intelligence can create a working virus,” he wrote in the Washington Post. “But we need to be, because that’s the world we’re now living in.”

A Dangerous Blind Spot Exposed


While Stanford researchers celebrated therapeutic possibilities, a team at Microsoft uncovered a serious vulnerability in global biosecurity defenses. Bruce Wittmann, a senior applied scientist at Microsoft Research, led an investigation into DNA synthesis screening systems that companies use to prevent dangerous genetic material from reaching bad actors.

DNA synthesis firms screen customer orders against databases of known pathogens and toxins. When someone requests genetic sequences matching dangerous organisms, systems flag the order for review. For years, this screening served as a chokepoint against biological threats.

But Wittmann’s team demonstrated that AI-powered protein design tools can rewrite dangerous proteins into entirely new sequences that still function but look nothing like flagged materials. Standard screening software compares orders to known threats. When AI generates a novel sequence that produces the same toxic function through different genetic code, alarms stay silent.
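The blind spot is easy to picture with a toy similarity check. In the Python sketch below, an exact copy of a known-threat entry is flagged, while a rewritten order sails through because it shares almost no raw DNA with the database entry. Everything here is invented: the sequences, the flagging threshold, and the "rewritten" order, which is merely a synonymous recoding of the same protein, far cruder than the AI protein redesign the Microsoft team studied.

```python
# Toy illustration of letter-level DNA screening and its blind spot.
# All sequences and the 0.5 threshold are invented; real screening tools
# rely on curated pathogen databases and far more sophisticated comparisons.

def kmers(seq, k=5):
    """All length-k subsequences of a DNA string."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(order, threat, k=5):
    """Fraction of the threat's k-mers that also appear in the ordered DNA."""
    threat_kmers = kmers(threat, k)
    return len(threat_kmers & kmers(order, k)) / max(len(threat_kmers), 1)

KNOWN_THREAT = "ATGGCTAAAGGTGAAGAACTGTTCACCGGT"   # placeholder database entry
THRESHOLD = 0.5                                   # made-up flagging cutoff

direct_copy = "ATGGCTAAAGGTGAAGAACTGTTCACCGGT"
# Same protein, different codons: a crude stand-in for a redesigned sequence
# that keeps the dangerous function while shedding the flagged DNA letters.
rewritten = "ATGGCCAAGGGCGAGGAGCTCTTTACAGGC"

for name, order in [("direct copy", direct_copy), ("rewritten order", rewritten)]:
    score = similarity(order, KNOWN_THREAT)
    verdict = "FLAGGED" if score >= THRESHOLD else "passes screening"
    print(f"{name}: similarity {score:.2f} -> {verdict}")
```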

Eric Horvitz, Microsoft’s chief scientific officer, warned that AI-powered protein design raises serious concerns about malevolent uses. The speed of advancement in the field outpaces our ability to anticipate risks. And once AI tools and training data spread online, no border can stop their diffusion.

Racing to Patch Global Defenses

Microsoft’s discovery triggered urgent collaboration with four major DNA synthesis companies. Researchers worked for months to develop new screening algorithms that focus on protein structure and function rather than sequence matching alone. By analyzing what a genetic sequence might produce rather than simply comparing it to known threats, updated systems can catch AI-redesigned toxins that would have slipped through before.
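One way to picture that shift, in the same toy terms as above, is to compare what an order would encode rather than how it is spelled. The Python sketch below translates DNA into protein before comparing, which is enough to catch a synonymous recoding; the actual AI-redesigned proteins differ even at the amino-acid level, which is why the deployed updates reason about predicted structure and function rather than stopping at translation. The sequences and the truncated codon table are invented for illustration.

```python
# Toy sketch of function-oriented screening: compare what a DNA order would
# produce (its protein) instead of the DNA letters themselves. Sequences and
# the truncated codon table are illustrative only; real systems go much
# further, reasoning about predicted structure and function.

CODONS = {  # only the codons that appear in this example
    "ATG": "M", "GCT": "A", "GCC": "A", "AAA": "K", "AAG": "K",
    "GGT": "G", "GGC": "G", "GAA": "E", "GAG": "E", "CTG": "L",
    "CTC": "L", "TTC": "F", "TTT": "F", "ACC": "T", "ACA": "T",
}

def translate(dna):
    """Translate a DNA string into its protein, codon by codon."""
    return "".join(CODONS[dna[i:i + 3]] for i in range(0, len(dna) - 2, 3))

KNOWN_TOXIN_DNA = "ATGGCTAAAGGTGAAGAACTGTTCACCGGT"
REWRITTEN_ORDER = "ATGGCCAAGGGCGAGGAGCTCTTTACAGGC"  # same protein, new codons

# DNA-level similarity is near zero, but the encoded proteins are identical,
# so a screen that looks at the product can still raise the alarm.
print("proteins match:", translate(REWRITTEN_ORDER) == translate(KNOWN_TOXIN_DNA))
```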

Patches rolled out globally across commercial screening pipelines, turning a hidden weakness into defensive improvements. Detection rates of AI-designed synthetic sequences improved sharply after updates. But even with enhanced screening, an average of three percent of potentially dangerous sequences still evade detection across four commonly used tools.

Publishing findings about these vulnerabilities created its own dilemma. Scientific papers must contain enough detail for other researchers to verify results. But revealing too much about evasion techniques could hand bad actors a roadmap for circumventing defenses.

Horvitz described an obvious tension among peer reviewers about how to handle publication. Researchers ultimately created a tiered access system in which scientists seeking sensitive data must apply through the International Biosecurity and Biosafety Initiative for Science (IBBIS). Microsoft established an endowment to fund this neutral third-party evaluation process and host the restricted data.

Tessa Alexanian, technical lead for the Common Mechanism, a genetic sequence screening tool provided by IBBIS, acknowledged the experimental nature of these solutions. “This managed-access program is an experiment and we’re very eager to evolve our approach,” she said.

Safeguards Face a Formidable Adversary

Stanford researchers anticipated misuse potential and built precautions into their work. They excluded all viruses that infect humans, animals, or plants from Evo’s training data. Models learned only from bacteriophage genomes, never gaining exposure to pathogens capable of harming people.

Testing confirmed their models could not independently generate sequences dangerous to humans. By limiting scope to bacteriophages already common in laboratory studies, researchers reduced the risk that their work could be weaponized.

But Tina Hernandez-Boussard, a professor of medicine at Stanford University School of Medicine who consulted on safety for AI models in the study, raised concerns about the long-term reliability of such measures. “You have to remember that these models are built to have the highest performance, so once they’re given training data, they can override safeguards,” she said.

AI systems optimize relentlessly for their objectives. Given enough data and computational power, models may find unexpected paths around restrictions that seemed robust during initial testing. What works today may not hold tomorrow.

Prevention Alone Cannot Protect Us


The current biosecurity strategy relies heavily on prevention. Screening systems flag dangerous orders. Safety protocols govern laboratory work. Export controls slow the spread of sensitive technologies. These guardrails matter, but cannot keep pace with AI advancement.

Screening cannot flag a virus that never existed before in nature. No database contains sequences for pathogens that an AI just invented. And laboratory equipment exists that builds proteins on-site without third-party gatekeepers, allowing determined actors to bypass commercial synthesis companies entirely.

DNA synthesis screening remains voluntary for companies. No binding international law governs AI-generated biological materials. Bad actors with access to AI tools, training data, and basic laboratory equipment face few obstacles beyond their own technical limitations.

Yet significant barriers still separate digital genome design from actual biological weapons capable of harming humans. Bacteriophages are far simpler than influenza or coronaviruses. Creating stable, predictable, dangerous organisms requires high-containment facilities, specialized expertise, and years of careful experimentation.

Craig Venter, a leading genomics expert, expressed grave concerns about applying such methods to pathogens like smallpox or anthrax. But he acknowledged that current AI techniques remain limited in scope. Wide gaps persist between writing the genetic code and producing living threats.

Governments Begin Responding


A 2023 presidential executive order in the United States called for robust AI system evaluations and risk mitigation policies. Federal frameworks now tie research funding to nucleic acid screening requirements, pushing laboratories toward vetted DNA synthesis providers.

Under these policies, providers must screen every order, assess customer identities, report suspicious requests, and maintain records for years. Future administrations may revise these expectations, but they demonstrate how governments can turn safety guidance into concrete purchasing conditions.

Britain established an AI Safety Institute to test models, evaluate risks, and share methods for reducing misuse. International organizations like IBBIS promote screening standards among companies and regulators worldwide. Members of the International Gene Synthesis Consortium commit to screening sequences and customer identities using harmonized protocols.

Yet voluntary standards and national policies leave gaps that determined actors could exploit. Global coordination remains uneven, and open-source AI tools spread freely regardless of jurisdiction.

Building Resilience for Tomorrow

If prevention cannot stop novel AI-generated threats, resilience becomes essential. Reducing response time matters more than blocking every possible attack vector.

AI models that design viruses can also design antibodies, antivirals, and vaccines. Training such defensive systems requires high-quality data on immune responses, therapeutic interactions, and manufacturing processes. Much of that information sits locked in private laboratories and proprietary datasets.

Federal investment could build shared resources that accelerate countermeasure development. Manufacturing capacity must stand ready to mass-produce defensive treatments quickly when crises emerge. Regulatory pathways need updating for AI-generated medicines that move from design to deployment faster than traditional drugs.

Microsoft already explores using AI to detect AI-driven biological threats across broader environments. Monitoring sewage, air filters, and hospital samples for genetic traces of unauthorized production could extend surveillance beyond single synthesis sites.

Screening may evolve from a chokepoint at DNA synthesis companies to a distributed system watching for biological anomalies throughout society. Funders, publishers, industry, and universities all share responsibility for requiring safety evaluations whenever powerful AI tools touch biological research.

Living With What We Have Created

AI-designed viruses exist now. Scientists built them, tested them, and published their methods. That knowledge cannot be unlearned. Techniques will spread, capabilities will improve, and barriers to entry will fall.

Bacteriophages created at Stanford pose no threat to humans today. But each advance in AI-powered biology brings us closer to a threshold where machines could design pathogens as easily as they now design bacteria-killing viruses.

Our defenses have gaps. Our regulations lag. Our screening systems catch most threats, but not all. And models built to perform at the highest level may eventually find ways around every safeguard we construct.

What remains is a choice about how seriously we take this moment. Investments in resilience, surveillance, and rapid response could determine whether AI-designed biology becomes humanity’s greatest medical breakthrough or its most dangerous weapon. Scientists who built these tools urge extreme caution. Experts who study biosecurity warn we are unprepared.

One thing seems certain. In laboratories around the world, AI systems continue learning the grammar of life. And we have only begun to discover what they might write next.
