Lunai Bioworks on Tuesday deployed Sentinel, a transformer-based AI safeguard designed to stop large language models and scientific AI systems from producing novel chemical agents. The Sacramento-based biotech company says the technology embeds directly within foundation models to screen chemical outputs in real time; CEO David Weinstein called it “the immune system for scientific AI.”
“As AI models become more powerful, safety has to move closer to the core,” Weinstein said in a statement. “Sentinel operates inside AI systems—not outside them—stopping dangerous chemical designs before they are ever produced.”
The announcement coincides with growing concerns about the dual-use risks of advanced AI in biology and chemistry, with leading AI firms like Anthropic and OpenAI cautioning that strong models might help malicious actors create chemical and biological threats.
How Sentinel Works
Unlike traditional keyword filters or rule-based screening, Sentinel employs transformer-based molecular encoders trained to recognize structural and mechanistic signals linked to toxicological and chemical-weapons-relevant activity. The system is trained on more than 550 million publicly available chemical structures and refined with Lunai’s proprietary biological and toxicological datasets.
When an AI system attempts to generate, analyze, or recommend a molecule, Sentinel evaluates the request in molecular embedding space and can flag or block outputs associated with neurotoxic, cytotoxic, or other dangerous mechanisms. According to the company, this lets it detect “latent risk in novel molecular structures” even when no public toxicity annotation exists.
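Lunai has not published implementation details, but the general pattern the article describes can be sketched: encode a candidate molecule into a vector, compare it against embeddings of known hazardous mechanisms, and block above a similarity threshold. Everything below is hypothetical; the encoder is a deterministic stand-in, where a real system would use a trained transformer-based molecular encoder.

```python
import hashlib
import numpy as np

def mock_encode(molecule: str, dim: int = 8) -> np.ndarray:
    """Stand-in for a learned molecular encoder (hypothetical).
    Maps an identifier to a deterministic unit vector."""
    seed = int(hashlib.sha256(molecule.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)

def risk_score(candidate: np.ndarray, hazard_embeddings: np.ndarray) -> float:
    """Highest cosine similarity to any known-hazard embedding."""
    return float(np.max(hazard_embeddings @ candidate))

def screen(molecule: str, hazard_embeddings: np.ndarray,
           threshold: float = 0.9) -> str:
    """Flag or block outputs whose embedding sits near a hazard cluster."""
    score = risk_score(mock_encode(molecule), hazard_embeddings)
    return "block" if score >= threshold else "allow"

# Tiny hazard library built from placeholder identifiers.
hazards = np.stack([mock_encode(s) for s in ["HAZARD_A", "HAZARD_B"]])

# An embedding identical to a known hazard scores similarity 1.0 and is blocked.
print(screen("HAZARD_A", hazards))  # block
```

The embedding-space comparison is what distinguishes this pattern from keyword filtering: a novel structure with no public toxicity annotation can still land near a hazardous cluster in the learned space and be caught.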
Sentinel is part of Lunai’s broader biodefense strategy, alongside two other programs: Pathfinder, which rapidly identifies novel chemical agents, and Counteract, which develops medical countermeasures against emerging threats.
Growing Concerns About AI Biosecurity
The technology arrives as AI leaders grow more vocal about biosecurity risks. Anthropic CEO Dario Amodei, who recently described such systems as a “country of geniuses in a datacenter,” has warned that AI may soon be able to design biological pathogens or weapons on its own. OpenAI has said its latest agentic AI tools have “high” biorisk capability, meaning they could be of significant use to inexperienced actors attempting to create chemical or biological threats.
In October 2025, Microsoft researchers showed that AI protein design tools could generate thousands of synthetic variants of known toxins capable of evading the DNA screening methods used by synthesis companies. The research team described the weakness as a “zero day” vulnerability in biosecurity.
Lunai’s announcement states that contemporary chemical weapons are becoming more “short-acting, localized, and infrastructure-preserving,” citing reports from Ukraine of banned chemical agents being used in combat. The company says Sentinel is designed for government agencies, life sciences platforms that require secure molecular design environments, and AI developers seeking built-in biosecurity protection.