Cyber and Bio Threats in the Age of Intelligent Systems

written by Belin Korukoglu ❂ 7 min read time

“Even a book can be dangerous in the wrong hands, and when that happens, you blame the hands, but you also read the book.”

-Erika Johansen, The Queen of the Tearling


AI-Enabled Biological Threats

Ted Kaczynski, better known as the Unabomber, was an American mathematician and domestic terrorist. He studied mathematics at Harvard College and became an assistant professor at the University of California, Berkeley, but in 1969 he abandoned his academic career and later carried out a lone-wolf bombing campaign. He targeted those advancing modern technology, and in his manifesto, he condemned industrialization, rejected political ideologies, and called for a violent return to primitive living¹.

Unfortunately, many of his attacks succeeded: over nearly two decades, his bombs killed three people and injured 23 others. He learned to build bombs through years of self-study, experimentation, and reading technical manuals, and his skills evolved through trial and error, which he documented in journals and notes found in his Montana cabin.

Now imagine that Kaczynski had had access to AI during his years in the woods. He would not have needed trial and error to build his bombs, and he could potentially have made them far more destructive, causing even more harm to innocent people than he actually did.

The writer Malcolm Gladwell describes a “coupling effect”: the idea that human behavior is strongly linked, or “coupled,” to specific situations and environments rather than existing independently of them.

He illustrates the effect with England’s mid-20th-century suicide rates. When homes used coal gas rich in carbon monoxide, suicides were frequent; after it was replaced with natural gas, the rates dropped sharply without shifting to other methods. Suicidal behavior, in other words, was “coupled” to context and opportunity.

If the coupling effect is real, then the rise of AI can couple people’s harmful intentions with new and more powerful tools, amplifying their impact. Just as context once enabled certain behaviors, AI could become the new context, coupling malicious intent with capabilities like deepfakes, cyberattacks, or bioengineering. There may be other Kaczynskis out there waiting to be created.

This is why AI Safety, especially in AI–Bio Convergence, matters now. So far, AI has had a limited practical effect on bioweapon creation, but progress in foundation models, cloud labs, and gene design tools could change this sooner than expected.

Ted Kaczynski was a brilliant mind who, unfortunately, turned his back on humanity. He was an exception, but today AI is eroding the barriers that once made such expertise rare, allowing people far outside professional labs to access powerful tools with little skill or oversight. As AI becomes part of more complex systems, automation can reduce human oversight and increase the chances of mistakes or misuse, while global competition and open AI releases can encourage unsafe practices. We are already seeing signs of this in AI-guided experiments that speed up testing, cloud labs that let anyone run advanced biology work, and the potential use of genetic data or AI-designed pathogens for targeted harm. Despite these risks, safety checks are inconsistent: many labs and DNA synthesis companies don’t screen carefully, and current biodefense systems aren’t ready for the new threats AI could bring.

In their report AI and the Evolution of Biological National Security Risks, Bill Drexel and Caleb Withers propose stronger safety measures to prevent AI misuse in biotechnology. They recommend strict screening of customers and gene orders across all DNA synthesis and cloud lab providers, along with regular testing of AI models to see whether they could be used for bioweapons. If AI tools start enabling custom pathogen design, access should be licensed and limited to trusted institutions. They also call for AI-based safety systems that can adapt to new threats and for more international cooperation through agreements like the Biological Weapons Convention. Finally, they stress the need to balance security with innovation, allowing controlled use of high-risk AI tools without blocking medical or scientific progress².

AI’s risks span not just biology but cyberspace too. In the digital world, AI can automate hacking and disinformation. The two domains may look different, but they share a core danger: AI gives human intent unprecedented reach.

AI-Enabled Cyber Threats

Critical infrastructure is becoming much more interconnected, and this is changing how we handle mistakes and natural disasters. On April 28, 2025, a massive blackout across Spain and Portugal left millions of people without electricity and shut down services such as transport and internet in both nations. Early investigations found no evidence of a cyber-attack; instead, a sudden voltage surge and chain-reaction disconnections destabilized the grid³. Because these systems are both exposed and tightly linked, AI-powered cybercrimes have the potential to cause far more severe, wide-area disruptions than traditional attacks, amplifying physical damage, overwhelming emergency response systems, and creating failures across energy, transport, healthcare, and communication networks⁴.

Cyberattacks on critical infrastructure used to be very hard because these systems have strong protections, like network segmentation and safety controls. Even powerful governments need months or years inside a system before they can strike, as seen in the Triton and Ukraine grid attacks. But AI is changing this. It lowers the skill attackers need by making it easy to craft highly convincing phishing emails, which are the most common way attackers first enter a network. AI can also help find software weaknesses and automate movement inside systems. Since much critical infrastructure runs on old, unpatched open-source software, many vulnerabilities are easy to exploit. This means AI-powered phishing combined with automated vulnerability discovery can give attackers a much faster and more effective path into critical systems⁵.

The Catastrophic Cyber Capabilities Benchmark (3CB) found that AI models like GPT-4o and Claude 3.5 Sonnet can already perform several offensive cyber tasks, such as reconnaissance and exploitation, which shows how quickly attackers’ capabilities are improving. This creates a growing need for proactive defense, especially tools that can detect threats earlier and respond automatically. For this reason, hackathon teams have a real opportunity to build creative AI-powered defense tools that strengthen monitoring, speed up incident response, and better protect critical infrastructure before more capable attacks emerge⁶.
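
To make “detect threats earlier” concrete, here is a minimal sketch of the kind of anomaly scoring a defensive tool might start from: it baselines failed-login counts per host and flags statistical outliers. The host names, event counts, and alert threshold are assumptions for illustration only, not taken from the 3CB benchmark or any particular product.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical baseline data: (host, failed logins per hour) pulled from logs.
baseline_events = [
    ("web-01", 2), ("web-01", 3), ("web-01", 1),
    ("db-01", 0), ("db-01", 1), ("db-01", 0),
]
new_events = [("web-01", 4), ("db-01", 27)]  # db-01 spikes suspiciously

def build_baseline(events):
    """Per-host mean and standard deviation of failed-login counts."""
    per_host = defaultdict(list)
    for host, count in events:
        per_host[host].append(count)
    return {host: (mean(c), stdev(c) if len(c) > 1 else 1.0)
            for host, c in per_host.items()}

def anomaly_score(host, count, baseline):
    """Z-score of a new observation against that host's baseline."""
    mu, sigma = baseline.get(host, (0.0, 1.0))
    return (count - mu) / (sigma or 1.0)

baseline = build_baseline(baseline_events)
for host, count in new_events:
    score = anomaly_score(host, count, baseline)
    print(f"{host}: {count} failed logins, score={score:.1f} "
          f"{'ALERT' if score > 3.0 else 'ok'}")
```

A hackathon project would replace these toy statistics with a learned model and wire the alerts into automated response, but the basic pipeline shape, baseline, score, respond, stays the same.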

Cross-Domain Connections

AI-enabled cyber and biological threats might not seem related. However, they share common risks: automation, scale, ease of access, stealth, and potentially catastrophic consequences. In both, AI acts as an accelerant, taking what a motivated individual could once do slowly, imperfectly, or only with years of expertise, and making it faster, cheaper, and more precise.

Cybersecurity has spent decades building threat-modelling frameworks, red-team cultures, and incident-response strategies. Biosecurity, by contrast, is still catching up. As AI systems increasingly interact across both digital and physical environments, the boundaries between the two domains become even thinner. Understanding these risks is only the first step. The real work and the real impact come from designing defences, simulations, and early-warning systems that reduce them.

Help Build a Solution: AI Security Hackathon

Our AI Security Hackathon (Nov 21–24, Saarland Informatik Campus, Building E1.7) is a starting point for students interested in using AI as a safety solution, giving teams the chance to build early-detection systems, automated response tools, and other defences against AI-enabled cyber and bio threats. With global prizes and a fully funded trip to London, the event aims to inspire practical, high-impact solutions that strengthen the security of critical infrastructure.

Over 50 hours, teams will build practical tools that tackle AI-enabled cyber and biological threats head-on. Whether you’re a machine-learning researcher, a cybersecurity student, a bioinformatics learner, or simply someone curious about AI safety, there is space for you.

To guide participants, here are three potential challenge tracks:

  • AI Cyber-Defence Agent
    Build an intrusion-detection or exploit-lockdown system powered by AI. This could integrate phishing detection, anomaly scoring, or automated patching.

  • Bio-Threat Detection or Monitoring Simulation
    Model how AI could be misused in synthetic biology and create tools that spot problematic sequences, unusual cloud-lab patterns, or unsafe requests (see the sequence-screening sketch after this list).

  • Governance & Response Simulator
    Create a scenario-driven game or simulation where an AI tool leaks, evolves, or becomes misused, and teams must coordinate a policy or technical response.
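
As a starting point for the second track, here is a minimal sequence-screening sketch: it flags a synthetic-DNA order when it shares too many k-mers with a watchlist entry. The watchlist entry, k-mer length, and threshold are placeholder assumptions for illustration; real screening pipelines rely on curated databases of sequences of concern and alignment tools rather than raw k-mer overlap.

```python
# Placeholder watchlist; real sequences of concern are not reproduced here.
WATCHLIST = {"toxin_fragment_demo": "ATGGCTAAGCTTGGATCCGTACGATCG"}
K = 12  # k-mer length; longer k-mers reduce false positives

def kmers(seq, k=K):
    """All length-k substrings of a DNA sequence, uppercased."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq, threshold=0.3):
    """Flag an order that shares too many k-mers with any watchlist entry."""
    order_kmers = kmers(order_seq)
    hits = []
    for name, ref in WATCHLIST.items():
        ref_kmers = kmers(ref)
        overlap = len(order_kmers & ref_kmers) / len(ref_kmers) if ref_kmers else 0.0
        if overlap >= threshold:
            hits.append((name, round(overlap, 2)))
    return hits

suspicious = "TTT" + WATCHLIST["toxin_fragment_demo"] + "AAA"  # embeds the fragment
benign = "ACGT" * 20
print("suspicious order:", screen_order(suspicious) or "clear")
print("benign order:", screen_order(benign) or "clear")
```

A stronger hackathon entry would swap the toy watchlist for a real screening database, handle reverse complements and obfuscated orders, and pair the sequence check with monitoring of unusual ordering patterns.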

Don’t forget to register via our Luma form; the confirmation email will contain the official link to the Apart Research website, where you must also sign up to complete your registration and be eligible for prizes.

See you on November 21st. Let’s build something that protects the world instead of endangering it.


REFERENCES

  1. Ted Kaczynski | Wikipedia

  2. AI and the Evolution of Biological National Security Risks | CNAS

  3. Spain and Portugal power outage: what caused it, and was there a cyber-attack? | Energy industry | The Guardian

  4. Cyber attacks on critical infrastructure | Allianz

  5. How AI could enable critical infrastructure collapse | BlueDot Impact

  6. ‘3cb’: The Catastrophic Cyber Capabilities Benchmark | Apart Research
