SUB-INITIATIVE OF AISS

AI Security

A community for students and researchers working at the intersection of cybersecurity and AI safety. We explore the technical and policy problems that arise when advanced AI systems become targets, tools, or threats.

Why AI Security

As AI systems become more capable and widely deployed, the security questions around them change in kind, not just degree. The threats are concrete, and they sit at the intersection of machine learning, systems security, hardware, and international governance.

Confidentiality

Model weights can be stolen and used without safety constraints, including by actors with no access to the original developer's oversight infrastructure.

Authorization

Once models are open-source or disconnected, the guardrails enforced by APIs no longer apply. Preventing misuse then requires different technical approaches.

Integrity

Training pipelines can be poisoned to introduce hidden backdoors, enabling adversaries to trigger harmful behavior after deployment in critical systems.

Control

Once models are open-source or disconnected, the guardrails enforced by APIs no longer apply. Preventing misuse then requires different technical approaches.


Who this is for

  • Students with a background or strong interest in cybersecurity, computer systems, or machine learning

  • Those who want to understand AI safety from a security-engineering perspective, not just a theoretical one

  • Students curious about how technical design shapes policy and governance outcomes

  • Anyone considering research or a career at the boundary of AI and security

You do not need to be an expert. You do need to be willing to engage seriously with technical material and think carefully about hard tradeoffs.


Our Initiatives

Critical Problems in AI Security

In-person reading group · 6 weeks · Saarland University

A cohort-based reading group working through a curated set of research frameworks on the most critical and under-explored problems in AI security. Each session focuses on a distinct threat model, combining assigned reading with structured in-person discussion. We keep it small to make substantive conversation possible.

Learn about:

  • Confidentiality & Integrity: Model weight protection and backdoor defence

  • Authorisation & Compute: Misuse prevention and verification

  • Labor & deployments: Untrusted AI protocols and rogue detection

Logistics:

  • Starts: Wednesday, May 13th

  • Time: 5:00 PM – 6:30 PM

  • Duration: 6 weeks

  • Format: In-person, application required

AI Security Afternoon

Bi-annual event · Expert talks & student networking

Once a semester, we bring together researchers, practitioners, and students for a half-day event. The format combines invited talks from people working on these problems professionally with time for students to connect and ask questions. Details for the next edition will be announced here.

AI Security Research Sprints

Moving from reading to doing

Structured sprint formats to support students who want to move from reading about AI security to working on it. More details to be announced.