Blog
Think pieces, news and governance coverage, and research digests about all things AI safety. This blog explores the areas where AI intersects with society: technology, policy, healthcare, education, economics, and beyond.
The War of Copyright Over Training Data: Transformation or Theft?
Every time a chatbot generates an image, a short essay, or a piece of code for you, it isn't creating out of nowhere. It's drawing on millions, even billions, of scraped works whose owners often don't know their work is being used.
Lately, copyright over works used in AI training has become a major point of contention, with large corporations facing infringement claims left and right: Anthropic, Microsoft, and OpenAI are just some of the companies that have come under fire.
In this week's article, we discuss the origins of copyright, what counts as "fair use" and whether AI training qualifies as such, and how to move forward on the governance and technical fronts.
Read the full analysis on our blog.
Cyber and Bio Threats in the Age of Intelligent Systems
The infamous Unabomber, Ted Kaczynski, spent years mastering bomb-making through trial and error. How would that play out today? Someone with his intentions wouldn't need years of trial and error, only days or hours, given an internet connection and access to an AI model.
What happens when AI becomes an enabler of malicious intent? In this week's blog post, we explore the dangerous convergence of AI with biological and cyber threats. While current technology is bringing once-unbreakable barriers within reach, defense can scale too.
Join our AI Security Hackathon (Nov 21-24) to build real detection systems, automated defenses, and simulations that could prevent the next catastrophe. Build your solution in 50+ hours, and compete to win global prizes and a fully funded trip to London.
Find out more and read the full article on our blog.
The Coming AI Bubble: Lessons from the Dot-Com Crash
March 2000: portfolios up 300%, billion-dollar companies with no products, champagne flowing as people believe: "The internet will change everything."
March 2002: NASDAQ down 80%, billions evaporated overnight.
Sound familiar? In 2025, just swap "dot-com" for "AI-powered."
Our latest blog post explores the uncomfortable parallels between the two eras: 95% of enterprise AI projects fail to drive revenue, valuations exceed the 1990s peak, and even OpenAI's CEO admits we're in a bubble.
But unlike the dot-com era, AI has real deployments and fundamental value. So what happens if this bubble bursts?
Read the full analysis on our blog.
The Digital Couch: Why Everyone Is Using ChatGPT as a Therapist, and the Dangers Involved
In 2025, we see millions turning to chatbots for emotional support, trauma processing, and the comfort of having someone, or something, listen when the world rarely does.
In our newest blog post, The Digital Couch, we explore why this has become so common, the red flags it raises, and the practical guardrails we can apply to make the experience safer.
Read the full article on our blog.
AISS Kickoff Event: Anthropic’s Jan Kirchner on Scalable Oversight Research
Is AI progress accelerating or plateauing? Do we still have time to solve alignment? What does continuous progress mean for alignment timelines?
Last Thursday, AI Safety Saarland kicked off the semester with a guest lecture by Anthropic's Jan Kirchner. Over 350 people gathered to hear and discuss one of AI Safety's most critical challenges: how do we oversee systems smarter than us?
Read the full recap on our blog.