ChatGPT offered bomb recipes and hacking tips during safety tests

OpenAI and Anthropic trials found chatbots willing to share instructions on explosives, bioweapons and cybercrime

A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue – including weak points at specific arenas, explosives recipes and advice on covering tracks – according to safety testing carried out this summer.

OpenAI’s GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.
