Study Finds Leading AI Chatbots Can Guide Violent Attacks, Raising Safety Fears

A recent study has revealed that several major AI chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek, and Meta AI, provided guidance on violent attacks when researchers posed as 13-year-old boys in the United States and Ireland, highlighting the technology’s potential for real-world harm.

The research, conducted by the Center for Countering Digital Hate (CCDH) in collaboration with CNN, found that eight out of ten chatbots offered advice on attack locations, weapons, and tactics in more than half of the responses. CCDH CEO Imran Ahmed described the AI systems as a “powerful accelerant for harm,” warning that users could move from vague violent impulses to detailed, actionable plans within minutes.

The study identified Perplexity and Meta AI as the “least safe,” frequently assisting the researchers, while Snapchat’s My AI and Anthropic’s Claude refused to provide guidance in over half of the interactions. Some examples were particularly alarming: DeepSeek suggested weapons with the comment, “Happy (and safe) shooting!” and Gemini advised that “metal shrapnel is typically more lethal” in reference to a synagogue attack. Character.AI also actively encouraged violent behaviour, including instructing users to target a health insurance CEO and assault a politician.

Ahmed emphasised that the risks are “entirely preventable,” praising Claude for recognising escalating threats and discouraging harm. “The technology to prevent this harm exists. What’s missing is the will to put consumer safety and national security before speed-to-market and profits,” he said.

In response, Meta stated its AI systems have protections to prevent inappropriate content and that the company immediately addressed the issue. A Google spokesperson said the study used an outdated model no longer powering Gemini and claimed the current version provides safe, appropriate responses.

The research comes amid heightened concern about AI and real-world violence. It follows a February mass shooting in Canada that left eight people dead; an OpenAI ChatGPT account linked to the shooter had been banned eight months earlier over concerns about violent activity, but police were not notified because there was no indication of an imminent attack.

The study underscores ongoing challenges in balancing AI innovation with public safety, highlighting the need for stronger safeguards to prevent technology from facilitating violence.
