
It's Still Ludicrously Easy to Jailbreak the Strongest AI Models, and the Companies Don't Care

Summary by Futurism
You wouldn't use a chatbot for evil, would you? Of course not. But if you or some nefarious party wanted to force an AI model to start churning out a bunch of bad stuff it's not supposed to, it'd be surprisingly easy to do so. That's according to a new paper from a team of computer scientists at Ben-Gurion University, who found that the AI industry's leading chatbots are still extremely vulnerable to jailbreaking, or being tricked into giving ha…

4 Articles (1 Left, 1 Center)

Bias Distribution

  • 50% of the sources lean Left, 50% of the sources are Center

Tech.co broke the news on Wednesday, May 21, 2025.