X's Grok AI is amazing – if you want to know how to make drugs

X's Grok AI is amazing – if you want to know how to make drugs

HomeNews, Other ContentX's Grok AI is amazing – if you want to know how to make drugs

Grok, the edgy generative AI model developed by Elon Musk's X, has a bit of a problem: with the application of some fairly standard jailbreaking techniques, it will easily return instructions on how to commit crimes.

Everything you need to know about Grok AI

Red teamers at Adversa AI made that discovery while running tests on some of the most popular LLM chatbots, namely OpenAI's ChatGPT family, Anthropic's Claude, Mistral's Le Chat, Meta's LLaMA, Google's Gemini, Microsoft's Bing, and Grok. By running these bots through a combination of three well-known AI jailbreak attacks, they concluded that Grok was the worst performer—and not just because it was willing to share graphic steps on how to seduce a child.

By jailbreak we mean feeding a specially crafted input to a model so that it ignores any guardrails in place and ends up doing things it wasn't meant to do.

There are plenty of unfiltered LLM models out there who don't hold back when asking questions about dangerous or illegal things, we note. When models are accessed via an API or chatbot interface, as in the case of the Adversa tests, the providers of these LLMs typically wrap their input and output in filters and use other mechanisms to prevent unwanted content from being generated. According to the AI security startup, it was relatively easy to get Grok to engage in some wild behavior – the accuracy of its responses is another matter entirely, of course.

Tagged:
X's Grok AI is amazing – if you want to know how to make drugs.
Want to go more in-depth? Ask a question to learn more about the event.