Researchers Uncover Chatbots Built Solely for Cybercrime
Most of today’s generative AI tools ship with strong guardrails. They won’t teach you how to make explosives or walk you through committing digital fraud. These rules generally hold up, and tools like Grok, Claude, or Gemini will refuse when you try to use them for anything nefarious. Unfortunately, cybercriminals often won’t take “no” for an answer. While some hackers try to jailbreak mainstream tools with clever prompts, others have taken a different route: building their own unrestricted large language models designed specifically for malicious activity.