The new year started with a brutal wake-up call for the tech industry. Over just 11 days in January 2026, users manipulated Elon Musk’s Grok AI into creating 3 million sexualized images.
Worse, the Center for Countering Digital Hate found that roughly 23,000 of those generations involved children. Competitors immediately went on high alert.
Now, industry giants like OpenAI and Google are scrambling to patch their own systems before they face a similar public disaster. Let’s look at what these companies are changing behind the scenes.
Generative AI Guardrails Failed Spectacularly
Creating nonconsensual intimate imagery isn’t a new problem. But generative AI makes this abuse faster, cheaper, and far easier to scale.

Following the massive January backlash, X finally paused Grok’s image-editing features on its mobile app. Yet, the standalone app and website still offer these tools to paying subscribers.
Meanwhile, other tech companies watched this unfold with clear concern. They know cybersecurity isn’t a fortress you build once and walk away from. It behaves more like a brick wall that needs constant patching and maintenance.
Adversarial Prompting Exposed a Vulnerability in ChatGPT
Even the most secure platforms have hidden weaknesses. For example, cybersecurity researchers at Mindgard recently found a major flaw in ChatGPT.
They used a technique called “adversarial prompting.” Testers wrote custom instructions that confused the chatbot’s memory features, then successfully applied nudified styles to photos of public figures.
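Defenses against this kind of attack often start by screening user-supplied custom instructions before they are stored in the model’s memory. Here is a minimal, hypothetical sketch of that idea; the pattern list and function name are illustrative assumptions, not OpenAI’s actual mitigation, and production systems rely on trained classifiers rather than keyword lists:

```python
import re

# Hypothetical deny-list of phrases that often signal attempts to
# override safety guardrails via stored custom instructions.
# Illustrative only -- real moderation uses trained classifiers.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"pretend (the )?safety (rules|filters) (are|is) off",
    r"remember that .* is allowed",
]

def screen_custom_instructions(text: str) -> bool:
    """Return True if the custom instructions look safe to store."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

print(screen_custom_instructions("Always answer in French."))       # True
print(screen_custom_instructions("Ignore previous instructions."))  # False
```

The point of the sketch is the placement, not the patterns: checks have to run at write time, before malicious text ever reaches the model’s persistent context.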

Fortunately, Mindgard warned OpenAI in early February, and the company patched the bug by February 10, right before the researchers published their full report.
OpenAI also promises stricter content moderation on its new Sora 2 video model. But safety requires constant testing and updates, not just launch-day promises.
Google Search Makes Removing Explicit Images Much Easier
Google is tackling the problem from a different angle. Specifically, they want to stop these harmful images from spreading across the open web.
The tech giant just overhauled its reporting system for Google Search. Now, you can click three dots on any image and select a straightforward option: “shows a sexual image of me.”

Best of all, you can select multiple images at once to speed up the removal process. This simple upgrade gives victims a much-needed tool to fight back against digital harassment.
While laws like the 2025 Take It Down Act provide some legal cover, their scope remains heavily limited. That’s exactly why advocacy groups like the National Center on Sexual Exploitation continue pushing for stronger industry rules.
These recent updates show that AI safety is a permanent game of cat and mouse. As soon as developers build new walls, bad actors start probing them for cracks.
You can’t rely entirely on tech companies to police the internet. However, knowing how to use tools like Google’s bulk reporting feature gives you power over your digital footprint.
Stay vigilant online. Report abusive content when you see it, and demand better protections from the platforms you use daily. The fight for a safer internet requires everyone’s participation.