Elon Musk’s Grok didn’t just “go viral” on X; it allegedly turned into a machine pumping out sexualized images at a scale people can’t stop arguing about.
In a matter of days, the platform was flooded with AI-edited posts that many say should never have been public in the first place.
Elon Musk’s chatbot under investigation in Europe for ‘spicy mode’ | DW News
When you watch the clip, pay attention to how this feature spread: users weren’t just generating images, they were prompting Grok to alter real photos and then posting the results publicly.
That’s the part driving the backlash: the content wasn’t hidden in private chats; it was sitting out in the open where anyone could see it.
Reactions were sharply divided. Some online argued the episode proves the safety rules don’t work; others said the platform should penalize the people doing harm rather than block the AI tool entirely. Victims and critics demanded the images be removed and called for official investigations, saying the speed and sheer volume of the fakes made it feel like a factory was producing them.
A new report from the UK says regulators are now investigating X over the sexually explicit AI images generated by Grok.
UK investigating X over Grok’s sexually explicit AI images
The bigger stake now is what happens when a mass-platform feature can generate and distribute this kind of content faster than moderation can keep up, especially across countries with different laws and enforcement.
That’s why regulators are circling, and why X’s rule changes are being scrutinized: critics say the “fixes” came only after the flood had already happened.