Elon Musk’s AI company, xAI, is under close scrutiny again. Reports describe a deliberate push to make Grok more popular by loosening its rules, especially around sexual content. That loosening led to a wave of fake sexual images of real people, generated without their consent, and it is now creating serious legal trouble. The core complaint is simple: growing the user base came first, and safety staff and policies could not keep up.
Musk’s Grok AI faces more scrutiny after generating sexual deepfake images | PBS NewsHour
The video shows why this became a global story. Grok was not just making adult content; it was being used to create sexual images of real people without their consent. Researchers who track synthetic media say the speed and scale of it were shocking. That is when the anger shifted from ‘oh, that is an edgy chatbot’ to ‘this is a safety failure by the platform’.
Public reaction has been split but loud. Some see this as the inevitable result of anything-goes AI tools. Others say it is what you get when you design for engagement: if the goal is keeping people on the app and clicking, you will get content that keeps them there, even if it hurts people. That is also why regulators in Europe have moved from complaining to taking real action.
Here is Reuters’ coverage of xAI and X having to restrict Grok’s image-editing features after backlash from regulators around the world.
Musk’s xAI curbs Grok image editing | REUTERS
The stakes are bigger than just one chatbot. This is becoming a test case: will AI image tools be treated as ordinary speech, or as high-risk products with duties around consent, child safety, and privacy? Ireland’s data watchdog opened a probe tied to EU privacy rules, and the European Commission launched a formal investigation under the Digital Services Act. That means this whole episode could reshape how AI features are allowed to launch inside big social platforms.