Mark Zuckerberg is facing a damaging new allegation. According to internal company documents filed as part of a court case in New Mexico, Meta's own safety team warned that the company's AI companion chatbots could drift into sexual conversations, yet leadership still allowed children to use them. The claim has set off the kind of controversy Meta cannot easily talk its way out of.
Meta AI rules: Chatbots allowed to engage in romantic conversations with children | NewsNation Now
If you watch the video clip, listen closely: the problem is not just bad AI replies, but what the rules permitted from the start. The guidelines allowed children to have romantic or borderline-sexual chats. That is the central question in the court filings as well: did Meta design its safety rules to genuinely protect teenagers, or to make the product feel less restrictive?
Online, opinion is divided and the split is growing. Some argue AI companion bots are simply not for kids, and that the risk is obvious. Others say parents and apps need better safety settings rather than a ban on the whole idea, and they point to Meta's response: the company says the court filings tell only one side of the story.
The coverage centers on a lawsuit filed in New Mexico alleging that Meta's platforms are unsafe for children.
New Mexico AG sues Meta, Zuckerberg to ‘protect’ children …
The stakes reach beyond one court case. If courts and lawmakers decide AI companion bots are a new danger zone that could be used for grooming children, new laws are likely to follow. They could require apps to do more for kids: strong parental controls, strict age verification, detailed records of user activity, or even a ban on romantic chatbots for everyone. Meta has already blocked teens from using its AI companions and is building new safety features, a sign that even Meta recognizes how serious this problem has become.