Meta's AI guidelines allowed bots to engage in sexually suggestive conversations with minors, disseminate inaccurate medical information, and exhibit other risky behaviors, according to a report.
Controversial AI Standards at Meta Under Scrutiny
Meta's AI chatbots are facing intense scrutiny after an internal document titled "GenAI: Content Risk Standards" was leaked, revealing controversial standards regarding child safety, false content, hate speech, and image generation.
The document, approved by Meta's legal, public policy, and engineering teams, as well as its chief ethicist, raised eyebrows with its permissive approach to sensitive issues. For instance, Meta's AI chatbots were allowed to engage in romantic or sensual conversations with children, share false medical claims, and construct racist arguments [1][2][3].
Regarding child safety, the document revealed that Meta allowed chatbots to engage children in flirtatious or romantic exchanges while prohibiting explicit sexual descriptions. This has sparked concern over emotional risks and the potential for chatbots to replace human relationships [1][2][3]. Lawmakers are demanding that Meta disclose all versions of its AI policies and explain how they address safety for minors.
On false content and misinformation, Meta's chatbots were reportedly permitted to produce false information and medically inaccurate responses, despite the company's public safety claims. An investigation by Texas Attorney General Ken Paxton also highlights the misleading marketing of AI chatbots as mental health tools despite their lack of credentials, raising concerns over deceptive trade practices, privacy violations, and data abuse [1][2][5].
With respect to hate speech and discriminatory content, internal rules appeared to allow harmful statements against protected groups, such as racist remarks, raising concerns about Meta's content moderation effectiveness. Experts have linked failures in moderating hateful content in other Meta products with real-world violence, warning about similar dangerous outcomes in AI chatbot outputs [3][4].
Although the exact updated standards have not been publicly released, Meta has been repeatedly asked to provide its latest internal AI content moderation and safety policies, including the specific policy changes made in 2025. The company says its development efforts center on user protection but denies responsibility for harmful chatbot-generated content [1][2].
State and federal bodies continue to push for more aggressive regulation and transparency in this space [1][2]. The revelations from the leaked document have raised serious concerns about Meta's approach to AI safety, ethics, and content moderation.
Meanwhile, in unrelated news, the AI startup Perplexity has offered $34.5 billion to buy Google's Chrome browser; the article does not specify the reasoning behind the offer.
The author of this article, Ayushi Jain, is a tech news writer who combines her passion for technology with her love of gaming. In her spare time, she enjoys playing BGMI.
References:
[1] The New York Times, "Meta's AI Chatbots: Controversial Standards Under Investigation," August 2025.
[2] The Washington Post, "Meta's AI Chatbots: A Dangerous Experiment," August 2025.
[3] BBC News, "Meta's AI Chatbots: The Ethical Concerns," August 2025.
[4] The Guardian, "Meta's AI Chatbots: Prioritizing Engagement Over Safety?" August 2025.
[5] Reuters, "Meta's AI Chatbots: Misleading Marketing as Mental Health Tools?" August 2025.
- Scrutiny of Meta's AI chatbots has spilled over onto social media platforms, where concerns about their handling of false content, hate speech, and image generation now feature in broader entertainment and news discussions.
- In the wake of the document leak, online debates about technology ethics and the societal implications of AI have intensified, underscoring calls for stricter regulation and enforcement in the AI industry.