Australia Cracks Down on AI Misuse on X with New Safety Measures

Editorial Staff

January 14, 2026

Australia's eSafety regulator has raised alarms over misuse of the generative AI system Grok on the social media platform X, following reports of AI-generated sexual content, much of it involving children. Although the number of reports remains low, the regulator has noted a recent uptick in incidents. In response, eSafety has asked X for more information about the safeguards it has in place to prevent misuse of generative AI and to ensure compliance with existing online safety laws.

Under the Online Safety Act, the Australian government has the authority to enforce laws targeting harmful content, including child sexual exploitation material. Under broader industry regulations, platforms such as X are required to detect and remove this content. The regulator's concerns form part of a wider effort to hold AI services accountable for the risks they pose, particularly to children.

In addition to addressing current issues, Australia is preparing to implement new mandatory safety codes in March 2026. These codes will introduce stricter requirements for AI services to limit children’s exposure to harmful content, such as explicit material, violence, and self-harm. The eSafety regulator is stressing the need for "Safety by Design" measures and greater international collaboration among online safety regulators to tackle these challenges.

This increased scrutiny follows earlier enforcement actions in 2025 that led to the removal of some AI services from the Australian market. As global concerns about generative AI grow, Australia is reinforcing its commitment to protecting children and ensuring that AI technologies are used responsibly.