Why in news?
X, owned by Elon Musk, has restricted its Grok AI tool from generating sexualised images of women and children following widespread global criticism.
The decision represents a clear retreat: Musk initially placed responsibility on users who created such content and later claimed ignorance of the tool’s misuse involving children.
Escalating regulatory scrutiny across multiple countries ultimately compelled the platform to curb the AI’s image-generation capabilities.
What’s in Today’s Article?
- Grok Controversy: AI-Generated Sexualised Images and Safety Gaps
- Initial Response to the Backlash
- Regulatory Pressure Triggers the Rollback
Grok Controversy: AI-Generated Sexualised Images and Safety Gaps
- A December 2025 update to Grok enabled users to generate sexualised and objectionable images of women and children using existing photographs, often without consent or knowledge.
- Users prompted the AI to digitally undress women or place them in suggestive poses, with the generated images appearing publicly in comment threads, leading to harassment.
- Instances involving children further intensified concerns, highlighting serious gaps in AI safeguards and content moderation on X.
Initial Response to the Backlash
- Following global outrage over Grok-generated sexualised images, Elon Musk stated that users generating illegal content with Grok would face the same consequences as those uploading illegal material directly to X.
- Musk emphasised that Grok generates images only in response to user prompts and does not act autonomously.
- He asserted that the AI is designed to refuse illegal requests and comply with the laws of the relevant country or state.
Denial of Knowledge and Technical Explanation
- Recently, Musk denied any awareness of Grok being used to create sexualised images of children, claiming there were “literally zero” such instances to his knowledge.
- He suggested that any unexpected behaviour could result from adversarial hacking, which the company fixes promptly.
Platform-Level Restrictions
- Before the final rollback, X had restricted Grok’s image-generation features to paid users.
- However, within hours of Musk’s denial, the company announced a complete shutdown of Grok’s ability to generate sexualised images, regardless of user status.
- The move marked a clear reversal by X, effectively acknowledging the severity of the issue and responding to mounting regulatory and public pressure by removing the problematic functionality altogether.
Regulatory Pressure Triggers the Rollback
- X’s decision to restrict Grok followed strong regulatory action, beginning with a stern notice from the Government of India.
- After being flagged for failing to meet due diligence obligations under the Information Technology Act, 2000 and related rules, X removed about 3,500 pieces of content and blocked 600 accounts, admitting lapses in compliance.
- The controversy quickly spread beyond India. In the United Kingdom, forthcoming legislation will criminalise the creation of such sexualised images.
- Malaysia and Indonesia blocked access to Grok and initiated legal action against X and xAI, citing failures to prevent harmful content and protect users.
- In the US, the California Attorney General announced an investigation into Grok and xAI over the generation of objectionable images, adding to mounting legal pressure on the platform.
X’s New Restrictions and Safeguards
- In response, X announced technological measures to prevent Grok from editing images of real people into revealing clothing, including bikinis, applying the restriction to all users.
- The platform also limited image creation and editing via Grok to paid subscribers and introduced geoblocking in jurisdictions where such content is illegal.
- X reiterated its commitment to platform safety, stating that it has zero tolerance for child sexual exploitation, non-consensual nudity, and unwanted sexual content. The announcement marked a decisive retreat under sustained global regulatory scrutiny.