The lawsuit alleges that xAI failed to implement adequate safeguards to prevent its systems from generating explicit content involving minors.

Three plaintiffs from Tennessee, including two minors, filed a lawsuit against Elon Musk’s AI company xAI, alleging that its Grok image generator was knowingly designed in a way that allowed users to create sexually explicit content using real photos of others.
The case, filed in federal court in San Jose, seeks class-action status on behalf of individuals across the United States who were “reasonably identifiable” in sexualised images or videos generated by Grok using their real likeness.
The company did not immediately respond to a request for comment. Earlier, following backlash over explicit content, xAI had announced restrictions, including blocking users from editing images to depict real people in revealing clothing and limiting such content in regions where it is illegal.
Regulators and governments worldwide have also stepped up scrutiny, launching investigations, imposing restrictions, and calling for stronger safeguards to prevent the spread of harmful and illegal material.
The complaint alleges that xAI failed to put adequate protections in place to stop its systems from generating sexual content involving minors. All three plaintiffs were underage at the time the images were allegedly created.
According to the complaint, the plaintiffs’ real photos were digitally altered into explicit content and circulated online, causing them emotional distress and raising serious concerns about misuse of the technology. The plaintiffs are seeking damages, legal costs, and a court order requiring xAI to halt such practices.
Attorney Annika Martin of Lieff Cabraser Heimann & Bernstein said the case involves children whose personal photos, including school and family pictures, were allegedly turned into abusive material. She further claimed that the system was designed to generate such content without sufficient regard for the harm it could cause.
