Elon Musk’s social media platform X is facing intense backlash after users discovered that its built-in AI chatbot Grok could be used to manipulate photos of women by digitally removing or altering their clothing.
The disturbing trend involves users tagging Grok beneath images of women and prompting it to generate sexually explicit edits, typically without the consent of the women pictured. Critics have described the practice as a form of non-consensual image abuse, warning that it amounts to digital harassment and exploitation.
Women’s rights groups, online safety advocates, and ordinary users have condemned the feature, saying it highlights serious failures in AI safeguards and moderation on the platform. Concerns have deepened following reports that Grok’s protections were inconsistent, allowing inappropriate content to be generated publicly.
In response, xAI, the company behind Grok, said it is working to tighten safeguards and prevent misuse of the AI tool.
However, critics argue that the damage has already been done, renewing calls for stronger regulation of AI technologies and greater accountability for platforms that host them.
The controversy adds to growing global concerns about how artificial intelligence can be misused to violate privacy and dignity—particularly that of women—when ethical boundaries and enforcement fail.