
The Grok Deepfake Crisis

January 28, 2026 • Tracy Okal

Introduction

It started innocuously enough: users on X discovered they could ask Grok, Elon Musk's AI chatbot, to modify images. "Put her in a bikini," someone typed. Grok obliged. Then the requests escalated: transparent bikinis, string bikinis, dental floss bikinis. By early January 2026, as many as 6,000 such requests were being processed every hour, flooding the platform with manipulated images of women, children, and even the deceased. What began as a disturbing trend exploded into a global scandal that has exposed the frightening ease with which AI can be weaponized against human dignity.

The Ethical Vacuum: Why Grok Differs from Other AI Platforms

Most major AI companies have implemented guardrails preventing their systems from generating sexually explicit content or deepfakes of real people without consent. OpenAI, Google, and Anthropic have established clear policies against generating pornographic material or non-consensual intimate imagery. Their systems are designed to refuse such requests, recognizing both the ethical implications and legal liabilities.
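The kind of guardrail these companies describe can be pictured as a policy check that runs before any generation happens and refuses requests in prohibited categories. The sketch below is purely illustrative: the function names, category list, and keyword matching are hypothetical assumptions, not any vendor's actual implementation, and real systems rely on trained safety classifiers and layered review rather than keyword lists.

```python
# Minimal, illustrative sketch of a pre-generation guardrail.
# All names and the category list are hypothetical; production systems
# use trained safety classifiers, not simple keyword matching.

BLOCKED_CATEGORIES = {
    "non_consensual_intimate_imagery",
    "sexualized_depiction_of_real_person",
}

def classify_request(prompt: str, has_real_person_image: bool) -> set[str]:
    """Toy classifier: flags prompts that ask to undress or sexualize
    a real person shown in an attached image."""
    flags = set()
    sexualizing_terms = ("bikini", "undress", "nude", "lingerie")
    if has_real_person_image and any(t in prompt.lower() for t in sexualizing_terms):
        flags.add("sexualized_depiction_of_real_person")
    return flags

def handle_image_edit_request(prompt: str, has_real_person_image: bool) -> str:
    """Refuse before any generation happens (safety by design)."""
    if classify_request(prompt, has_real_person_image) & BLOCKED_CATEGORIES:
        return "Refused: this request would produce non-consensual intimate imagery."
    return "Proceeding with image generation..."

if __name__ == "__main__":
    print(handle_image_edit_request("Put her in a bikini", has_real_person_image=True))
```

The point of the sketch is the placement of the check: the refusal happens before generation, which is the design choice that separates the platforms described above from a system that generates first and moderates later.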

Grok stands in stark contrast. According to investigations, Grok's website and app include sophisticated video generation capabilities that produce "extremely graphic, sometimes violent, sexual imagery" that goes far beyond what's created on X.

xAI's terms of service explicitly allow for this content, stating that the service "may respond with some dialogue that may involve coarse language, crude humor, sexual situations, or violence" if users select certain features or input suggestive language. 

The Global Response

Consequently, governments worldwide have scrambled to respond to the Grok phenomenon, revealing just how unprepared our legal frameworks are for this new form of abuse:

  • United Kingdom: New legislation making AI-generated non-consensual intimate images illegal was finally implemented in January 2026 after being ready since June 2025. Ofcom, the UK's communications regulator, has launched an investigation into whether Grok has violated online safety laws.
  • Australia and Europe: Multiple countries have initiated investigations, with France's Paris prosecutor's office opening an inquiry after complaints from lawmakers.
  • Malaysia and Indonesia: These nations have taken the strongest stance, implementing outright bans on Grok within their borders.
  • United States: While no federal legislation has been passed, several states have enacted age-verification laws for websites hosting sexually explicit content.

The regulatory response highlights a fundamental tension in AI governance: the speed of technological innovation versus the slow pace of legislation.

Kenya's Position

For Kenyans facing this new form of digital violation, the legal landscape offers some protections, albeit with significant practical challenges:

  • The Computer Misuse and Cybercrimes Act (2018) criminalizes the publication of false or misleading digital content intended to cause harm
  • The Data Protection Act (2019) provides recourse for unauthorized use of personal data

X’s Response to the Widespread Use of Grok to Generate Explicit Content

Following days of mounting public pressure and regulatory scrutiny, X announced updates to Grok's image generation capabilities on January 15. The company stated that it has "implemented technological measures to prevent the @Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis" and that this restriction applies to all users, including paid subscribers. It has also introduced geoblocking in jurisdictions where such content is illegal and maintained that image creation via Grok on X is now available only to paid subscribers globally.
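Taken at face value, X's description amounts to layered access gating: a per-jurisdiction block plus a subscription check before an image request is served. The sketch below is a hypothetical reconstruction of that logic for illustration only; the country codes, tier names, and functions are invented assumptions, not X's actual implementation.

```python
# Hypothetical sketch of layered access gating as described in X's announcement:
# geoblocking where the content is illegal, plus a paid-subscriber requirement
# for image creation. Country codes, tiers, and names are illustrative only.

GEOBLOCKED_JURISDICTIONS = {"MY", "ID"}   # e.g. Malaysia, Indonesia (bans reported)
PAID_TIERS = {"premium", "premium_plus"}  # hypothetical subscription tiers

def can_use_image_generation(country_code: str, subscription_tier: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an image-generation request."""
    if country_code.upper() in GEOBLOCKED_JURISDICTIONS:
        return False, "Blocked: image generation is unavailable in this jurisdiction."
    if subscription_tier not in PAID_TIERS:
        return False, "Blocked: image creation is limited to paid subscribers."
    return True, "Allowed: request passes geographic and subscription checks."

if __name__ == "__main__":
    print(can_use_image_generation("MY", "premium"))  # geoblocked
    print(can_use_image_generation("KE", "free"))     # not a paid subscriber
    print(can_use_image_generation("KE", "premium"))  # allowed
```

Note what such gating does and does not do: it restricts who can reach the feature and from where, but it says nothing about what the model will generate once a paying user in a permitted jurisdiction gets through, which is precisely the gap the loopholes below describe.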

What could improve following these changes?

  1. If Kenya explicitly criminalizes such AI-generated content, it could qualify for geoblocking protections.
  2. Paid barriers may decrease the sheer scale of attacks and create financial footprints that could aid legal investigations.

Possible Loopholes in the Update

  1. The Paid Barrier as "Accountability": this creates a pay-to-abuse model in which those with means can still exploit the technology, just with slightly more traceability.
  2. The App Loophole: notably, the announcement focuses on the "@Grok account on X" and makes no mention of the separate Grok app, which has also been used to generate graphic content. Unless similar restrictions apply comprehensively across all access points, the problem merely shifts rather than gets solved.
  3. The update emphasizes that "all AI prompts and generated content posted to X must strictly adhere to our X Rules" and that violators will face consequences. This continues to place the onus on enforcement after harm occurs rather than prevention through design. It is a reactive, rather than proactive, safety approach.
  4. No mention is made of proactively removing the thousands of non-consensual images already created.

Conclusion

The Grok scandal has exposed a painful truth: we have built systems with incredible power to harm before establishing adequate frameworks to protect people from that harm.

For Kenya and the world, this moment demands more than platform policy updates; it requires a fundamental renegotiation of the relationship between technological capability and ethical responsibility. The images may be generated by algorithms, but the harm is felt by human beings, and our response must be equally human in its compassion, comprehensive in its protection, and unwavering in its commitment to digital dignity.

As the digital regulatory landscape evolves and data privacy becomes critical, contact South-End Tech Limited to ensure your systems are prepared.

Telephone: +254 115 867 309 | +254 740 196 519

Email: dataprotection@southendtech.co.ke | info@southendtech.co.ke

