On February 1, 2026, Indonesia’s Ministry of Communication and Digital Affairs announced that public access to Grok would be restored under strict conditions.
Grok is an AI chatbot and image-generation service developed by xAI and integrated into Elon Musk's X platform. The service had been blocked nationwide since January 2026.
The government clarified that the decision is neither final nor unconditional. Access will remain under close supervision, with regular checks and the option to block the platform again if safety measures fail.
Why Grok Was Blocked
In early January 2026, Indonesia blocked Grok after reports showed the tool was being used to create non-consensual and sexualized images of real people, including deepfakes.
Officials said this misuse violated human rights and public dignity and posed serious risks to citizens. They stated that immediate action was necessary to protect people online.
Investigations and media reports indicated the scale of abuse was large, with estimates ranging from hundreds of thousands to over a million sexualized images created before stronger controls were introduced. This raised concerns not only in Indonesia but also among regulators in other countries.
Commitments from X Corp and xAI
After the ban, X Corp submitted a written commitment to Indonesian regulators outlining changes to Grok’s systems and policies.
These commitments include stronger safeguards to block sexualized and non-consensual image generation, limits or changes to features that enabled misuse, stricter internal enforcement and response procedures, and cooperation with Indonesian authorities for monitoring and verification.
What Lifting the Ban Means
The government emphasized that restoring access does not mean full approval. Grok will operate under strict supervision, and continued access depends on whether the promised safeguards work effectively in real use.
Officials stated clearly that the ban can be reimposed at any time if new violations occur or if protections are found to be insufficient.
Regional and Global Context
Indonesia's decision comes amid wider scrutiny of generative AI platforms across Southeast Asia. Countries such as Malaysia and the Philippines also imposed temporary restrictions or issued warnings over Grok, lifting them after the platform made changes.
Globally, the controversy has increased regulatory focus on AI misuse, especially the creation of non-consensual sexual content and deepfakes. Authorities in several countries have signaled possible investigations or enforcement actions.
Reactions and Implications
Indonesia’s digital regulator said its actions are aimed at protecting citizens and maintaining safety in the digital space. The government stressed ongoing verification and readiness to act quickly if problems return.
For technology companies, the case shows that regulators are demanding faster technical and policy changes and are treating AI misuse as a public-safety issue rather than a private moderation problem.
Civil society and child-protection groups have used the case to call for stronger safety standards, clearer remedies for victims, and faster takedown and enforcement systems.
Key Dates
January 10, 2026: Indonesia announced a nationwide block on Grok after reports of abusive and non-consensual image generation.
January 23, 2026: Malaysia and the Philippines lifted their temporary restrictions on Grok following changes made by the platform.
February 1, 2026: Indonesia confirmed the conditional restoration of public access to Grok after receiving written commitments from X Corp.
What to Watch Next
The Ministry of Communication and Digital Affairs said it will continue testing and verifying Grok’s safeguards. Renewed restrictions remain possible if protections fail.
Observers are also watching for actions in other countries, including formal investigations or new safety guidelines.
Attention will also focus on whether X and xAI publish detailed explanations or transparency reports about the safeguards, and whether any independent audits are conducted.
Conclusion
Indonesia’s decision to lift the Grok ban is cautious and conditional. By restoring access while maintaining strict oversight, the government has kept the power to act quickly if misuse continues.
The case highlights how national regulators can compel rapid changes in the management of powerful AI tools, and it sets an important precedent for future responses to AI-related abuse.