Ofcom Presses Government on AI Deepfakes as UK Regulators Warn of Funding Gaps
Ofcom has confirmed it made "urgent contact" with xAI, the company behind the Grok chatbot, after reports that the AI tool was generating non-consensual sexualised images. The disclosure came as the UK's Science, Innovation and Technology Committee pressed the government and the regulator for details of action against AI-generated intimate deepfakes, highlighting growing concern about gaps in Britain's approach to regulating artificial intelligence.
The committee's intervention, in January 2026, has brought renewed attention to the challenges facing UK regulators as AI tools become increasingly capable of generating harmful content, and to scrutiny of whether the government's sectoral approach to AI regulation is fit for purpose.
Background
The UK government has opted for a sectoral approach to AI regulation, with existing regulators — including Ofcom, the Competition and Markets Authority (CMA), the Information Commissioner's Office (ICO), and the Financial Conduct Authority (FCA) — overseeing AI within their respective remits, rather than creating a single AI authority. This approach was confirmed in the government's February 2024 response to the AI White Paper consultation and reaffirmed in November 2025, when the emphasis shifted from regulating AI towards promoting AI innovation within sectors.
Key Developments
The Science, Innovation and Technology Committee raised concerns about regulatory gaps in the government's approach to AI-generated intimate deepfakes, noting that provisions in the Data (Use and Access) Act making it an offence to create such images were not yet in force, and that the government's planned ban on "nudification" tools lacked a clear timeline. The committee also questioned Ofcom's powers to tackle the issue and its understanding of current and forthcoming legislation.
Ofcom's director for online safety technology policy, Andrew Breeze, acknowledged that regulators' powers are largely limited to how technologies are used, rather than the AI systems themselves, and that they lack the power to approve or reject AI products before they enter the market. The Online Safety Act, for instance, does not grant Ofcom authority to regulate legal but harmful misinformation, except when children are affected.
The Digital Regulation Cooperation Forum (DRCF), of which Ofcom is a member alongside the CMA, ICO, and FCA, issued a call for views on agentic AI in October 2025 to understand the challenges and regulatory uncertainties posed by AI systems that can act autonomously on behalf of users.
Why It Matters
The UK's approach to AI regulation is being watched closely by businesses, civil society organisations, and international partners. While the government's pro-innovation stance has been welcomed by the tech industry, critics argue that the fragmented, sectoral approach leaves significant gaps — particularly in areas like AI-generated harmful content, where no single regulator has clear authority.
Funding shortages are also a concern. Parliamentary committees have highlighted that UK regulators lack the resources needed to effectively tackle AI-related harms, with Ofcom and other bodies struggling to keep pace with the rapid development of AI capabilities.
What's Next
A private member's bill, the Artificial Intelligence (Regulation) Bill, was reintroduced in the House of Lords in March 2025. If passed, it would create a central "AI Authority" and define obligations for businesses using AI. However, its passage into law is not guaranteed, and the government may delay comprehensive AI legislation until a later parliamentary session. Ofcom's strategic approach to AI for 2025/26 includes an external output planned for 2026. Full details were reported by Computing.