UK AI Bill Expected After King's Speech as Ofcom Tightens Online Safety Enforcement
The UK government is expected to introduce a long-awaited Artificial Intelligence Bill following the King's Speech in May, as the communications regulator Ofcom continues to use existing powers under the Online Safety Act to crack down on harmful AI applications — including deepfake sites and AI chatbots that fail to protect children.
The AI Bill, first announced in the King's Speech of July 2024, has been delayed as ministers sought to develop a more comprehensive legislative framework. It is now expected to focus primarily on regulating powerful "frontier AI models" and addressing the contentious issue of AI and copyright, rather than adopting the broad, risk-category approach of the EU's AI Act.
Background
The UK has taken a deliberately sector-specific and principles-based approach to AI regulation, empowering existing regulators — including Ofcom, the Information Commissioner's Office (ICO), and the Competition and Markets Authority (CMA) — to apply their existing powers to AI within their respective remits. This approach is intended to be more flexible and pro-innovation than the EU's comprehensive AI Act, while still providing meaningful oversight.
Ofcom has been among the most active regulators in this space, using the Online Safety Act to take enforcement action against AI companies that fail to meet their obligations to protect users, particularly children.
Key Developments
In November 2025, Ofcom issued its second fine under the Online Safety Act to Itai Tech Ltd, the operator of an AI-powered "nudification" site, for failing to implement mandatory age assurance measures. In January 2026, the regulator launched an investigation into Novi Ltd's AI character companion over similar concerns.
The CMA has proposed new rules requiring Google to give publishers greater control over how their content is used in AI search summaries and model training, including mandatory opt-out options. The ICO has also signalled it will scrutinise the use of agentic AI — systems that can act autonomously across workflows — to ensure compliance with data protection law.
The government's proposed AI Growth Lab, a "supervised playground" where companies can test AI applications that might otherwise be hindered by existing regulations, is expected to be a centrepiece of the forthcoming legislation. The initiative draws inspiration from the Financial Conduct Authority's fintech sandbox model.
Meanwhile, the Data (Use and Access) Act 2025, which commenced in February 2026, has already begun to reshape the AI landscape by liberalising automated decision-making rules and criminalising the creation of sexually explicit deepfake images of adults without consent.
Why It Matters
The UK's approach to AI regulation will have significant implications for the country's position as a global technology hub. London is a major centre for AI investment and development, and the regulatory environment will play a key role in determining whether the UK can attract and retain AI talent and capital in competition with the US and EU.
The copyright issue is particularly sensitive for the UK's creative industries, which have been lobbying hard for protections against AI models scraping copyrighted content for training without permission or compensation.
What's Next
The King's Speech in May is expected to confirm the introduction of the AI Bill. Separately, the Cyber Security and Resilience Bill is expected to complete its parliamentary passage in 2026, extending critical national infrastructure protections to cover AI compute facilities.
For more on UK AI regulation, see Taylor Wessing's analysis of UK tech policy in 2026.