Ofcom Investigates AI Chatbot Services Under Online Safety Act as UK Tightens Digital Regulation
UK communications regulator Ofcom has launched investigations into AI character companion chatbot services and the use of the Grok AI chatbot on X, as Britain's sector-led approach to artificial intelligence regulation begins to show its teeth under the Online Safety Act.
Background
The United Kingdom has taken a principles-based, sector-led approach to AI regulation rather than adopting a single comprehensive AI Act like the European Union. Under this framework, existing regulators, including Ofcom, the Competition and Markets Authority (CMA), the Information Commissioner's Office (ICO), and the Financial Conduct Authority (FCA), apply five guiding principles to AI within their specific domains.
Key Developments
Ofcom announced in January 2026 that it was investigating an AI character companion chatbot service and the use of the Grok AI chatbot on X (formerly Twitter), following concerns about potential harms to users. The investigations are being conducted under the Online Safety Act 2023, which Ofcom has confirmed applies to generative AI and chatbot services. In November 2025, Ofcom issued a fine to an AI-powered "nudification" site for failing to implement age assurance measures, the first such enforcement action under the Act.
The ICO has also been active, announcing its "Preventing Harm, Promoting Trust" strategy and developing a statutory code of practice on AI and automated decision-making. The CMA has initiated five merger control investigations into AI partnerships since December 2023, including scrutiny of the Microsoft/OpenAI, Amazon/Anthropic, and Alphabet/Anthropic relationships.
A Private Member's Bill, the Artificial Intelligence (Regulation) Bill, was introduced in March 2025 and would, if passed, establish a central AI Authority and require businesses to appoint an "AI Responsible Officer." However, the Bill currently lacks government backing.
Why It Matters
The UK's approach contrasts sharply with the EU's comprehensive AI Act, which imposes strict obligations on providers of high-risk AI systems, backed by substantial fines for non-compliance. Businesses operating across both markets will likely need to meet the stricter EU standards, but UK regulators argue their flexible, innovation-friendly approach better supports the growth of the domestic AI sector.
What's Next
Ofcom's investigations into AI chatbot services are expected to set important precedents for how the Online Safety Act applies to AI-generated content. The outcomes will be closely watched by the tech industry and civil society groups alike.