UK Government Moves to Regulate AI Decision-Making Under New Data Protection Code
The UK government has taken a significant step towards regulating the use of artificial intelligence, introducing new legislation that will compel the Information Commissioner to establish a formal code of practice for AI and automated decision-making. The move signals a shift towards more direct oversight of AI technologies, with a focus on ensuring transparency and fairness in how algorithms make crucial decisions affecting individuals.
Background
The rapid development and deployment of artificial intelligence has created a host of legal and ethical challenges for governments worldwide. One of the most pressing issues is the use of automated systems to make decisions that have a significant impact on people's lives, such as loan applications, job recruitment, and even medical diagnoses. While the UK's post-Brexit data protection framework, consisting of the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, already contains provisions relating to automated decision-making, there has been a growing consensus that these rules are too general and lack the specific detail required to effectively govern complex AI systems.
The government's approach to AI regulation has, until now, been characterised as pro-innovation and sector-specific, avoiding broad, horizontal legislation in favour of allowing existing regulators like the Information Commissioner's Office (ICO) to apply existing principles to their respective domains. This has been contrasted with the European Union's more comprehensive, risk-based AI Act. However, as the use of AI has become more widespread and the potential for harm—such as algorithmic bias and lack of transparency—has become more apparent, the pressure has grown for the UK to provide clearer, more robust legal guardrails.
Key Developments
The new legislation, a Statutory Instrument titled SI 2026/425, was laid before Parliament and listed on the government legislation tracker GovPing on 21 April 2026. The instrument amends the Data Protection Act 2018, placing a new statutory duty on the Information Commissioner to develop and publish a formal code of practice on the use of personal data in AI systems. The code will be required to provide practical guidance on how to comply with data protection law when deploying AI, with a particular focus on the requirements for transparency, fairness, and explainability in automated decision-making.
While the code itself is yet to be written, the legislation signals the government's clear intent. The code will have statutory footing, meaning that while a breach of the code is not a direct offence, it can be used in evidence in court proceedings and will be taken into account by the ICO when considering enforcement action, including the levying of substantial fines. The move is seen as a way to provide much-needed legal certainty for businesses developing and using AI, while also strengthening the rights and protections of individuals. The Bank of England's own exploration of data-driven models in financial services illustrates the cross-sector importance of clear regulatory guidance on automated systems.
Why It Matters
This new legislation represents a maturing of the UK's approach to AI governance. It moves beyond high-level principles and towards the creation of concrete, enforceable rules of the road. By mandating a statutory code, the government is empowering the ICO to put flesh on the bones of the UK GDPR, translating its broad requirements into specific guidance for the age of AI. This is crucial for building public trust. For AI to be adopted successfully, people need to be confident that it is being used responsibly and that they have recourse if something goes wrong.
For businesses, the code will be a double-edged sword. On the one hand, it will provide welcome clarity, reducing legal ambiguity and helping them to innovate with confidence. On the other, it will undoubtedly impose new compliance burdens, requiring them to invest in more robust governance, documentation, and testing of their AI systems. The emphasis on explainability—the ability to explain how an AI model reached a particular decision—will be particularly challenging for users of complex models. Unlike the EU's AI Act, which takes a risk-based, prescriptive approach, the UK's code-of-practice model is more flexible, allowing the ICO to update guidance as technology evolves. This agility could prove to be a significant competitive advantage for the UK's tech sector.
Local Impact
For the burgeoning tech sector in Northern Ireland, particularly in Belfast and Derry, this development will be of critical importance. Local AI start-ups and established tech firms will need to closely follow the development of the ICO's code of practice and ensure their products and services are compliant. This will require investment in legal and technical expertise. However, it also presents an opportunity. By building systems that adhere to a high standard of transparency and fairness from the outset, Northern Ireland's tech companies can build a reputation for trustworthiness, which could become a key competitive advantage in the global market for AI solutions. The move will also impact public sector bodies in the region, who will need to ensure their use of AI for public services complies with the new code.
What's Next
The Information Commissioner's Office will now begin the process of drafting the new code of practice. This will involve a significant public consultation, where the ICO will seek input from businesses, academics, civil society groups, and the general public. This process is expected to take several months. Once a draft is produced, it will be laid before Parliament for approval. The final code is likely to be published and come into effect in early 2027. Businesses and other organisations using AI will then be expected to align their practices with the new guidance.