On Tuesday, 6 February 2024, the UK government released its consultation response to its March 2023 white paper titled ‘A pro-innovation approach to AI regulation’. The response provides further details on the UK government’s approach to artificial intelligence regulation.
Key takeaways
- Sticking with a context-based approach. The UK government confirmed that it plans to stick to the context-based framework – underpinned by cross-sectoral principles – outlined in the white paper. The aim is to retain an agile regulatory approach that does not dampen innovation and can adapt to evolving AI risks.
- Cross-sectoral principles, interpreted by existing regulators. The five core principles established in the white paper have been retained – namely:
- Safety, security and robustness.
- Appropriate transparency and explainability.
- Fairness.
- Accountability and governance.
- Contestability and redress.
These principles will be interpreted by existing regulators and applied within those regulators’ existing remits – e.g., the UK Competition and Markets Authority (CMA) will be responsible for taking steps to ensure that AI markets work effectively from a competition and consumer protection perspective.
- Potential for binding requirements applicable to only the most capable AI systems. Despite the general approach noted above, the UK government is considering specific binding requirements for what it calls ‘highly capable general-purpose AI systems’.
- It appears that, for now, this category will include only the most cutting-edge foundation models that underpin consumer-facing applications – however, there is room for new systems to fall within it as technology evolves.
- Beyond certain generic descriptions of ‘high performance’, the consultation response is not definitive on what will/will not fall within this category of system. However, the UK government’s preliminary analysis indicates that initial thresholds could be based on forecasts of a system’s capabilities using a combination of two indirect indicators:
- Compute – i.e., the amount of computational processing power used to train the model.
- Capability benchmarking – i.e., assessing capabilities in certain risk areas to identify high capabilities considered to result in high risk. (An illustrative sketch of how these two indicators might combine follows this list of takeaways.)
- The UK government appears to be of the opinion that developers of these highly capable systems currently face the least clear legal responsibilities, and that such systems challenge its context-based approach. It considers that some of the risks to which such systems may contribute might not be effectively mitigated by existing regulation. In the words of the consultation response, they have ‘the least coverage by existing regulation while presenting some of the greatest potential risk’.
- The UK government considers this ‘gap’ exists primarily because existing regulation is better placed to address risks at the deployment or application layer (e.g., risks resulting from use of a platform based on a highly capable system, or an application developer making such a platform available to users). However, it may fail to adequately address upstream risks at the foundation model development layer, where the government appears to consider that much of the control over, and responsibility for, potential risks resides.
- With that in mind, the intent appears to be that any binding requirements would be designed to ‘fill the gap’ in existing regulation. The consultation response is clear, however, that such requirements would follow the five core principles set out in the white paper, with a view to maintaining a ‘pro-innovation’ approach to regulation.
- For now, it appears that any binding requirements in this area would be targeted at only the very small number of organisations actually capable of developing these highly capable general-purpose AI systems.
- It is expected that the UK government will publish an update on its work on new responsibilities for highly capable general-purpose AI systems by the end of 2024.
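To make the two proxy indicators concrete, here is a minimal Python sketch of how a compute threshold and capability benchmarking might be combined into a single classification test. It is purely illustrative: the consultation response defines no thresholds, risk areas or combination logic, so every name and value below (AISystem, COMPUTE_THRESHOLD_FLOPS, CAPABILITY_THRESHOLD, the listed risk areas, and the OR combination) is our assumption, not part of the UK government’s proposal.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    training_compute_flops: float        # compute proxy: total FLOPs used to train the model
    benchmark_scores: dict[str, float]   # capability proxy: risk-area benchmark -> score in [0, 1]

# Hypothetical placeholder values - the consultation response sets no figures.
COMPUTE_THRESHOLD_FLOPS = 1e25
CAPABILITY_THRESHOLD = 0.8
RISK_AREAS = ("cyber_offence", "bio_uplift", "autonomous_replication")

def is_highly_capable(system: AISystem) -> bool:
    """Illustrative test: flag a system if it crosses the compute proxy
    or scores highly on any risk-area benchmark (OR is our assumption)."""
    exceeds_compute = system.training_compute_flops >= COMPUTE_THRESHOLD_FLOPS
    high_capability = any(
        system.benchmark_scores.get(area, 0.0) >= CAPABILITY_THRESHOLD
        for area in RISK_AREAS
    )
    return exceeds_compute or high_capability

# Example usage with made-up numbers.
model = AISystem(
    name="example-frontier-model",
    training_compute_flops=3e25,
    benchmark_scores={"cyber_offence": 0.4, "bio_uplift": 0.2},
)
print(is_highly_capable(model))  # True - the compute proxy is exceeded
```

Note that the consultation response speaks of thresholds based on forecasts of a system’s capabilities, so any real regime would likely be more nuanced than a static test of this kind.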
Other points to note
Development of an AI risk register
The establishment of a new central function to monitor and assess risks presented by AI – and to support regulatory coordination – is already underway. As part of this, in 2024, the UK government intends to launch a targeted consultation on a cross-economy AI risk register, intended to provide a ‘single source of truth’ on AI risks for use by regulators, government departments and others. The register is intended to support the UK government’s work to surface risks that fall across or between regulators’ remits, so that it can identify gaps and prioritise further action as needed. This reflects a recognition that a sectoral- or context-based framework is right today but will almost certainly need bolstering over the longer term, including as new risks emerge and/or understanding of risks matures.
Balancing the value and risks of open sourcing
The consultation response emphasises that open sourcing of AI has, overall, been beneficial for innovation, transparency and accountability, but notes that there is a balance to strike between that openness and the mitigation of potential risks. It identifies an emerging consensus on the need to explore pre-deployment capability testing and risk assessment for the most powerful AI systems, including where systems might be open sourced. This testing could inform the deployment options available and change the risk prevention steps needed prior to a model’s release. In light of the UK government’s position on this issue, it will be particularly interesting to see the CMA’s views in the much-anticipated update to its AI foundation model review. The CMA’s initial report on the UK AI foundation model market emphasised the positive market outcomes that likely result from a sustained diversity of business models, supported by the availability of both closed-source and open-source foundation models.
No agreement on intellectual property
The consultation response highlights that the UK Intellectual Property Office’s working group, made up of rights holders and AI developers, could not agree on an effective voluntary code on the interaction between copyright and AI. The UK government now intends to explore mechanisms to deliver greater transparency from AI developers in relation to data inputs and attribution of outputs.
More sectoral guidance coming
Although a number of regulators have already followed the direction in the white paper to take action within their remits on the impact of AI – e.g., the CMA’s review of foundation models (see our October 2023 blog) – a number have yet to do so. The UK government has written to several regulators – including the Office of Communications (Ofcom), the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), the CMA and the Medicines and Healthcare products Regulatory Agency (MHRA) – asking them to publish updates outlining their strategic approach to AI by 30 April 2024, to include:
- An outline of the steps they are taking with reference to the expectations set out in the white paper.
- An analysis of AI-related risks in their sectors and the actions they are taking to address these.
- An explanation of their current capability to address AI.
- A forward look at their plans over the coming 12 months.
The UK government encourages all regulators that consider AI to be relevant to their work to publish their approaches – and notes that its prioritisation of regulators may change over time to reflect evolving factors, such as its risk analysis. So, there’s more to come in this space.
Overarching AI-specific regulation in the future?
Beyond potential plans for targeted binding requirements specific to highly capable general-purpose AI systems, the UK government has expressed a belief that the UK – and all countries globally – will ultimately seek to regulate AI more generally; however, it believes that this will come once understanding of potential AI risks has matured. The UK government expresses a desire both to influence and respond to international developments, cognisant that AI governance requires international cooperation. That said, it is apparent that the UK government’s intent is to maintain an ‘appropriate’ level of coherence with other regulatory regimes, so as not to hamper a pro-innovation approach.
If you have any questions about the consultation response, the topics discussed in this post, or the wider UK landscape relevant to AI systems and actors, please contact one of the authors listed below.
Of course, the consultation response and the UK’s context-based framework are just one piece of a wider regulatory puzzle. If you want to look beyond the UK and find out more about emergent AI regulation across the European Union and US, you can take a look at the following posts from the Cooley team.