In April, the UK Government issued a White Paper setting out its proposed approach to regulating the development and use of Artificial Intelligence. We also saw the open letter from the Future of Life Institute, signed by leading figures from the technology industry, calling for a pause in the training of more powerful AI systems so that protocols can be developed to ensure greater safety and oversight of AI. And the week ended with the Italian Data Protection Regulator issuing a ban on the processing of personal data by OpenAI LLC, the developer and operator of ChatGPT.
This article first explains the UK Government’s direction of travel in its White Paper and then assesses this against other recent developments.
The UK Government’s aim, set out in the White Paper, is for the UK to be the best place to build, test and use AI technology. It seeks to create an environment where innovation thrives whilst at the same time addressing the risks that AI poses, in order to build the necessary trust in its further development and deployment in society. The White Paper suggests that no new legislation is needed at this stage to achieve this, in addition to what is already in place. Instead, it proposes a sector-based approach under which existing regulators in those sectors will guide and police the development and deployment of AI based on a non-statutory framework of five common principles. These five principles are:
AI systems should function robustly, securely and safely throughout their lifecycle with risks monitored and managed throughout.
There must be sufficient transparency around when AI is being used and for what purposes and an explanation about how it reaches decisions.
AI systems should not undermine the legal rights of any individual or organisation or discriminate unfairly (for example in areas such as provision of credit, in recruitment and in insurance).
Governance measures should be put in place to ensure effective oversight over the AI and organisations and individuals should be accountable for the AI and be able to demonstrate this.
Those impacted by AI should be able to contest decisions that are harmful or create material risk of harm and be able to seek redress through existing mechanisms (no new redress mechanisms are proposed for now).
The White Paper specifically recognises the risk of bias in AI systems.
It is envisaged that regulators (such as the Information Commissioner, the Equality and Human Rights Commission, the Financial Conduct Authority and the Competition and Markets Authority) will need to issue revised or new guidance reflecting these five principles. The ICO issued revised guidance on AI and Data Protection as recently as March 2023, but this will probably need to be revisited. The Government recognises that regulators will need to coordinate with one another to ensure consistent guidance and may issue joint guidance in some cases.
The White Paper recognises that the Government may later need to underpin the five principles with a statutory obligation on regulators to apply them but would rather first see how effective the non-statutory framework is before reaching a decision on this.
The UK Government proposes to play a monitoring and coordination role over the implementation of its proposals, but makes it very clear that there will be no new regulator of AI in the UK.
The White Paper is seeking responses by 21 June 2023. The Government then proposes to publish its response by September 2023. The ambition is then to implement the main part of the proposals by Spring 2024 with ongoing work thereafter on other elements. The expectation is that regulators will begin to develop and publish guidance from Autumn 2023 onwards.
There are questions of detail that remain to be answered. For example, the White Paper states that, as the framework of five principles does not propose new legal requirements, it will not change the territorial applicability of existing legislation relevant to AI. It is not entirely clear what this means. The UK GDPR, for instance, already applies to AI, and how it is applied may be affected by the adoption of the five principles. The UK GDPR also has extra-territorial effect on those based outside the UK (see, for example, Article 3.2 of the UK GDPR). So, is the UK Government suggesting that, insofar as the five principles affect those existing laws, they will not be applied to those based outside the UK?
Then, there are questions about how realistic the White Paper is regarding the UK’s proposed independent approach, particularly when it appears to be at odds with that being adopted by the European Union in the draft EU AI Act. The EU approach is to legislate directly and prescriptively to put in place control over AI (in particular those AI systems that are deemed to be of higher risk) on a consistent cross-sector basis. Having said this, the UK Government’s White Paper calls out many of the risks that the EU classifies as higher risk: impacting human health, affecting critical infrastructure, and posing serious risks to privacy. It is simply that the way these will be regulated differs. This could mean in practice that those that follow either (or both) regimes arrive at the same result. However, the devil will be in the detail of how UK regulators regulate against the five principles, and in the meantime that will create continuing uncertainty over how AI can be developed and deployed.
Questions also arise from the decision of the Italian Data Protection Regulator on 30 March 2023 to issue a temporary ban on the processing of personal data of individuals based in Italy by OpenAI LLC, the developer and operator of ChatGPT. The bases for this decision are allegations that those individuals have not been informed about what their data is being used for, that there is no lawful basis to process it, that some of the data is inaccurate, and that there is no age verification in place for those under 13 years of age. It is unclear what impact this will have on the ChatGPT service given that the ban is limited to personal data of individuals based in Italy. However, in a Large Language Model such as ChatGPT it might be difficult to disentangle the data covered by this prohibition from other, non-Italian data. For those developing and deploying AI, this decision illustrates the exposure to regulatory oversight in other jurisdictions. Compliance with the requirements sought to be established in the White Paper may not be enough, and the ambition to create an environment where the UK is the best place to build, test and use AI may well be a pipe dream.
The open letter from the Future of Life Institute calling for a pause on the training of more powerful AI systems is indicative of increasing concerns about some iterations of AI. Concerns are being expressed about safety and wider societal impacts. The approach the UK Government is proposing in the White Paper means there will be no legislative debate on these proposals. Whether that approach can survive what is likely to be an increasingly tense debate over AI in the coming few months remains to be seen. Interestingly, the White Paper does recognise the special character of Large Language Models (like ChatGPT) and the need to pay close attention to them. Perhaps pressure will emerge on the UK Government to take a more prescriptive stance with LLMs?
What is clear is that anyone involved in the development and use of AI will need to monitor these developments carefully over the next few months to see whether the approach set out in the White Paper is maintained after the consultation closes and, if so, where the detailed regulatory guidance takes us.
If you require further information about anything covered in this briefing, please contact Ian De Freitas or your usual contact at the firm on +44 (0)20 3375 7000.
This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.
By Ian De Freitas
Farrer & Co LLP
London, England