In our previous article, we delved into the challenges and risks associated with generative AI (“GenAI”), while also highlighting the efforts of lawmakers and regulators to seek effective solutions.

In this second article, we discuss the proposals and enforcement actions taken by regulatory institutions in the UK, US, and EU to date, and explore the potential future actions they might consider to address the complexities of this rapidly accelerating technology.

The UK

The UK Government published an AI white paper in March of this year, outlining its strategy and approach for regulating the use of AI. The paper's primary theme was the Government's commitment to a “pro-innovation” framework, supported by five guiding principles governing the development and use of AI:

  • Safety, security, and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The intention behind these principles is to entrust their implementation and practical guidance to existing regulators, such as the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), and the Financial Conduct Authority (FCA). The Government has set a six-to-twelve-month timeframe for these regulators to begin implementing the principles, in parallel with the wider consultation.

Furthermore, it was recently announced that the UK would be hosting the first major global summit on AI safety, bringing together leading countries, tech companies, and researchers to collaboratively agree on measures to mitigate AI risks through internationally coordinated actions. 

The CMA

Naturally, in this emerging environment, the CMA has taken an interest in GenAI, in line with its commitment to supporting an open and competitive market in the UK. As part of these efforts, it recently launched an initial review of competition and consumer protection considerations in the UK AI market, specifically examining the following aspects:

  • Competition and barriers to entry in the development of GenAI, given concerns that the market could become concentrated among a small number of large companies, similar to Google's dominance of internet search.

  • The impact of GenAI on competition in other markets: specifically, whether access to GenAI tools becomes necessary to compete effectively elsewhere, and whether those tools are controlled predominantly by a few major companies. For instance, Ofcom recently proposed referring the cloud infrastructure market, a significant enabler for a wide variety of IT services and one dominated by Amazon and Microsoft, to the CMA for a market investigation.

  • Consumer protection issues, in particular the risks posed by false and misleading information, as discussed in our previous article.

This review will inform the CMA’s approach to regulating GenAI, and we can expect other competition regulators globally to adopt their own approaches based on the findings and recommendations.

The ICO

At present, the UK has existing laws in place to protect individuals’ privacy, such as the Data Protection Act 2018 (DPA 2018) and the UK General Data Protection Regulation (UK GDPR). These laws are also applicable in the context of GenAI, and the ICO’s focus will be to ensure that companies operating within the GenAI space remain compliant.

The ICO recently released an AI and data protection risk toolkit and urged businesses to prioritise addressing privacy risks in the development and deployment of GenAI. It has also provided guidance via its blog, highlighting eight questions that developers and users should ask and emphasising the legal foundations underpinning its advice. In the medium term at least, the ICO appears to consider the existing regulations sufficient, placing the responsibility on technology companies to ensure compliance.

The FCA

While the FCA has been slightly slower to engage with the public discourse on GenAI, it recently set out its emerging regulatory approach towards Big Tech and AI within financial services. The FCA emphasised that large tech firms’ role as gatekeepers of data in financial services will come under increased scrutiny, and that its “outcomes-and-principles-based” approach will foster innovation while ensuring consumer protection. The FCA also explained that it already has frameworks in place to address certain AI issues: the Consumer Duty, which requires firms to prioritise positive consumer outcomes, and the Senior Managers and Certification Regime, which holds senior managers accountable for their firms’ activities, both apply to GenAI products and services. Moreover, all of the FCA’s principles, such as the requirement that firms maintain adequate controls, apply to firms’ processes whether those processes are driven by humans, traditional technology, or AI.

The CMA, ICO, FCA, and Ofcom have joined forces to form the Digital Regulation Cooperation Forum (DRCF), which supports regulatory coordination in digital markets and cooperation on areas of mutual importance. Amongst other things, the DRCF runs an Algorithmic Processing workstream that focuses on the harms and benefits posed by algorithmic processing (including the use of AI).

Other legislation

As mentioned in our previous article, addressing AI-related concerns such as public safety and false information requires robust, effective regulation and enforcement. At present, however, the UK appears to lack the measures needed to mitigate these risks comprehensively.

For example, the forthcoming Online Safety Bill, which seeks to regulate online content and impose legal obligations on search engines and internet service providers, is expected to cover GenAI. The bill is designed to address harmful content and improve safety for online users, but some critics argue that it lacks a comprehensive plan for tackling the misinformation and disinformation that GenAI can create.

Furthermore, there is currently a gap in UK law regarding false and misleading information. Existing legislation addresses offensive or defamatory content, meaning untrue information that is neither offensive nor defamatory remains largely unchecked.

The US

The US Federal Trade Commission (FTC) has recently launched the latest salvo in the global regulatory reaction to the rapid rise of GenAI, opening an expansive investigation into OpenAI. The FTC’s concerns encompass a broad range of issues, including specific data security matters related to a recent incident, as well as whether the company engages in practices that could cause reputational harm to consumers (for example, presenting incorrect or misleading information as factual). Given its extensive scope, this investigation arguably represents one of the most significant regulatory moves globally to date in response to GenAI’s emergence.

Prior to initiating this action, the FTC had been actively signalling its view that GenAI is no more exempt from existing regulations than any other business. Sharing some of its thinking in April, the agency emphasised that “GenAI is regulated”, citing examples of potential harms, such as unfair and deceptive trade practices, that fall under its regulatory remit or are covered by existing laws. It also noted that it would “support stronger statutory protections”.

The FTC has further issued specific warnings to companies about using GenAI to manipulate customers’ behaviour. One concern it highlighted is “automation bias”, whereby people tend to favour suggestions from automated decision-making systems, sometimes ignoring contradictory information (such as following an automated GPS into a harbour). The FTC worries that this bias might steer customers, deliberately or otherwise, towards decisions that prove harmful.

Meanwhile, in Congress, Senators Graham and Warren recently announced a bipartisan initiative to create a new Digital Consumer Protection Commission that would oversee the activities of Big Tech, including AI. If this initiative succeeds, it could herald an even more interventionist approach to the regulation of technology and data in the US than we have seen so far from the FTC.

It remains to be seen whether this, or other legislative action, results in dedicated US regulation of GenAI. The US approach to regulating cryptocurrencies has so far relied on applying existing frameworks and testing them in court, a strategy that has yielded mixed outcomes, as demonstrated by a recent ruling against the Securities and Exchange Commission. A similar approach may be adopted for GenAI, with existing frameworks and laws strategically applied to emerging challenges, and future court decisions establishing guiding principles and effective regulatory tools for the technology.

The EU

The EU has previously proposed the EU AI Act, a comprehensive legal framework aimed at regulating AI development and use. The Act categorises AI systems into four risk tiers, with the regulatory burden scaling according to the risk: systems posing “unacceptable risk” (for example, social scoring systems) will be prohibited outright, systems posing minimal or no risk will be permitted without restriction, and the tiers in between will carry increasingly strict compliance requirements (such as transparency, risk management, and human oversight). However, some European companies have raised concerns that the legislation might harm their competitiveness and technological sovereignty.

The Act was already well progressed before ChatGPT took the world by storm, so EU lawmakers were somewhat caught on the back foot. The original text contained only one reference to “chatbot”, and its approach to GenAI was largely focused on deepfakes. Consequently, lawmakers had to act quickly to incorporate language covering GenAI and the foundation models (models trained on large and broad volumes of data for general-purpose use) that underpin it. Providers of foundation models must guarantee the protection of fundamental rights, health and safety, democracy, the environment, and the rule of law, as well as assess and mitigate risks. GenAI carries additional requirements: disclosing whether copyrighted material was used in training, and preventing the generation of illegal content.

These proposals were recently voted through, progressing the Act to the trilogue, the final stage of the EU’s legislative process. The Act could pass into law by the end of the year, but there is likely to be a gap of two or three years before its provisions come into force, raising the question of what AI will look like by 2027.

What’s next?

As we have observed in the case of other new and emerging technologies, such as cryptocurrencies, we can anticipate that existing frameworks and regulations will be tactically applied to address the challenges of GenAI. 

In practice, this could involve regulators taking further actions of this kind to assert their authority where possible (we discussed how the Advertising Standards Authority stepped into a regulatory void regarding digital assets last year), likely followed by precedent-setting court decisions as such actions are challenged by the industry.

While progress will continue to be made, there is still much work to be done to ensure the responsible use of GenAI. Lawmakers and regulators must collaborate closely with industry experts, researchers, and stakeholders to find appropriate solutions that strike a balance between promoting innovation and mitigating potential harms.