As anyone who has been following the news recently (or planning a wedding) will be aware, the launch of ChatGPT by OpenAI marked a milestone in the development of generative artificial intelligence (AI).

Generative AI, or GenAI, has the remarkable ability to create seemingly original content across various domains, including text, images, music, video, and computer code, all from relatively simple, human-readable prompts. This creative capacity has showcased GenAI's potential to revolutionise established markets and pave the way for new ones.

Naturally, the immense power of these technologies raises significant ethical – and therefore soon to be legal – risks. In addition to potentially giving rise to civil disputes (as discussed in a previous blog post), this new technology is now attracting considerable attention from regulators and lawmakers.

Historically, regulators and lawmakers have struggled to keep up with technological advancements. As a result, technology companies operate without clear guidelines and frameworks (in contrast to, say, banking), and regulators end up taking a reactive rather than proactive approach to emerging issues.

We saw recently how this can play out in a space without clear compliance rules, with the ‘insider trading’ conviction of a former product manager at OpenSea, a large NFT marketplace (also discussed in a previous blog post). Additionally, in a move unprecedented for a leader in the tech industry, OpenAI’s CEO Sam Altman has been speaking to regulators worldwide, calling for “regulatory intervention by governments” to “mitigate the risks of increasingly powerful AI models”.

In this first article, we explore why lawmakers and regulators are turning their attention to GenAI. In our next article, we will look at how this scrutiny is manifesting itself, and what the future might hold.

Public safety

GenAI models like ChatGPT usually have some form of content moderation restricting the information they will share. This typically involves refusing to assist with illegal activities, to provide medical or financial advice, or to reveal personal information.
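
To illustrate the kind of guardrail involved, below is a minimal sketch in Python of how a developer might screen prompts before they reach a model, using OpenAI’s publicly documented moderation endpoint. The refusal logic and system prompt here are our own illustrative assumptions, not a description of how ChatGPT’s safeguards are actually implemented.

```python
# A minimal pre-filter guardrail sketch (OpenAI Python SDK v1.x assumed).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_with_guardrail(user_prompt: str) -> str:
    # First layer: screen the prompt against OpenAI's moderation endpoint.
    moderation = client.moderations.create(input=user_prompt)
    if moderation.results[0].flagged:
        # Refuse flagged requests rather than passing them to the model.
        return "Sorry, I can't help with that request."

    # Second layer: a system prompt restricts what the model itself will say;
    # this is the layer that "jailbreak" prompts try to talk their way around.
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant. Do not "
             "assist with illegal activities, give medical or financial advice, "
             "or reveal personal information."},
            {"role": "user", "content": user_prompt},
        ],
    )
    return completion.choices[0].message.content
```

As the next paragraph shows, neither layer is watertight: the moderation check classifies only the literal text it is given, and the model’s own restrictions depend on training that a sufficiently creative prompt can circumvent.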

However, enterprising users keep finding ways to bypass these restrictions (“jailbreaking”), including a particularly creative prompt that fooled models into revealing recipes for dangerous substances. Developers typically identify and close down these workarounds, but the process is effectively a game of “whack-a-mole”: users hunt for new bypasses, and developers race to shut them down.

A thriving community of cybercriminals now shares its own “jailbreaks” for ChatGPT, and a tool called WormGPT was recently exposed in an investigation by SlashNext, offering access to what is effectively a version of ChatGPT with no ethical limitations imposed.

These “jailbreaks” and alternative GenAI models provide means by which harmful information might be disseminated. This could give rise to a risk to public safety, not to mention liability concerns for the companies themselves. Lawmakers and regulators globally will need to decide how – or indeed whether – to address these concerns.

Data Privacy and Security 

Data privacy rarely feels out of the news, with high-profile data breaches reported regularly (WH Smith and JD Sports being among the recent targets). The rise of GenAI provides another channel through which private data can be collected, and potentially leaked.

GenAI creates content by training on extensive amounts of data obtained from publicly available sources, which may include individuals' sensitive personal information. Notably, because this information is collected and processed automatically, the individuals concerned have no opportunity to consent to their personal data being used in this way. Additionally, once a GenAI model is trained, it is difficult to remove specific pieces of information from it. This lack of fine-grained control makes it hard to enforce the right to be forgotten when erasure of specific information is required. The sheer complexity of these models, and the difficulty of unpicking personal data from them, will pose significant challenges to regulators and industry alike.

Disinformation, Misinformation and Fraud

“A lie can travel halfway around the world while the truth is still putting on its shoes.” This quotation, ironically often misattributed to Mark Twain, is itself well rooted in fact: researchers at MIT found in a 2018 study that false information spreads farther, faster, deeper, and more broadly than the truth. GenAI provides another tool that can be used to create, or spread, misinformation.

These tools enable the efficient generation of misleading, inaccurate, or fabricated content. The best-known use is in producing “deepfakes”: images of events that never happened. These can be as innocuous as the image of the Pope in a puffer jacket that went viral in March, or as sinister as the use of consumer finance expert Martin Lewis’ face and voice to recommend dubious investment products to vulnerable people. It is also possible to fabricate entirely new people, as was suspected in the case of the viral left-wing Twitter user “Erica Marsh”, whose account was recently banned.

Additionally, advanced chatbots can help produce false narratives and stories on a large scale, at a rapid pace, and with minimal cost (we are already seeing lawsuits against developers relating to alleged misinformation). The increasing prevalence of GenAI exacerbates the problem of disinformation and misinformation, making it even more challenging to address.

Identifying false information can be challenging, as it often takes far longer to verify or refute a claim than it does to create it. Furthermore, the creators of such content (human or bot) are often difficult to trace. This lack of accountability perpetuates false narratives, as those responsible cannot be held liable for their actions. With a series of elections taking place over the next couple of years, the stakes in tackling this kind of misinformation are only getting higher.

Competition

Currently, only a limited number of players are competitive in the realm of GenAI. The challenges of accessing vast amounts of high-quality data, powerful computational resources, highly specialised expertise, and substantial capital represent considerable barriers to entry that may contribute to market concentration, or even monopolisation. Major tech companies such as Google, Meta, Amazon, and Microsoft are among the few at the forefront of large language model development, solidifying their advantageous positions in the AI era. Meanwhile, other companies, such as Elon Musk’s recently launched xAI, find themselves largely playing catch-up.

When new technologies emerge, competition regulators will need to be mindful of how they might affect existing markets. We have seen in the past how easily one company can become dominant in a particular space (as Google did in internet search), and how such dominance can require remedies. Those of us who have been online long enough will recall the “browser wars”, in which Microsoft used its dominance of the operating system market to displace Netscape, despite the intervention of the US Department of Justice.

Conclusion

In summary, GenAI could have a transformative effect on our lives. However, as with any new technology, it has the potential to be misused and to cause harm to us as users, as consumers, and as participants in society.

It is something of a trope that legislation lags behind technology, and GenAI does not appear to be an exception so far. Lawmakers and regulators are currently working on solutions to the challenges and risks it presents, such as those described above. In our next article, we will look in more detail at the moves that have been, and are being, made, such as the UK’s AI white paper and the EU’s AI Act.