Governments regulating AI must mirror how this technology operates – the key is interoperability.

The AI rubber is 100 miles down the road and speeding up. It will not wait for governments to debate how to regulate.

Pity our poor leaders. They just about get their heads and legislative timetables around crypto and blockchain technology, then AI turns up and gives them another regulatory headache.

The potential and benefits to support economic activity are massive and well documented. As are the risks and fears. And technology has a tendency to operate without borders.

In the UK alone, AI contributes £3.7 billion to the economy, with over 50,000 people across the country working on the technology. The UK sector is the third largest globally in terms of AI private capital, behind the United States and China. That’s a lot of bandwidth.

So how do national governments balance keeping citizens safe while promoting innovation in their territories and globally? The answer is to mirror how this technology operates and functions.

Data interoperability is the critical component of AI and machine learning. Regulation and legislation need to do likewise – with the closest possible harmonisation between nation states.

Yesterday the King’s Speech – which outlines the UK government’s upcoming legislative programme – delivered a bit of a nothingburger on AI legislation. An AI bill was expected to be announced. It was not forthcoming. That’s largely because (1) the government has a lot in its in-tray, and (2) regulating AI is, to put it mildly, tricky. Who, what, how far, how to police?

We are in uncharted, unprecedented administrative waters here. It’s analogous to the 3D chess governments and regulators have been playing around cryptocurrencies and DLT – how to harness the good and the utility while protecting citizens from risk.

So far the UK government has committed to “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. That’s a bit like replaying the brief, as we say in agency-land. “We need a law.” Fine, so what is it?

So, no AI bill just yet. Labour’s manifesto has a few more clues – prior to the election the party indicated an intention to introduce “binding regulation on the handful of companies developing the most powerful AI models”.

The biggest hint of the direction of play was plans to tackle the issue by imposing a “statutory code” for firms developing AI to share safety test data with the government and its AI Safety Institute.

That’s a wholly different approach from the previous administration which had proposed a non-binding agreement from developers on AI safety. Self-policing is a controversial approach. It hasn’t worked out too well for some areas of the technology and financial spaces.

We expect the UK’s AI bill to crib heavily from its neighbours in Europe. The EU’s AI Act, approved in March, is pretty clear and provides binding rules for AI developers.

The Act contains four levels of risk: minimal, limited, high and unacceptable. AI use in the unacceptable category is banned. That includes web scraping of facial images, clear and intentional misinformation, and social scoring and profiling. The Act also requires developers to maintain logs of safety testing and to share them with regulators.

Currently, there is no comprehensive federal legislation or regulation in the US to regulate the development of AI or specifically prohibit or restrict its use. That needs to change. It is no overstatement to say that this particular regulatory challenge is one of the most important and difficult facing governments.

Rishi Sunak’s AI Safety Summit at Bletchley Park brought together all the main global players, from big tech to individual developers and governments, to discuss seriously what to do about a technology that may change the world. Tellingly, as one of the organisers observed, in contrast to other big policy debates, such as climate change, “there is a lot of good will” but “we still don’t know what the right answer is.”

Whichever way this goes, governments need to move decisively and quickly to balance protection with innovation. This technology also has another characteristic that legislators would do well to note. It moves fast. Very fast.

For further reading and a neat comparison of global AI regulatory approaches, may we point you to this excellent summary in MIT Technology Review by Melissa Heikkilä.

Chatsworth

We were the first communications agency to focus on fintech.

We’ve been building fintech reputations for 20 years, steering start-ups through launch, growth, and onto corporate action while protecting and enhancing established infrastructures.

For intelligent, informed and connected fintech PR which delivers results and value, let us help build your reputation and tell your story.
