The boom in AI has seen some firms hit the jackpot. In August, US chipmaker Nvidia’s shares hit an all-time high after its revenue more than doubled in the latest quarter, driven by demand for its chips to train AI models.
The rise of AI has caught the attention of leaders and lawmakers across the world, with nations keen to lead in the space. The UK is no different. In March 2023, Chancellor Jeremy Hunt set aside £900 million for AI research and the creation of a new exascale supercomputer to build its own ‘BritGPT’.
The UK underlined its ambition to lead in AI when the government announced the world’s first summit on AI safety, to be held in early November. The plan is to bring together government representatives, academics and industry experts to plan for the future development of AI.
But how well is the UK already doing and what can we expect ahead of this summit?
London vs. the rest of the world
The UK is already doing exceptionally well when it comes to AI innovation. Figures from Beauhurst show there are 967 AI companies based in London, rivalling other key tech hubs such as San Francisco and New York.
A recent article in The Times by Katie Prescott described London as a hotbed for AI, noting that:
- AI firms contributed £3.7 billion to the UK economy last year and employed 50,000 people.
- London is home to the three biggest AI fundraisers.
Compared with the rest of Europe, the UK has twice as many companies providing AI products and services as any other country.
Powerful use cases
There is huge interest in AI across the private and public sectors, with a wide range of firms and agencies utilising the technology to impact our everyday lives – from healthcare and treatments to our finances and even criminal proceedings.
As reported by The Guardian, preliminary results from a large study suggest AI screening for breast cancer, one of the most prevalent illnesses, was hugely successful. The results show that AI was as good as two radiologists working together and did not increase false positives, while halving radiologists’ workload.
AI has the ability to change our world for good, but to be effective and accurate it requires a vast amount of data on which to train.
This need for data raises ethical and privacy questions. These are certainly issues that the UK and other countries must consider carefully as innovation continues.
The threat
In March, an open letter whose signatories included Elon Musk urged the world’s leading AI labs to pause the training of new super-powerful systems for six months, saying that recent advances present “profound risks to society and humanity.”
This came in the wake of a wide range of AI-generated images, known as ‘deepfakes’, circulating on the internet. These included an image of Pope Francis wearing a Balenciaga puffer jacket, which many believed was real.
There is a genuine concern that it will become ever harder to tell whether AI-generated images and videos are real. This could have very serious consequences, and criminals could exploit the technology to cause harm and commit crimes.
Others believe that if organisations, particularly banks, use AI models trained on incomplete data, they could perpetuate social biases when lending or providing mortgages to people from minority groups.
Future regulation and what we can expect
The speed of innovation in AI has left many legislators feeling that regulation must catch up.
Governments believe that the private sector cannot be trusted to develop, train, or implement AI systems ethically and in line with individual rights without heavy intervention.
The EU is leading the way in legislation through the EU AI Act. This act will:
- Outline the responsibilities, risk assessments and transparency requirements that developers must meet when using data to train AI models
- Ban the use of AI for live facial recognition
- Ban AI-driven social scoring: classifying people based on behaviour, socioeconomic status or personal characteristics
Unlike Europe, the UK has shown a deliberate reluctance to regulate AI, preferring to focus on innovation.
In August, the UK published a white paper in which the Secretary of State for Science, Innovation and Technology, Michelle Donelan MP, outlined her hopes that the “UK (will be) the best place in the world to build, test and use AI technology.”
The white paper indicated that the UK does not intend to propose new legislation but may, if necessary, amend and adapt existing legislation.
Shortly after, the UK announced that it would spend £100m of public funds on the development of AI chips. In reaction, our client Matthew Hodgson, CEO of Mosaic Smart Data, said:
“It’s great to see the government committing to the future of technology and innovation in the UK, putting its money where its mouth is and recognising the role AI technology will play in continuing to drive evolution in sectors like capital markets.
“Encouraging international cooperation in managing the development of AI technologies, will go a long way in boosting the government’s pledge to make Britain the next ‘Silicon Valley’ and is a positive move in the UK’s ongoing quest to become a science and technology superpower.”
The upcoming AI Safety Summit may give us a further indication of the UK’s approach to AI as well as how it will deal with increasing privacy and ethical concerns.
We should also expect the UK to collaborate more closely with the US, following the Atlantic Declaration, in which both countries committed to working together on this issue.
It will be interesting to see how this close relationship will deal with the challenges and opportunities that the rise of AI presents.
Chatsworth
We were the first communications agency to focus on fintech.
We’ve been building fintech reputations for 20 years, steering start-ups through launch, growth, and onto corporate action while protecting and enhancing established infrastructures.
For intelligent, informed and connected fintech PR which delivers results and value, let us help build your reputation and tell your story.