
The AI Safety Summit: Avoiding a Tech-Catastrophe


Richard Meehan - Senior Associate

5 minutes reading time

It’s safe to say that AI has been dominating the news lately, particularly over the past 12 months, with the launch of AI applications such as ChatGPT.

As AI systems have become more ‘mainstream’, pressure has mounted on governments and other regulatory bodies to clarify what safeguards will be implemented to protect against the potential risks of AI (such as job losses and exploitation by cyber criminals) while still allowing its full potential to be realised.

For example, an open letter published in March, signed by dozens of tech leaders including Elon Musk and Stuart Russell, suggested that the ‘race to develop and deploy AI systems was out of control’ and called for a ‘six-month pause on developing AI systems which are more powerful than those already on the market’.

Currently, the UK Government has no intention of introducing any specific legislation or an independent regulator; instead, it is drawing upon the experience of existing regulatory bodies to put forward approaches tailored to the way AI is being used in each sector.

We are already seeing contrasting approaches to the regulation of AI by other nations.

For instance, the EU is already in the latter stages of finalising the AI Act, which is set to become the world’s first comprehensive AI law.

With concerns rising, the UK announced that it would host the first global AI Safety Summit at the beginning of November to consider the risks of AI and discuss how they can be mitigated through joint global efforts.

Myerson Solicitors' Technology team have set out some of the key takeaways of the Summit below.



3 Key Takeaway Points of the AI Summit 

The three key emerging points from the AI summit are:

The Bletchley Declaration 

28 countries, including heavyweights such as the USA and China, along with the EU, have signed this aptly named declaration, a nod to the famous wartime Codebreakers of Bletchley Park.

The declaration establishes a shared understanding of the opportunities and risks posed by AI development and the need to collectively manage potential risks through a joint global effort to ensure AI is developed and deployed in a safe and responsible manner.

In theory, such a joined-up approach is welcome. On a practical level, however, there will need to be an equal investment in training experts in adjudication and enforcement.

It remains to be seen whether this will be a global body or whether countries will be left to implement their own systems; the latter would sit uneasily with the global nature of AI, which currently has no physical or jurisdictional boundaries.

This raises the question of whether we need gatekeepers for the gatekeepers: a two-tiered approach?

Safety Institutes

  1. The AI Safety Institute - The UK announced its new AI Safety Institute, whose mission is to ‘minimise surprise to the UK and humanity from rapid and unexpected advances in AI’. The Safety Institute will be tasked with testing the safety of new and emerging types of AI and assessing their potentially harmful capabilities.
  2. U.S. Safety Institute - US Vice President Kamala Harris announced that the US is establishing an AI Safety Institute, which will develop technical guidance and enable information sharing and research collaboration with peer institutions, including the UK’s AI Safety Institute.
  3. European AI Office - Similarly, the President of the EU Commission announced the launch of its proposed European AI Office, which will delve into the most advanced AI systems. It is envisioned that the EU AI Office will have a global reach, helping to foster standards and testing practices for AI systems, and will cooperate with similar entities around the world, including the newly formed UK and US AI institutes.

Isambard-AI 

The UK Government announced a major £225 million investment in a new supercomputer, Isambard-AI, which will be built at the University of Bristol. It is intended to achieve breakthroughs across healthcare, energy, climate modelling and other fields, and forms part of the UK’s aim to lead in AI while partnering with allies such as the US.



What’s Next for AI?

The Summit will be the first of many, with France agreeing to host the next in-person AI Safety Summit in a year’s time. However, it has been made clear that the Summit is just the start of the discussions, and it’s unlikely that the two-day event will establish any major new global policies on AI.

Further, the UK has said it is in no rush to regulate the sector but rather wants to start a general, global conversation around AI safety.

With the UK set to publish its anticipated responses to the AI regulation white paper later this year, we will continue to closely monitor the latest developments in relation to AI.

It’s understandable that the Government is not rushing into creating legislation, which could quickly become outdated or struggle to remain agile enough to adapt to this rapidly evolving technology.



Comments

In the Ministerial Foreword to the Government’s recently published policy paper on the AI Safety Institute, the Rt Hon Michelle Donelan MP stated that “we were surprised by rapid and unexpected progress in a technology of our own creation”.

Really? Do we need to revisit some of the Hollywood blockbusters that have been warning us for decades?

Films such as:

  • 2001: A Space Odyssey (1968) – a cult classic depicting an early interpretation of what could happen when AI rebels.
  • The Terminator (1984 onwards) – SKYNET – do we need to say any more?
  • RoboCop (1987) – despite being over three decades old, it remains a thought-provoking look into the ethics of AI and the dangers of misusing technology.
  • The Matrix (1999) – we now have the ever-evolving Metaverse.
  • Minority Report (2002) – predicting the not-so-distant future: the AI central to the film is a crime prediction tool. Similar AI tools have been deployed within various US agencies since 2012.
  • I, Robot (2004) – rogue self-thinking robots capable of human crimes – how would we begin to police this – send in Robocop?
  • Chappie (2015) – highlights the perils of AI in criminal hands.

And the list goes on. Whilst sci-fi films allow for exaggeration and imagination, it’s not beyond the realms of possibility that some fiction could become fact!

Do we need to book the film nights, revisit some classics, and start taking notes from these movies?



Contact Our Technology Solicitors

If you would like to discuss any of the points mentioned in this blog in further detail, please get in touch with our Technology Solicitors, who will be happy to assist you.

0161 941 4000


Richard Meehan

Senior Associate

Richard is a Senior Associate in our Commercial Team and Head of the Life Sciences sector with over 13 years of experience acting as a Commercial solicitor. Richard has specialist expertise in the negotiation of commercial contracts relating to the supply and distribution of goods and services, the licensing of software, and intellectual property.
