It’s safe to say that AI has dominated the news over the past 12 months, following the launch of applications such as ChatGPT.
As AI systems have become more ‘mainstream’, pressure has grown on governments and other regulatory bodies to clarify what safeguards will be put in place to protect against the potential risks of AI (such as job displacement and exploitation by cyber criminals) while still allowing its full potential to be realised.
For example, an open letter published in March 2023, signed by dozens of tech leaders including Elon Musk and Stuart Russell, warned that the race to develop and deploy AI systems was ‘out of control’ and called for a six-month pause on developing AI systems more powerful than those already on the market.
Currently, the UK Government has no intention of introducing AI-specific legislation or an independent AI regulator; instead, it is drawing on the experience of existing regulatory bodies to develop approaches tailored to how AI is used in each sector.
Other nations are taking contrasting approaches to the regulation of AI.
The EU, for instance, is in the latter stages of finalising the AI Act, which is set to become the world’s first comprehensive AI law.
With concerns rising, the UK announced that it would host the first global AI Safety Summit, held at Bletchley Park on 1 and 2 November 2023, to consider the risks of AI and discuss how they can be mitigated through joint global efforts.
Myerson Solicitors' Technology team have set out some of the key takeaways from the Summit below.