
UK AI Regulation: Government Report Highlights 12 Challenges with AI


The House of Commons Science, Innovation, and Technology Committee published its interim report following its inquiry into the governance of AI.

AI has developed at an exponential pace in 2023, with solutions such as ChatGPT fast becoming household names. Given this unprecedented pace of development, an inquiry was deemed necessary to consider the potential risks and challenges posed by such rapidly evolving technology and the issues arising from governing it.

The report, published on 31 August 2023, identifies 12 primary challenges which must be addressed through domestic policy and international engagement. Our Technology Law experts outline these challenges below.


The 12 Challenges

The 12 challenges highlighted in the report are:

  1. Bias. AI can introduce or perpetuate biases that society may find unacceptable.
  2. Privacy. AI can allow individuals to be identified and their personal data to be misused.
  3. Misrepresentation. AI can generate material that deliberately misrepresents someone’s character, behaviour or opinions.
  4. Access to data. AI requires large datasets which are held by few organisations, raising competition concerns.
  5. Access to compute. AI requires significant computing power, which is limited to a few organisations, again raising competition concerns.
  6. Black box. AI often cannot explain why it produces particular results, raising concerns that its decision-making is inexplicable and its processes are not transparent.
  7. Open-source. Views differ on whether the software code used to make AI should be publicly available (open-source). 
  9. Intellectual property and copyright. Policy must ensure that the use of proprietary material by AI solutions protects the rights holder.
  10. Liability. Government policy should consider whether AI developers and providers should be liable if the AI causes harm.
  10. Employment. Government policy should manage the disruption caused to the employment market by AI. 
  11. International coordination. AI is a global technology, and developing governance frameworks to regulate its uses must be an international undertaking.
  12. Existential. If AI presents a threat to human life, governance should afford protection for national security.

The UK is set to host the first global AI summit this autumn, where key countries, leading tech companies and researchers will meet to agree on the collective measures required to mitigate the risks arising from AI technology. The report recommends that the 12 challenges identified form the basis of the risk analysis conducted at the summit.


What Should Businesses Operating Within The Tech Sector Do Next?

The Government’s response to the interim report is due by 31 October 2023. In the meantime, the inquiry continues, and a final report is expected in due course.

AI and other emerging technologies continue to develop and change at pace, and the Government’s report on governance highlights the growing demand for proactive regulation in this area.

Organisations operating within the tech sector should ensure that they remain up to date with any proposed changes to the regulation of AI or other emerging technologies and the legal and practical consequences these may have for their organisations.


Contact Our IT Solicitors

If you need legal advice regarding technology-related matters, such as the legalities of using AI platforms, please contact our IT / Technology lawyers on 0161 532 9780.