Navigating the Evolving AI Regulatory Landscape  

The UK has emerged as a prominent player in AI, with a thriving market projected to be worth £803.7 billion by 2035.[1] However, the rapid pace of AI development has raised questions about regulatory frameworks and the Government’s capacity to manage this innovation effectively.

The path forward for policymakers is shrouded in complexity, largely because AI is a fundamentally new and rapidly evolving technology. Over the last decade, the number of UK AI companies has surged by a staggering 688%.[2] The speed at which AI systems are advancing far outpaces traditional regulatory frameworks and often leaves policymakers navigating uncharted waters. The intricacies of machine learning algorithms, neural networks, and AI applications can be daunting even for seasoned experts, let alone policymakers who may not have a deep technical understanding of these innovations.

Striking the right balance between fostering AI innovation and safeguarding against potential risks is a delicate task, further complicated by the dynamic and multifaceted nature of AI. This knowledge gap underscores the importance of informed, collaborative, and agile policymaking in this transformative era.

Political Landscape and AI

The Office for Artificial Intelligence was established in 2018 and works under the Secretary of State for Science, Innovation and Technology to oversee the implementation of the National AI Strategy, with a focus on AI research and development. The National AI Strategy, launched in September 2021, aims to guide the use of artificial intelligence in the UK, drive responsible innovation, and maintain public trust in this revolutionary technology. There is also a Minister for AI and Intellectual Property, a role within the Department for Science, Innovation and Technology; the current holder is Viscount Camrose, who was appointed in March 2023.

In the political arena, AI is both a tool and a subject of concern. The UK Government has recognised AI as one of the five critical technologies, and significant investments have been made in AI research and development. Initiatives like the AI Tech Missions Fund, AI Research Resource, and plans for a cutting-edge supercomputer underline the Government’s commitment to AI innovation. Recent efforts include consultations on AI regulation and commitments to establish a regulatory sandbox for AI.

However, the rapid adoption of AI raises political questions and concerns. The evolving regulatory landscape must address issues such as transparency, accountability, and ethical considerations. The need for expertise in AI governance within the Government is evident, given the novel challenges posed by this technology. Alicia Kearns, an influential backbench Conservative MP who chairs the Foreign Affairs Select Committee, urged AI experts at a Conservative Party conference fringe event with Tech UK to consider standing as MPs, to ‘upskill’ Parliament and prepare it for a future in which AI will become much more prominent.[3]

The UK Government quietly disbanded the advisory board of its Centre for Data Ethics and Innovation (CDEI) on 9 September 2023, raising concerns about its commitment to AI governance.[4] The CDEI was created in June 2018 to promote ethical AI use but shifted focus over time.[5] The AI Council also held its final meeting on 21 June 2023.[6]

What do the main political parties say about AI?

The Conservative Party has emphasised the importance of the UK being a key player in AI regulation and is hosting the first ‘global summit’ on AI in November this year.[7] In terms of policy, the Conservative Party has pledged to increase public funding for research and development (R&D) to £20 billion, with a focus on enhancing the UK’s strengths in AI, life sciences, quantum, fintech, and green technology.[8]

The Labour Party has expressed concerns about the risks posed by AI and has suggested that AI should be licensed in a similar way to medicines or nuclear power.[9] The party has called for stricter rules around companies training their AI products on vast datasets, such as those used by OpenAI to build ChatGPT. Labour’s former digital spokesperson, Lucy Powell MP, emphasised the need for regulation of large language models that can be applied across a range of AI tools.[10] She proposed that AI should be governed by arm’s-length governmental bodies.[11] The Labour Party says it recognises the potential of AI in different sectors and aims to use new capabilities in data analysis and artificial intelligence to increase productivity, deliver better public services, and improve the quality of life for all.[12]

Future Developments in AI

The future of AI in the UK is promising yet complex. AI language models, such as ChatGPT, are gaining widespread awareness but also facing scrutiny. While these models offer transformative potential, concerns about their ethical use and impact on human skills persist. Eight of the twenty-four Russell Group universities have formally banned the use of ChatGPT and similar models, reflecting the ongoing debate.[13]

The UK Government is launching a new advisory service, funded with over £2 million, to help businesses comply with regulatory standards for digital technology and AI innovations. This service, operated by the Digital Regulation Cooperation Forum, aims to streamline regulatory processes, promote responsible innovation, and facilitate economic growth. It will run a pilot scheme for about a year to assess its feasibility and industry engagement.

Additionally, the Government has established a central AI risk function to monitor AI risks and is hosting a global AI Safety Summit to address AI safety and responsible use.

As part of the evolving regulatory landscape, the Competition and Markets Authority (CMA) is actively engaging with AI-related issues. The CMA’s consultation seeks to address competition concerns related to AI and digital markets, exploring how AI technologies may impact competition, consumer choice, and market dynamics. The CMA’s efforts underscore the importance of ensuring that AI-driven businesses adhere to fair competition practices.

Navigating the Regulatory Landscape

As AI continues to advance, the UK Government’s approach to regulation is vital. The upcoming AI Safety Summit, scheduled for November 2023 at Bletchley Park, exemplifies the Government’s commitment to addressing AI’s challenges and opportunities.

As AI companies face the dynamic and complex political landscape surrounding their innovations, it is imperative that they consider greater political engagement. Embracing active involvement in shaping AI policies and regulations can ensure a conducive environment for responsible AI development and foster a more collaborative and informed approach to navigating the evolving challenges of the industry.

BREVIA CONSULTING PROVIDES STRAIGHTFORWARD PUBLIC AFFAIRS AND PUBLIC RELATIONS SUPPORT TO BUSINESSES AND CHARITIES.

Discover how Brevia can help you and your organisation by contacting the Brevia Team on 020 7091 1650 or contact@brevia.co.uk

[1] Forbes, ‘UK Artificial Intelligence (AI) Statistics And Trends In 2023’, 26 June 2023, Link

[2] Forbes, ‘UK Artificial Intelligence (AI) Statistics And Trends In 2023’, 26 June 2023, Link

[3] PoliticsHome, ‘Tory MP Urges Government To “Follow Through” With Global AI Summit’, October 2023, Link

[4] Business Matters, ‘The UK government has disbanded the independent advisory board of its Centre for Data Ethics and Innovation (CDEI) without any announcement amid a wider push to position the UK as a global leader in AI governance’, September 2023, Link

[5] Business Matters, ‘The UK government has disbanded the independent advisory board of its Centre for Data Ethics and Innovation (CDEI) without any announcement amid a wider push to position the UK as a global leader in AI governance’, September 2023, Link

[6] AI Council, ‘AI Council’, October 2023, Link

[7] Chatham House, ‘Conservative Party conference: Does AI offer a new global role for Britain?’, September 2023, Link

[8] Conservative Party, ‘Your priorities are our priorities: Rishi Sunak sets out our vision for you’, January 2023, Link

[9] The Guardian, ‘AI should be licensed like medicines or nuclear power, Labour suggests’, 5 June 2023, Link

[10] The Guardian, ‘AI should be licensed like medicines or nuclear power, Labour suggests’, 5 June 2023, Link

[11] The Guardian, ‘AI should be licensed like medicines or nuclear power, Labour suggests’, 5 June 2023, Link

[12] ODI, ‘Labour Conference 2022: key takeaways for data and digital policy’, September 2022, Link

[13] INews, ‘Oxford and Cambridge ban ChatGPT over plagiarism fears but other universities chose to embrace AI bot’, 28 February 2023, Link
