What AI regulations apply
to your organisation?

One of the reasons many businesses are holding back from creating and committing to AI strategies is not just fear of getting it wrong, but fear of getting it wrong in a way that exposes the organisation to reputational and regulatory risk. 

The fact that AI is new means it feels riskier, and it is true that we are in uncharted territory — AI is, after all, the first data product or service ever to be regulated. However, if you demystify AI and the regulation surrounding it, you should be able to take a more confident approach. 

To start, there are three main areas of legislation that are likely to apply to the AI applications you use: 

  • The UK AI framework, shaped by the Council of Europe's Framework Convention on Artificial Intelligence 
  • The EU AI Act 
  • Existing legislation that is not AI-specific but is relevant, such as GDPR 

Here is a short look at those and the rules within them. 

 

UK AI Framework 

The UK has not yet passed any legislation that explicitly governs AI. However, it has signed the Council of Europe's Framework Convention on Artificial Intelligence, which commits it to writing AI legislation designed to: 

  • protect human rights 
  • protect democracy 
  • protect the rule of law (and laws like GDPR) 

Prior to that agreement, the government used the King's Speech in July 2024 to confirm that it will ‘seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.’ 

Whatever AI regulation the UK passes may well resemble the EU's legislation, given the overlap between EU and Council of Europe membership: all 27 EU member states are also members of the Council of Europe and already fall under the EU AI Act. Additionally, the current UK framework already runs alongside EU regulations.  

The Framework Convention ‘aims to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law, while being conducive to technological progress and innovation.’ 

 

The EU AI Act 

  1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
  2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating.

EU Artificial Intelligence Act, Chapter III, Article 9, paragraphs 1 and 2 

 

If you operate in the EU, even purely online, you will have to abide by the European Union Artificial Intelligence Act, which carries penalties of up to 7% of turnover or €35m, whichever is greater. 

The Act defines four risk categories for AI systems and applications: 

  • Unacceptable risk: tools that use exploitative, subliminal, or emotionally manipulative techniques. Tools in this category are prohibited. 
  • High risk: tools that risk negatively affecting wellbeing, human rights, or access to services. These systems are heavily regulated. 
  • Limited risk: AI that interacts directly with humans (e.g. chatbots), imitates specific people, or generates synthetic content. 
  • Low risk: all other AI systems. 

 

Under the Act, organisations that use AI are obliged to: 

  • Ensure and maintain AI literacy across their team 
  • Assess the risks of the AI tools they use 
  • Test their AI applications: high-risk AI requires quarterly auditing, limited-risk programmes every six months, and low-risk every year. 
  • Have AI quality management protocols 


Existing legislation and its relevance to AI
 

Of course, there are laws that do not govern AI directly or specifically, but which are relevant to your AI strategy. Those include laws on: 

  • Data protection 
  • Consumer protection 
  • Intellectual property 
  • Human rights 

When AI is not safe and effective, its unseen or unguarded biases can discriminate unfairly against specific groups. If your teams do not know what AI they are using or how to use it, they risk leaking sensitive information. And if AI is used by multiple organisations to set prices, an industry may unwittingly be price-fixing through its automated systems and practices. Regular testing ensures the necessary guardrails are in place. 

 

Protect yourself from the legal risks of AI 

Agile’s expansive range of expertise means we are the one partner that you need throughout the AI journey and beyond. 

  • Our methodology delivers value early on and in continual increments 
  • Our consultants have a deep understanding of UK and EU regulatory frameworks 
  • Our data scientists and engineers are highly qualified in AI applications and related systems 

 

Our Artificial Intelligence Framework can help you design an AI strategy that supports your organisation's goals and ambitions while aligning your applications and your approach with current regulations and upcoming legislation. 

To create and implement a framework for a safe and reliable AI strategy, get in touch to discuss practical, immediate steps. Email swhiting@agilesolutions.co.uk, or call 01908 010618.