Preparing for AI regulation
Do you really know what AI you are using, and how you are using it?

The five pillars of AI ethics (fairness, contestability, accountability, transparency, and security) are not simply guidelines; they are written into international law. 

The EU has now introduced the Artificial Intelligence Act, which imposes a legal responsibility on organisations to “take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.” (Chapter I, Article 4) 

Educating teams about AI and the AI systems in use is a substantial task in itself, but there is also the risk that much of an organisation’s AI use is ‘under the radar’: either embedded in common platforms and software, or taking place as ‘shadow AI’, the unauthorised use of AI by teams or individuals outside of a governance framework. 

Maintaining best practice is made considerably harder by not knowing what practices are taking place. 
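One way to begin surfacing that hidden usage is to look for it in the network traffic you already log. The sketch below scans a web proxy log for requests to well-known public AI services; the log path, line format, and domain list are illustrative assumptions rather than a product recommendation.

    # Sketch: surface possible 'shadow AI' usage by scanning a web proxy log
    # for requests to well-known AI service domains. The log path, line
    # format, and domain list are illustrative assumptions; adapt them to
    # your own estate.

    from collections import Counter

    # Domains commonly associated with public AI tools (extend as needed).
    AI_DOMAINS = {
        "api.openai.com",
        "chat.openai.com",
        "chatgpt.com",
        "claude.ai",
        "gemini.google.com",
    }

    def scan_proxy_log(path: str) -> Counter:
        """Count hits per (user, domain) for known AI endpoints.

        Assumes whitespace-separated lines of: timestamp user destination_host.
        """
        hits: Counter = Counter()
        with open(path, encoding="utf-8") as log:
            for line in log:
                parts = line.split()
                if len(parts) < 3:
                    continue  # skip malformed lines
                _timestamp, user, host = parts[0], parts[1], parts[2]
                if host in AI_DOMAINS:
                    hits[(user, host)] += 1
        return hits

    if __name__ == "__main__":
        for (user, host), count in scan_proxy_log("proxy.log").most_common():
            print(f"{user} -> {host}: {count} requests")

A report like this is a starting point for a conversation, not a disciplinary tool: the aim is to learn where AI literacy effort should be focused. 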

 

Remaining compliant with AI laws and regulations 

As AI develops, it becomes an increasingly common feature of business software. AI assistants are built into project management tools, transcription apps are powered by AI, natural language processing tools can draft documents, and business intelligence apps generate reports using automation and conversational AI, to name just a few examples. 

Many organisations are also building specific projects and strategies around AI, like this energy distributor that cut the cost of Net Zero by using artificial intelligence to accurately forecast demand and power load. 

Whatever your relationship with AI, your organisation and your teams are legally and ethically obliged to understand: 

  • what AI they are using 
  • what they are using it for 
  • how to use it safely and responsibly 
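
A simple, practical way to keep track of those three questions is an AI register that records each tool, its purpose, and the safeguards around it. The sketch below is illustrative only; the field names and example values are assumptions, not a regulatory template.

    # Sketch: a minimal 'AI register' entry covering the three questions
    # above. Field names and example values are illustrative assumptions,
    # not a regulatory template; adapt them to your governance framework.

    from dataclasses import dataclass, field

    @dataclass
    class AIRegisterEntry:
        tool: str                   # what AI is being used
        purpose: str                # what it is being used for
        data_shared: list[str]      # what information goes into it
        safeguards: list[str] = field(default_factory=list)  # how it is used safely
        approved: bool = False      # sanctioned under the governance framework?

    example = AIRegisterEntry(
        tool="Transcription assistant in meeting software",
        purpose="Minutes for internal project meetings",
        data_shared=["meeting audio", "attendee names"],
        safeguards=["no client-confidential meetings",
                    "transcripts reviewed before filing"],
        approved=True,
    )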

Ignorance in any of those areas can jeopardise your compliance, for example by introducing biased or inaccurate data. AI will ‘learn’ from the data it is given, and by basing decisions on that information it can very quickly compound the problem, amplify the inaccuracy, and potentially fall foul of regulations, or even human rights law. 
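
To make that compounding effect concrete, here is a deliberately simplified feedback loop (all figures are illustrative assumptions): two groups are equally qualified in reality, but the historical record under-approves one of them, and a naive policy that learns from its own decisions widens the gap every cycle.

    # Sketch: a toy feedback loop showing how biased data can compound.
    # Two groups are equally qualified in reality, but the historical record
    # under-approves group B. A naive policy thresholds the learned rate and
    # then learns from its own decisions, so the gap widens every cycle.
    # All figures are illustrative assumptions, not real data.

    history = {
        "A": {"approved": 50, "total": 100},  # recorded approval rate 0.50
        "B": {"approved": 30, "total": 100},  # recorded rate 0.30 (biased record)
    }

    APPROVAL_CUTOFF = 0.4       # approve a group only if its learned rate clears this
    APPLICANTS_PER_ROUND = 100

    for round_no in range(1, 6):
        for record in history.values():
            learned_rate = record["approved"] / record["total"]
            # Decisions driven by the learned rate are written straight back
            # into the data that the next round learns from.
            approved = APPLICANTS_PER_ROUND if learned_rate >= APPROVAL_CUTOFF else 0
            record["approved"] += approved
            record["total"] += APPLICANTS_PER_ROUND
        rates = {g: r["approved"] / r["total"] for g, r in history.items()}
        print(f"round {round_no}: " + ", ".join(f"{g}={v:.2f}" for g, v in rates.items()))

After five rounds the recorded gap between two identical populations has grown from 20 points to nearly 90: exactly the kind of drift that AI-literate users are trained to watch for. 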

AI literacy is therefore a vital safeguard: users must know the ‘cause and effect’ of their AI usage, while remaining vigilant for AI they may be using less deliberately. 

 

Managing AI use throughout an organisation 

AI literacy does not only cover the systems that the organisation sanctions and mandates. Individuals can be prone to using tools independently, particularly large language models (LLMs) such as ChatGPT. 

It is tempting to ask AI to do research, write memos, or draft emails and reports, simply because it can perform work in seconds that would take a human several hours. However, using such tools irresponsibly can be disastrous. For example, in 2023, a Samsung software engineer accidentally leaked sensitive source code by pasting it into ChatGPT. Just as AI use can put intellectual property at risk, it can also expose personal data and other sensitive private or commercial information. 
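
One low-cost safeguard that AI literacy training can introduce is screening text before it is pasted into a public tool. The sketch below flags some common red flags; the patterns are illustrative assumptions and nowhere near exhaustive, so treat it as a teaching aid rather than a complete data-loss-prevention solution.

    # Sketch: a minimal pre-submission check that flags text which probably
    # should not be pasted into a public AI tool. The patterns are
    # illustrative assumptions, not exhaustive; real deployments use
    # dedicated data-loss-prevention tooling.

    import re

    RISK_PATTERNS = {
        "possible API key":       re.compile(r"\b(?:sk|pk|api)[-_][A-Za-z0-9]{16,}\b"),
        "private key header":     re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
        "hard-coded credential":  re.compile(r"(?i)\b(?:password|secret|token)\s*[:=]\s*\S+"),
        "internal source marker": re.compile(r"(?i)\b(?:confidential|proprietary|internal use only)\b"),
    }

    def flag_risks(text: str) -> list[str]:
        """Return the names of any risk patterns found in the text."""
        return [name for name, pattern in RISK_PATTERNS.items() if pattern.search(text)]

    if __name__ == "__main__":
        draft_prompt = "Please review this: password = 'hunter2'  # internal use only"
        findings = flag_risks(draft_prompt)
        if findings:
            print("Do not submit. Flagged:", ", ".join(findings))
        else:
            print("No obvious risks found (this is not a guarantee).")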

Casual and ad hoc AI usage is hard to govern because many tools are free and accessible, and management may not even know it is happening. AI literacy is the best guard against it: when teams properly understand the risks, they are more likely to police their own actions. 

 

How to protect your AI compliance 

If you suspect that your AI standards may not be robust, that risky AI practices are taking place, or that you need to improve your relationship with AI to achieve a specific goal, then the first step is to take an AI Maturity Assessment. 

That assessment is a third-party audit of your team’s: 

  • knowledge of the types of AI and automation 
  • understanding of where automation, LLMs, and NLP are already functioning 
  • appreciation of the risks associated with those tools (bias, security, intellectual property, etc.) 
  • understanding of AI best practice 

Your assessor can then create a roadmap for achieving the level of AI maturity that your operations and your strategy require, and suggest any immediate remedial work to resolve the most pressing shortcomings. 

In response to the growing regulatory pressure and mounting list of AI-related PR disasters, many businesses are urgently taking stock of their AI processes and systems. If you want to protect your business against the legal or commercial consequences of poor AI practices, Agile Solutions is perfectly placed to assist. 

Our teams of engineers, data scientists, and AI consultants have unrivalled experience managing data and transformation initiatives for large and complex organisations, as well as deep knowledge and experience navigating EU and UK regulations. 

To speak to a data, AI, and strategy expert who can assess your relationship with AI, discuss your goals, and make strategic recommendations for your AI initiatives, contact swhiting@agilesolutions.co.uk.