Why data quality
will determine whether
your AI is safe

To put it in very simple terms, AI is like a very enthusiastic learner. If you give it a piece of information, it will take that ‘knowledge’ and apply it over and over again. So, in the same way that you shouldn’t teach someone something false or misleading, you shouldn’t let AI learn from data that contains inaccuracies or biases. 

AI takes what it believes to be true and bases all future decisions on it. So, what is the effect of an AI bias? 

 

The effect of bias on AI decision-making 

Training an AI model to make decisions of course requires training data. If that training data carries an unwanted bias, so will the AI's decisions. 

Let’s say a bank is training an AI to assist with loan applications. Under pressure to get to market quickly, the bank might be tempted to use all of its application data to train the model. But the vast majority of that data is surplus to requirements, and risks introducing biases the bank is unaware of. The result could be unfair loan decisions, which risk violating AI regulations, consumer law, and even human rights law. 

 

AI Regulation and Compliance 

The five pillars of AI ethics are: 

  • Fairness 
  • Contestability and redress 
  • Accountability and governance 
  • Transparency and explainability 
  • Safety, security, and robustness 

There is already legislation — like the UK’s Equality Act 2010 — which protects the public from unfair discrimination. It is quite possible that there will soon be additional legislation to cover discrimination specifically when AI is in use. 

The EU has already introduced such legislation with its Artificial Intelligence Act, under which non-compliance can mean a fine of €35 million or 7% of turnover, whichever is greater. In the UK, the King’s Speech of July 2024 announced that the government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. 

Transparency, explainability, and contestability requirements will expose any unfairness in AI systems, and automated profiling of customers is already tightly restricted under data protection law. 

 

How to protect data quality and prevent AI bias 

In the example of the bank’s loan decisions from earlier, training data should be limited to specific data that allows responsible lending, like credit score and income. ‘More’ does not necessarily mean ‘better’ when not all training data is relevant. 
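Limiting a model to justifiable features can be as simple as an allow-list applied before training. A minimal sketch of that idea, assuming purely illustrative column names and records:

```python
# Hypothetical loan-application records; field names are illustrative only.
loan_applications = [
    {"credit_score": 720, "income": 45000, "postcode": "AB1", "approved": True},
    {"credit_score": 580, "income": 22000, "postcode": "CD2", "approved": False},
]

# Features the bank can justify for responsible lending decisions.
RELEVANT_FEATURES = {"credit_score", "income"}

def restrict_features(records, allowed):
    """Drop every column not on the approved feature list, keeping only
    data with a clear lending rationale (plus the training label)."""
    return [{k: v for k, v in r.items() if k in allowed or k == "approved"}
            for r in records]

training_data = restrict_features(loan_applications, RELEVANT_FEATURES)
print(training_data[0])
# 'postcode' (a potential proxy for protected characteristics) is gone
```

The point of the allow-list is that every feature must be positively justified, rather than excluded only once a problem is found.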

On a wider level, to safeguard the compliance and reliability of your AI, you must ensure that the training data accurately reflects reality. Otherwise, you risk creating false patterns and misleading information which could contaminate your AI’s training data and corrupt its decision-making. 
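One basic way to test whether training data reflects reality is to compare the make-up of the training set against the known make-up of the population it will serve. A minimal sketch, assuming illustrative categories and a 10% tolerance chosen purely for the example:

```python
def representation_gap(training_counts, population_shares):
    """Return categories whose share of the training data deviates from
    the known population share by more than a tolerance."""
    total = sum(training_counts.values())
    gaps = {}
    for category, pop_share in population_shares.items():
        train_share = training_counts.get(category, 0) / total
        if abs(train_share - pop_share) > 0.10:  # illustrative tolerance
            gaps[category] = (train_share, pop_share)
    return gaps

# Example: training data heavily over-represents one region.
training_counts = {"urban": 900, "rural": 100}
population_shares = {"urban": 0.70, "rural": 0.30}
print(representation_gap(training_counts, population_shares))
# flags both: urban is 0.90 of training data vs 0.70 of the population,
# rural is 0.10 vs 0.30
```

A check like this catches skew before training, when it is cheap to fix, rather than after the model has already learned patterns from a distorted picture of reality.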

Unfortunately, many organisations suffer from fragmented and inconsistent data because of unstandardised practices, weak data cultures, and data silos. Addressing those issues requires Data Governance and Master Data Management (MDM). 

Data Governance and MDM both combine people, processes, and technology to protect the integrity and reliability of data. Data Governance addresses data sources, structure, storage, cleanliness, and accessibility. MDM standardises contextual, non-transactional data from multiple sources to achieve a ‘golden record’ or single version of the truth (SVT). 
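The golden-record idea can be sketched in a few lines: records for the same customer arrive from several systems, and a survivorship rule decides which value wins for each field. The rule here (the most recently updated non-empty value survives) and the record layout are illustrative; real MDM platforms support far richer matching and survivorship rules.

```python
from collections import defaultdict

# Hypothetical customer records from two source systems.
records = [
    {"customer_id": "C1", "name": "J. Smith", "email": "",
     "updated": "2023-01-10", "source": "CRM"},
    {"customer_id": "C1", "name": "Jane Smith", "email": "js@x.com",
     "updated": "2024-05-02", "source": "Billing"},
]

def golden_records(records):
    """Consolidate per-customer records into one 'golden record' each."""
    by_id = defaultdict(list)
    for r in records:
        by_id[r["customer_id"]].append(r)
    golden = {}
    for cid, recs in by_id.items():
        recs.sort(key=lambda r: r["updated"])  # oldest first
        merged = {}
        for r in recs:  # fresher non-empty values overwrite older ones
            for k, v in r.items():
                if v and k != "source":
                    merged[k] = v
        golden[cid] = merged
    return golden

print(golden_records(records)["C1"])
# the fresher Billing record supplies the name and fills the missing email
```

The output is the single version of the truth: one record per customer, assembled from the best available value for each field.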

Both Data Governance and MDM will be fundamental to compliant AI. Even if the data problems are much less severe than illegal discrimination, they can have major ramifications. If your training data misleads your AI, it will make decisions for a reality that doesn’t exist, rather than the reality in which your organisation operates. That is at best useless and at worst dangerous. 

 

Your AI Strategy 

As you will already be well aware, the pace of AI development is accelerating. Many sectors are simultaneously cautious about using it and impatient to get started. Competitors are creating strategies and offerings powered by AI, and while keeping pace with the market is vital, it is also essential to do so responsibly. 

Agile Solutions is the perfect partner to help you navigate the AI revolution. We have exceptional breadth and depth of experience designing and implementing data and transformation projects for large and complex organisations, and our AI Frameworks offering provides the full data management required for AI implementation. Our experienced engineers, data scientists, AI consultants, and advisory, risk, and compliance teams are ready to help you take advantage of the opportunities of AI while mitigating the risks. We offer: 

  • AI maturity assessments to establish your organisation’s readiness to adopt AI tools, and create roadmaps to your AI goals 
  • AI Artefact assessments to provide peace of mind that your AI systems are safe and effective 
  • AI Model Maintenance for ongoing assessment and repair of your AI systems 
  • AI Model Development to create AI systems and training data tailored to your desired outcome 
  • AI Application Environment which creates a secure and controlled environment for the development of your AI tools 

Agile can manage your AI roadmap, with certified and experienced AI engineering teams working within EU and UK AI governance frameworks, and with active project management that delivers your goals — not in rigid projects that return ROI only after many months, but in sprints that show returns early and incrementally. 

To speak to a data, AI, and strategy expert who can assess your relationship with AI, discuss your goals, and make strategic recommendations for your AI initiatives, contact swhiting@agilesolutions.co.uk.