You are confident in your organisation’s AI Maturity, your overall business data quality is ready for AI, you have policies to help your AI usage remain safe, ethical, and compliant, and you have a framework and strategy for AI. Now, you need to know that you have the means of executing your strategy and making AI a transformative success.
The next priority is to establish whether the AI tools you have offer the breadth and depth of capabilities to support your strategy, are fit for purpose, are compliant with regulations, and are in line with your AI policies.
How to assess your AI applications
When you have the applications that offer the necessary functions, you need to know with certainty whether they will be safe and effective and will deliver the expected results.
The first stage is to assess the system for bias. Bias in data does not necessarily have to be removed in every case, but you do always need to be aware of it and be sure it matches your expectations. If it does not match your expectations, then you will need to train the AI application to remove or balance the bias.
For example, imagine you have an AI application designed to filter job applications. Because it is illegal to base a hiring decision on gender (except in highly specific circumstances), you should exclude (or remove) gender from the training data: AI cannot develop bias about characteristics that it does not see.
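To make this concrete, here is a minimal sketch of excluding a protected attribute before training. The record fields (including the `gender` column) are illustrative assumptions, not a reference to any specific product or dataset:

```python
def exclude_protected(records, protected=("gender",)):
    """Return copies of the records with protected fields removed,
    so the model never sees them during training."""
    return [{k: v for k, v in r.items() if k not in protected}
            for r in records]

# Hypothetical job-application records (field names are assumptions)
applications = [
    {"name": "A", "gender": "F", "years_experience": 5, "score": 82},
    {"name": "B", "gender": "M", "years_experience": 3, "score": 74},
]

# The training data contains everything except the protected attribute
training_data = exclude_protected(applications)
```

In practice the same idea applies whatever your data layout: drop the protected column before the model is trained, and keep the original records intact for audit purposes.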
However, not all biases are so easy to eliminate or balance, and it may take close inspection to discover what biases the AI has developed. Anyone in the process can introduce bias, and once bias is in the system, the system will learn from the biased data and amplify it.
At this stage in your AI strategy, you should have a support process in place that allows anyone who notices an issue to raise a complaint against the AI tool. The AI model also requires formal, scheduled monitoring and maintenance, so that quality control does not rely on ad hoc observations.
The necessary AI application testing process is based around this sequence:
- Define your business requirements, objectives, and expectations
- Ensure the data is fit for purpose — cleaning any obvious invalid data, ensuring it meets output requirements and matches business requirements, defining acceptable error ranges
- Manage any outlying data points in the training data
- Balance and cleanse the data so that no group or category is over- or under-represented
- Check for and remove bias where appropriate
- Split the dataset into training, testing, and validation sets, and document the process and any issues discovered along the way
- Test your AI application
- Evaluate the model against expectations
- Set the application live and monitor
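The dataset split in the sequence above can be sketched as follows. The 70/15/15 proportions and fixed seed are illustrative assumptions; your acceptable error ranges and business requirements should drive the real choice:

```python
import random

def split_dataset(rows, train=0.7, test=0.15, seed=42):
    """Shuffle the rows and split them into training, testing, and
    validation sets. Proportions are illustrative; whatever remains
    after the training and testing cuts becomes the validation set."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # fixed seed keeps the split reproducible
    n_train = int(len(rows) * train)
    n_test = int(len(rows) * test)
    return (rows[:n_train],
            rows[n_train:n_train + n_test],
            rows[n_train + n_test:])

train_set, test_set, val_set = split_dataset(range(100))
```

Recording the seed alongside the split is one simple way to satisfy the documentation step: anyone reviewing the process can reproduce exactly which records went where.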
Within the systematic approach above, the reviewer must also consider any reviews and issues that are raised about the application.
As a hypothetical, let’s say an application such as a chatbot received repeated complaints about the accuracy of its information. Reviewers would look at the training data and ask whether it is expansive enough, and whether it is still valid. It might be, for example, that some business terms and conditions have changed, but the chatbot hasn’t been retrained with that data. Reviewers would recommend and/or implement the necessary retraining, and include in the project plan an action to retrain the chatbot with every change of terms and conditions.
In the case of a repeated complaint about unfair bias, the course of action would likely be to scrap the data and retrain the chatbot. It may first be a matter of expanding the training conditions with ‘noisier’ data to test the chatbot in simulated real-world conditions.
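As a rough illustration of testing with ‘noisier’ data, you might perturb clean test inputs and check that the application still responds acceptably. The typo-swap function below is a hypothetical stand-in for whatever noise is realistic in your domain:

```python
import random

def add_typo_noise(text, rate=0.1, seed=0):
    """Return a copy of `text` in which roughly `rate` of adjacent
    character pairs are swapped, a crude stand-in for the typos and
    slips found in real-world user input."""
    chars = list(text)
    rng = random.Random(seed)  # seeded so noisy test cases are reproducible
    for i in range(len(chars) - 1):
        if rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

# Generate a noisy variant of a clean test prompt for the chatbot
clean = "what is your refund policy"
noisy = add_typo_noise(clean, rate=0.3)
```

Running the same evaluation suite over both the clean and the noisy prompts gives a simple simulated real-world check before committing to a full retrain.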
How to develop your own AI model
Of course, it may be that the models you have, or the ‘off-the-shelf’ AI applications, cannot achieve what you want, in which case you may need to develop your own.
The first stage is to ensure that you establish the business requirements of your AI model, and that your development team have the same understanding of those requirements as your wider team.
The next steps are:
- Managing any outlying data points in the training data
- Balancing and cleansing the data so that no group or category is over- or under-represented
- Checking, documenting, and removing bias
- Splitting the data into training, testing, and validation sets
- Releasing the model to test for final verification
- Evaluating the model versus expectations
- Releasing the model to production and monitoring
Once the model is live, it will require continual monitoring, for example to ensure it has not acquired any biases and is still delivering the expected results.
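A hedged sketch of what such monitoring might look like: compare the live positive-outcome rate against a known baseline and flag drift beyond a tolerance. The rates and threshold here are illustrative assumptions, not recommended values:

```python
def check_drift(baseline_rate, live_outcomes, tolerance=0.10):
    """Flag drift when the live positive-outcome rate strays more than
    `tolerance` from the baseline rate. Thresholds are illustrative."""
    if not live_outcomes:
        return False  # nothing to compare yet
    live_rate = sum(live_outcomes) / len(live_outcomes)
    return abs(live_rate - baseline_rate) > tolerance

# Hypothetical example: baseline approval rate of 0.40, and ten recent
# live decisions where 1 means approved (live rate = 0.8)
recent = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
drifted = check_drift(0.40, recent)
```

Checks like this can run on a schedule, feeding the formal monitoring process described above rather than waiting for someone to notice a problem by chance.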
If you don’t have the skills and experience in-house to create an AI model, or to monitor and maintain your applications, then outsourcing development gives you access to those capabilities without having to invest time and money in acquiring talent.
Agile’s diverse team and expansive range of expertise mean we are the one partner that you need throughout the AI journey and beyond. Our consultants have a deep understanding of UK and EU regulatory frameworks, and our data scientists and engineers are highly qualified in AI applications and related systems.
We can offer:
- expert development of bespoke AI models that match your business requirements
- data cleaning as part of the development process
- application development in line with EU and UK regulations
To book your own AI Application Assessment, or to discuss bespoke AI model development, email swhiting@agilesolutions.co.uk, or call 01908 010618.