6 AI lessons from the Post Office scandal

The Post Office scandal is a tragic and shameful example of what can happen when blind faith is placed in an AI tool and the people using it are denied proper oversight, support and training.

When such tools are designed, deployed and managed with care, however, with deep consideration and planning around relevance, purpose and transparency, their use can revolutionise a business's performance without sacrificing safety or integrity.


Between 1999 and 2015, the Post Office wrongly accused 736 sub-postmasters of theft and fraudulent accounting, because of problems caused by its Horizon IT system. The accusations resulted in convictions (many now overturned), bankruptcy, the loss of livelihoods and homes, family breakdown, and at least four deaths by suicide.

SMEs and small accounting firms hoping to make use of potentially powerful technology tools should heed the lessons of the Post Office scandal – a clear example of misguided use of AI tools without transparency or oversight.

1 Design a human handbrake

Professor Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, says there is a lot of hype around AI at the moment – and organisations selling AI systems have an interest in people buying into that hype.

But it would be a terrible mistake for any business to blindly place trust in a computer system on the basis that it will always make the most accurate decision, she says.

“We’re currently reeling from a decade-long scandal where people who ran local post offices were wrongly convicted of theft and fraud, because the accounting systems they used were malfunctioning and presenting the wrong financial figures,” Neff says.

When sub-postmasters raised concerns about the figures the system was producing, they were told that they were alone in their beliefs, that others did not share their experiences. This was not true, but it had the desired chilling effect on such discussions, preventing the ‘human handbrake’ that is an essential oversight and quality control in the deployment of AI tools.

“Making sure people are able to push back or question these systems is critically important.”
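Neff's "human handbrake" can be expressed directly in software. The sketch below is purely illustrative (the `Discrepancy` record and its fields are hypothetical, not any real Post Office or vendor system): the rule is that a system-generated flag can never be escalated to enforcement until a named human has reviewed it, and a sub-postmaster's dispute is recorded rather than suppressed.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Discrepancy:
    """A shortfall flagged by an automated accounting system (hypothetical schema)."""
    branch_id: str
    amount: float
    flagged_by_system: bool
    human_reviewed: bool = False
    human_decision: Optional[str] = None  # "confirmed", "disputed", or None

def can_escalate(d: Discrepancy) -> bool:
    """A system flag may only proceed to enforcement after a human
    reviewer has examined it and explicitly confirmed it."""
    return d.flagged_by_system and d.human_reviewed and d.human_decision == "confirmed"

# A flag with no human sign-off is held, not acted on.
d = Discrepancy(branch_id="BR-001", amount=-2500.0, flagged_by_system=True)
assert can_escalate(d) is False

# The operator's challenge is recorded and blocks escalation.
d.human_reviewed = True
d.human_decision = "disputed"
assert can_escalate(d) is False
```

The design point is that the override path exists by construction: the system physically cannot act on its own output without a logged human decision.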

2 Good AI begins with perfect data

“AI is developing at a dizzying pace,” says Steve Salvin, CEO and founder of Aiimi, a technology company specialising in AI and data that has worked with several UK government bodies and the Financial Conduct Authority.

“As public models like ChatGPT make the technology more accessible, small businesses that are keen to unlock efficiencies, spot new opportunities and scale are among those rushing to experiment.”

However, to use AI of any kind effectively and responsibly, the business must first ensure its data is in order.

“This means that data must be well governed and high quality,” Salvin says. “The reason for this is simple: if you put garbage in, you get garbage out. In other words, accurate data makes for more relevant insights and accurate decision-making.”
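Salvin's "garbage in, garbage out" point can be sketched as a pre-ingestion quality gate. The field names and allowed sources below are invented for illustration; the principle is that records failing basic governance checks are quarantined for human review rather than silently fed to a model.

```python
def validate_record(record: dict) -> list:
    """Return a list of data-quality problems; an empty list means the
    record is fit to feed downstream analytics or an AI model."""
    problems = []
    if not record.get("account_id"):
        problems.append("missing account_id")
    if record.get("amount") is None:
        problems.append("missing amount")
    if record.get("source") not in {"ledger", "till", "bank_feed"}:
        problems.append("unknown or untraceable source")
    return problems

records = [
    {"account_id": "A1", "amount": 120.50, "source": "ledger"},
    {"account_id": "", "amount": None, "source": "spreadsheet"},
]

# Only clean, traceable records go forward; the rest are quarantined.
clean = [r for r in records if not validate_record(r)]
rejected = [r for r in records if validate_record(r)]
```

A gate like this is deliberately boring: it is the governance step that makes every later AI output auditable back to a trusted source.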

3 Recognise that AI can be deeply flawed

Neff has studied the way teams make decisions around automation tools.

“AI can be a fantastic tool to aid in creative processes or automate monotonous tasks, but anyone looking to adopt or scale up use of AI needs to recognise that the systems we have currently are far from perfect,” Neff says.

“Technically, AI tools can recreate the social biases that inevitably exist in training data. We know of examples where AI recruitment tools favoured men over women for certain jobs, or preferred people from certain ethnic or societal backgrounds, because the systems had learned from data where those groups were already being favoured.”

Therefore, she says, the implementation process can be tricky on a social level. Businesses must be open to asking hard questions about how to fit AI systems into their organisations and teams.

“Getting people ready to use these systems responsibly is a key first step in building AI capacity,” she says.

And that requires training, transparency and trust – none adequately present in the Post Office’s deployment of the Horizon system.

4 Don’t rush

There are ways businesses can implement AI securely and reap the many rewards the technology has to offer. While innovation at pace can be a good thing, it must be balanced with thoughtful implementation. And this is a human issue, not a technology issue.

Businesses that rush to adopt AI are in danger of overlooking risks that can turn useful tools into problems. Robust governance structures should be in place before such a system is selected or implemented.

“Unless businesses themselves have a good handle on their data and have ensured files are up to date, properly secured and accessible to the right people, they can’t expect it to lead to accurate or relevant AI outputs,” Salvin says.

He warns that poor governance risks poor decision-making, wasted time and money, and compromised data security.

But the Post Office scandal provides an example of a worst-case scenario for organisations with poor governance structures implementing AI tools – the greatest risk is to people’s livelihoods and lives.

5 Focus on your desired outcome

Many are distracted by the immense capabilities of AI systems.

Instead of focussing on what the technology can do, it’s far more useful to concentrate on what you need it to do. So, what is the desired outcome of AI in your business?

For example, according to a Gartner study, almost half of digital workers struggle to find the information needed to effectively perform their jobs, Salvin says.

“Every single one of these employees would benefit from practical AI tools that harness the right models securely … and work to safely and reliably help users get to the answers they need,” he says.

The sub-postmasters accused of so much wrongdoing were highly skilled and highly trained – they knew very well how to perform the work that had been automated, and with greater accuracy than the flawed tool.

Starting small to achieve a desired outcome requires a thorough assessment of the workforce’s capabilities and strengths, to identify the most effective and impactful ways to augment workers’ highest-value labour.

“Staff who have accurate and relevant information at their fingertips are more efficient, make better decisions, can spot opportunities and identify innovative solutions to business challenges.”

6 Use AI’s power for good

The Post Office example was as disturbing for its technical failure as for the way the corrupted data it produced was then weaponised against innocent people on an uneven playing field.

The few tools each lone sub-postmaster could have used to defend themselves – paper and electronic records – were also taken away from them.

Businesses looking to use AI tools should assess vulnerabilities among the people who would be using and affected by the use of those tools.

“Bad AI reinforces the existing power imbalances in society,” Neff says. “Those most at risk are those who already have the least power in society: the poor, people in insecure employment, as well as ethnic minorities and women.

“Businesses using AI need to be acutely aware of these risks and build in mechanisms to evaluate the use of AI and challenge bad decisions.”

Be alert, but not alarmed

Is AI useful for accountants? Absolutely, Salvin says.

“In particular, financial accountants can reap the rewards of AI by getting data governance right,” he says.

This relies on the data that feeds the AI being accurate, properly stored and cited, and easily traced and controlled.

“This is essential for accountants who are working with sensitive information, and who need to be able to cross-check information and prove that results add up,” Salvin says.

“Good quality data can help models show users exactly how the AI got to its answer.”
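That traceability can be sketched in a few lines. The retrieval step below is a toy stand-in (the `answer_with_citations` helper and the document names are invented for illustration): the key point is that the answer carries the IDs of the records it drew on, so an accountant can cross-check the result against its sources.

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    sources: list = field(default_factory=list)  # IDs of documents the answer drew on

def answer_with_citations(query: str, documents: dict) -> Answer:
    """Toy retrieval: keep only documents mentioning the query term,
    and record which ones were used so the result can be audited."""
    used = [doc_id for doc_id, body in documents.items()
            if query.lower() in body.lower()]
    return Answer(text=f"Based on {len(used)} source document(s).", sources=used)

docs = {
    "invoice-2023-114": "Quarterly VAT figures and supporting receipts",
    "memo-7": "Staff rota for December",
}
ans = answer_with_citations("VAT", docs)
# ans.sources lists only "invoice-2023-114": the audit trail an accountant can follow
```

Real systems do this with far more sophisticated retrieval, but the contract is the same: no answer without a source list.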
