Beware of the risk of bias when using AI
Finance professionals are urged to exercise caution and remain alert to the risk of unintended bias when relying on AI results.
Last year, Rory McIlroy finally got his hands on a green jacket after a considerable wait. As this year’s tournament gets underway at Augusta National, my mind strays back to last year’s opening round when, after starting brightly, Rory almost self-destructed on the back nine. A calamitous series of shots included a real clunker at the 14th hole, where he hit his approach shot into the water. As someone who always looks to learn from leaders in their respective sports, I found it interesting to hear Rory’s post-round reaction to this faux pas:
“It was a good reminder that you just really have to have your wits about you on every single golf shot.”
Of course, other words may have been swirling about in Rory’s head immediately after playing that shot, but his chosen words serve as a useful reminder to professional accountants going about their daily professional activities, including when using AI applications: remain vigilant and exercise your inquiring mind when assessing both the quality of the data input into, and the related output from, an AI application. Focusing on the former, one risk is the possibility of bias in the data sets on which the AI application has been trained.
In 2016, ICAS published a series of ethical dilemmas for directors. The author of this publication was Niamh Brennan, Emeritus Full Professor of Corporate Governance at University College Dublin. Its content included a list of potential biases to which a director, or indeed a professional accountant, may be subject. This list was subsequently considered and formed one of the sources for the bias content included in the International Ethics Standards Board for Accountants (IESBA) ‘Role and Mindset’ revisions, which became applicable in the ICAS Code of Ethics from 1 January 2022. Interestingly, key examples of potential bias to be aware of when exercising professional judgement were also highlighted in the recent AICPA publication ‘Ethics Staff Insights – AI through an ethics lens’. They include:
- Anchoring bias, which is a tendency to use an initial piece of information as an anchor against which subsequent information is inadequately assessed.
- Automation bias, which is a tendency to favour output generated from automated systems, even when human reasoning or contradictory information raises questions as to whether such output is reliable or fit for purpose.
- Overconfidence bias, which is a tendency to overestimate your own ability to make accurate assessments of risk or other judgements or decisions.
Given the increasing use of Gen AI and other AI tools in everyday professional activities, it’s worth keeping these types of bias at the forefront of your mind as you aim to reduce the risk of unintended bias in the applicable data sets, or of undue reliance on the output – remember the possibility of hallucinations.
Applying an inquiring mind and keeping your wits about you most certainly have a place when performing professional activities; indeed, they should be par for the course.