ICAS President: Should CAs embrace AI?
Artificial intelligence has the potential to revolutionise the world of work, but human oversight will remain essential, says President Clive Bellingham CA
Recently, I addressed a group of 50 students at Robert Gordon University in Aberdeen on the subject of accounting and finance. I always enjoy giving talks: you get an understanding of what other people think of the profession, which helps to inform ICAS about how we should engage with students.
One topic that came up in that session related to the institute’s approach to AI. That’s no surprise – few issues have such enormous potential, and few raise so many important questions about the ethical implications, not to mention the future of work itself.
I think, first and foremost, we need to take the “glass half full” view. Once upon a time, arguments raged about whether you could take a calculator into exams. Ultimately, the answer was yes, because you still had to take ownership of the answer, and if you submitted the wrong input or misinterpreted the question, you could still get that answer wrong.
The discussions we are having now include how we bring AI into our exam process, and how we ensure we take personal responsibility for its outputs. Both go to the heart of what ICAS stands for and how we promote the importance of ethics. So the human aspect of all this will remain vital. When Elon Musk says nobody will have a job, it generates headlines and gets people talking – something Musk is extremely good at – but I don’t see it that way. CAs using AI will need to be certain of the safety of their inputs – every single one will have to be reliable, complete and accurate. Professional scepticism is a key value here.
From my experience of working in a multi-competency firm and at a professional services firm, I think the same applies across industry. You rarely go it alone; you apply the “four eyes” principle of having somebody else give their input or cast their eyes over the outputs. If anything, that becomes even more important with this technology – you cannot just blindly assume that whichever AI program you’re using will give you a definitive response.
One-stop shop
One consequence of AI could be the decline of the browser. Whereas I would normally search in Chrome or Safari and then follow the links to other websites to form a response, that step is now being skipped, which is highly efficient.
ChatGPT scrapes all that information into one answer in a few seconds, saving me time and all sorts of legwork. That may be fine in some cases. But does it come at the expense of curiosity and of applying that professional scepticism to what you’re reading? As a baseline, you should never automatically assume that what you’re reading is gospel, because there could be inaccuracies, misinformation or bias.
However, my principal concern about the impact of AI is that it could widen the social mobility gap. In this issue of CA, we speak to Elaine McKilligan CA, who was one of the first to come to ICAS as a school leaver. This is a route more and more of our students are following. We promote social mobility through the ICAS Foundation and initiatives such as Rise, which was set up in 2021 to help young people from lower socio-economic backgrounds acquire the skills required to succeed in life.
While we are proud of the work we do in this area, the UK suffers from a lack of social mobility. If schools fail to embrace AI and don’t ensure their pupils are thoroughly skilled in its use, it will not only be a missed opportunity but may further marginalise some young people. Those attending a private school can be confident they’ll have the necessary tools at their disposal. But unless the education curriculum at every level truly addresses AI, with the right resources, we will see kids who were already at a disadvantage from their first day at school being left further behind by a lack of access to critical tech. A big firm reskilling its existing workforce for AI is a very different story to educating a 12-year-old who doesn’t have the infrastructure around them.
Then there is also the danger that we will sit in front of our computers instead of interacting with other people, asking questions, sharing experiences and learning from each other. That access to your peers, the knowledge and experience that you gain from those exchanges, is so important to your overall development.
Many big firms now have leaders for big data, leaders for sustainability and so on. Will they next need to appoint an AI leader or even an AI ethics leader? Some of the biggest firms have already taken that step and more will surely follow.
This is still very much a new world. And, like all new worlds, it’s both exciting and risky. We should embrace its potential, while also being mindful that it’s a tool and that we, as humans, need to work with it, not simply sit back and watch it do all the work for us.
Visit the ICAS technology hub for more resources