Will the future of accountancy be fuelled by AI?
With more than three-quarters of UK bosses hoping to embed AI in their businesses, could PwC’s ‘experimental’ adoption show the way? UK Innovation and Data Lead, Hassane Ferdaous, tells Christian Koch about the Big Four giant’s $1bn investment
2023 was the year artificial intelligence went mainstream, accelerating at such a breakneck speed that even those who helped conceive it have been spooked. It was also the year AI started coming for cushy, white-collar jobs, threatening to rattle industries from Hollywood scriptwriting to, yes, accountancy, and sending shivers down the spines of many.
With nearly four-fifths (79%) of UK firms already using AI and automated technology (according to research by the Institute for the Future of Work, Imperial College London and Warwick Business School), businesses unsure where to go next might want to look at what’s happening at PwC for answers.
In April the accounting and consultancy firm announced it would invest $1bn (£800m) in AI (the UK arm will be spending £100m), also securing a partnership with Microsoft and ChatGPT creator OpenAI (which is 49% owned by Microsoft). It was followed in November by news that PwC would be training its workforce in AI and giving each employee their own chatbot assistant, ChatPwC. The rest of the Big Four have also joined the AI arms race: KPMG has announced a multibillion-dollar investment in Microsoft’s generative AI/cloud services, EY has partnered with IBM to use AI in its HR, while Deloitte has also developed AI solutions for clients.
Where the Big Four lead, the rest of accountancy often follows. This leaves many CFOs and businesses in a quandary: should they embrace AI or risk obsolescence? At a time when even AI’s key architects – such as Geoffrey Hinton, who quit his role at Google this year so he could sound the alarm about the technology, and Elon Musk (“the danger of AI is much greater than the danger of nuclear warheads”) – have suggested we should press pause, should anybody be dabbling with it at all?
CA sat down with Hassane Ferdaous, Digital Audit Partner and UK Innovation and Data Lead at PwC, who’s helping to drive the changes at its UK offices, to find out how the giant is using AI, the ethical question marks hanging over it, and what it believes your business should be doing to stay ahead of the game.
How is PwC currently using AI?
PwC started its AI journey around 2018, when we developed GL.ai so our auditors could look at general ledgers from a risk perspective. We continued that journey with Cash.ai, which used AI to read bank statements and automate cash testing procedures. Generative AI took everybody by surprise. We partner with OpenAI and also use Harvey, a GenAI tool [built on OpenAI/ChatGPT tech] trained on legal data, which helps extract information from specialist legal documents – useful when drafting and reviewing contracts.
You’re also giving each employee their own chatbot, which is, naturally, called ChatPwC…
It’s a secure version of ChatGPT and is still being rolled out. It’ll help with day-to-day admin: secure Q&A over documents, summarising reports and generating content.
Any other AI developments?
We’re investing huge amounts of time into looking at clients’ text data, especially how we can interrogate that text by looking for patterns, such as transactions, and summarising the main topics. Take a 12,000-word document: AI can help identify the most important subjects. It can also do similar things with unstructured data, such as complaints.
We see AI’s real power as harnessing PwC’s IP [intellectual property] and proprietary knowledge with the technology. Yes, AI might mean new ways of working, but it’ll sit alongside areas where PwC already has expertise, so we can get faster, more accurate answers to our clients. If I’m reviewing an accounting paper, I could run it through an [AI] system that’ll tell me whether things should sit there or not. AI can give that great level of accuracy, but we’ll still need to augment it with our experts’ knowledge of that industry and client.
Will PwC employees be ultimately responsible for training the AI themselves?
Not every PwC employee will be training the AI. We might take the decision to roll out prompt engineering as generic training – in the same way we train all our auditors in data wrangling and visualisation. Prompting could become a basic skill, meaning you don’t need to be a coder to develop a GenAI that will help you execute your tasks. More complex, large-scale capabilities will need specialists, such as data scientists and engineers.
PwC has plans to upskill its staff in AI too…
We haven’t started training yet, but we have a programme coming down the line called My.AI, which will allow people to gain a basic knowledge of generative AI. In each business unit, we also have a team focusing on generative AI, looking at use cases and the value for that business unit.
You’ll be advising clients on their use of AI too: what questions do they have?
So far, we haven’t seen a significant amount of [AI] investment [from businesses], apart from big tech companies. Organisations are still defining their strategies. Some companies don’t want the sandbox-style “Let’s start small” approach: they want to be disruptive. They’re saying, “We might not exist in three years’ time if somebody figures out how to do our business using this tech.” This is a bold strategy but it can pay big rewards.
At the other end of the spectrum, we have companies saying, “We would like to experiment first – can we try small use cases and build from there?” Both approaches are valuable… However, the businesses that want to disrupt and be pioneers are the most likely to shape the future market, including in accounting.
What advice would you give the average CFO?
I think finance and accounting people should engage with this technology to truly understand it, because it’s going to start disrupting their business. For example, if you ask ChatGPT a question about how IFRS 17 relates to your business, you’ll get quick answers. I’m sure CFOs and CEOs would want their people to leverage this in their day-to-day business to get started.
How should businesses get started on their AI journeys?
If you’re working with software companies such as Oracle, SAP or Xero, ask them, “What are you bringing into these tools to augment their capability?” You also need to be thinking about how much AI will disrupt your team in terms of skills and capabilities. Because many simpler activities will be done by generative AI-led tech, your employees may need to go up a notch in their skills.
Also, get your data ready. Any data scientist will tell you they spend between half and three-quarters of their time fixing the data. Therefore, you need to think about building your data platform.
What are the biggest benefits for accountancy firms/finance teams?
When generative AI is coupled with automation, whether it’s robotics or tooling, it’ll take away much of the [administrative] burden and mundane work in accounting. But it’ll also be a powerful forecasting tool, allowing you to simulate growth, projects and costs.
Could we also see financial reporting half-generated by AI? Maybe. The work that goes into preparing financial statements is significant, and rightly so. But it’s a process that could be augmented by generative AI, allowing statements to be delivered within tight time frames.
AI is a rapidly evolving beast. What has PwC learned about the tech since starting to work with it?
Anybody who can give you an answer to this now is either overstating or doesn’t know. We are accelerating our journey. We won’t see the benefits now, because it requires a change in behaviours and in how people integrate AI into the way they work. Our focus is striking the balance between realising the benefits now and ensuring we’ve done enough to address the risks and threats of these technologies. That’s the most important thing.
What has PwC discovered about these risks and threats?
As generative AI processes vast amounts of data, concerns arise around accuracy. AI may produce false or misleading information, leading to significant business, regulatory and reputational risks. Some tools lack transparency, making it difficult to assess, say, the level of bias built in. This is where responsible AI governance becomes key. Lastly, mishandling of sensitive information can result in data breaches, privacy violations and loss of IP.
Does it concern you so many organisations are testing AI without it being regulated?
The regulators are watching what’s happening with the market. The recent AI safety summit has pushed for a regulatory environment that puts safeguards in place but is also meaningful for innovation. The new EU AI Act is also a good way forward.
Regulation also needs to protect consumers. When transacting with a business, customers deserve to know whether they’re talking with an AI engine. Companies should have an obligation to tell them whether the pricing or insurance policy they’ve been quoted has been generated by AI or by a human. One important thing is that smart people working on AI in start-ups, universities and labs don’t feel tied up in red tape. In the next few years, we’ll see a flurry of technologies leveraging generative AI. These people should be given access to investment, working capabilities and computing power.
Computing power is a problematic point for AI, isn’t it?
This technology runs on a huge volume of computation in data centres – its environmental footprint is enormous. How should businesses treat this responsibly? For example, should I use generative AI for a calculation when it harms the environment more than my three employees doing the same thing in Python?
What’s next for PwC?
AI is an area where lots of people are exploring and learning. If you ask me these same questions in three months’ time, you can expect different answers. This environment is changing every day.
Glossary and notes
GL.ai: A bot using AI to detect anomalies and mistakes in a company’s or other organisation’s general ledger.
Cash.ai: Another PwC bot, which uses AI to read bank statements and automate cash testing procedures.
Generative AI (or GenAI): Tools such as ChatGPT and Bard that produce convincing text or images, such as deepfake memes of Boris Johnson in a prison cell.
‘Training the AI themselves’: For AI to get better, people need to “train” it first, usually by feeding it prompts and data from the internet.
OpenAI: The Silicon Valley company behind ChatGPT. Recently sacked its CEO, Sam Altman, then reinstated him five days later.
Prompt engineering: If you want AI such as ChatGPT or Dall-E to produce text, images or music, you will need to give it a “prompt” or instruction, such as “picture of Spider-Man eating pretzels in his underpants”. Prompt engineering is the more intricate end of this.
‘Haven’t seen a significant amount of investment’: At the start of 2023, 77% of UK CEOs said they planned to invest in tech including AI, yet only 26% had managed to deploy the tech at scale [Source: PwC research].
AI safety summit: Rishi Sunak’s Bletchley Park summit resulted in a declaration from the UK, US, EU and China that AI poses a potential risk to humanity, along with a multilateral agreement to test advanced AI models.
EU AI Act: The EU is currently in the process of passing the world’s first legislation on AI, which could give it the power to shut down services deemed dangerous to society.
Environmental footprint: Tech analyst Gartner predicts the energy consumption of AI tools could be greater than that of the entire human workforce by 2025.
Python: A popular programming language widely used in data science, AI and web development.
Visit the ICAS technology hub for more resources