How will AI disrupt the ethical landscape for CAs?
Read our summary of the “Ethical leadership: Will GenAI and other technological developments disrupt the ethical landscape?” webinar to discover how Artificial Intelligence (AI) will impact the ethical landscape for CAs.
Ethical leadership is the cornerstone of all we do at ICAS. As CAs, we are faced with decisions and choices of an ethical nature and each year you pledge to uphold the highest ethical standards. Will the choices and decisions you make change with the impact of new technological developments such as Generative Artificial Intelligence (GenAI) and robotics?
On 7 November 2023, ICAS hosted a virtual event “Ethical leadership: Will GenAI and other technological developments disrupt the ethical landscape?”
Clive Bellingham CA, ICAS President, was joined by Bruce Cartwright CA, Chief Executive of ICAS; Loree Gourley, Chair of the ICAS Ethics Board; and Jonny Jacobs CA, Finance Director and ICAS Council Member. Clive asked them about the ethical behaviours and decisions that have shaped their careers, and for their thoughts on the fast-changing business and technological landscape, including GenAI, and its impact on ethical decision making.
Some highlights from the event are summarised below, and you can view the full webinar here.
What does ethical leadership mean to you?
Loree Gourley: There are five elements that make good ethical leadership in relation to AI:
- Literacy
This relates to education. All of us have a responsibility to have a reasonable baseline understanding of what GenAI really means, its capabilities and evolution, and how we contribute to it as humans. We have a responsibility to have a sense of curiosity to seek out what AI means for us in our roles every day at work in our companies, and how that relates to the business model.
- Accountability
All AI tools need to be governed appropriately. Humans, leaders and companies need to be held accountable for all stages of the AI lifecycle as it relates to establishing robust and effective controls, particularly from a risk register perspective.
- Transparency
This is crucial to preserving trust in business. It means understanding the data, the algorithms, the processes, and the decisions that are made, for the inputs as well as the outputs of AI. This needs to be disclosed and transparent in the spirit of trust.
- Safety and confidentiality
We need to be sure that the information we give and receive, the inputs and outputs of AI, are safe and private, and the protective measures are robust as they relate to identifying, monitoring and mitigating risks. This includes ensuring that data is only used appropriately and that there are effective controls to mitigate any potential risks to the fundamental ethics principle of confidentiality.
- Purpose
Organisations and leaders need to ensure that the public interest and societal implications remain at the forefront of all the decisions that we make as leaders and as business professionals. We need to be sure that the application of AI is designed for human benefit, enhancing societal progress, and perhaps even driving positive outcomes for the planet, versus this notion of potential harm for people and society.
How will this be brought to life at ICAS?
Bruce Cartwright: There are two sides to this. There is the education of the future CAs of today and tomorrow, and then enabling our members to carry on the journey – because it is a journey.
On the education side, it is about building AI philosophies into the education programme. For example, in one of the examinations there is a chatbot exercise, where the students are given a chatbot script and are asked to interpret that script and ‘correct’ it. The learning being, don’t take it as gospel. We’re looking at AI as it’s clearly here to stay. You can’t un-invent something; therefore, we need to learn how to manage it. We must embrace it, but we must be cautious.
We need to educate people to understand what AI is doing, but firstly to understand what the inputs are, because AI learns from what it sees, and it’s the original algorithms that define its purpose. When car safety belts were first being designed, the tests rested on certain assumptions, and one of those assumptions was a 40- to 50-year-old male driver: seatbelts weren’t built for women or children. It’s all about the assumptions you make initially. The problem with AI is that if the ethical assumptions are wrong at the start, then the risks will be multiplied.
Professional accountants are very focused on ethics, trust and transparency. What specific roles can we play in our businesses, with our clients, to ensure that AI is used responsibly?
Jonny Jacobs: Ethics is at the heart of this and will always be a human function, and we as professional accountants have a role to play. CAs are a force for good in organisations, they can influence those around them, and can take a broad view on ethical leadership. For example, we might want to advocate for a code of ethics around this – against bias or ensuring that we have the right privacy and accountability in place. We might want to do something around monitoring and controlling the use of AI – looking at its algorithms and training, or monitoring potentially discriminatory outcomes, ensuring it’s not harmful.
There will also be a role for reporting. Could we report the use of AI, and the ethical considerations, to boards, to stakeholders, to regulators? There is also a lot more we can do around risk assessment. Currently, the EU is working on an AI Act, which gives a fascinating insight into where this might go. The direction of travel is taking AI and segmenting it into different risk profiles. For example, what AI might pose an unacceptable risk? Is there AI that could adversely target humans, and that therefore you just shouldn’t use? Then there is AI that is high risk, such as predictive algorithms leading to decisions. That type of AI could be assured, and we could play a role in this assurance piece. And the final risk category is limited risk, such as customer service chatbots or AI-enabled games.
Accountants can play a big role in filling the trust gap, both in a specific role, such as assurer or someone involved in risk management, controls or reporting, and as a general conscience in any organisation, with ethics at the core of what we do.
ICAS Ethics Resources
Find out more about the ethics resources ICAS provides to support its members.