Use of AI systems by US government agencies
A research report published in February 2020 has highlighted the extent to which artificial intelligence (AI) is currently being used by US government agencies.
The report, ‘Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies’, by Engstrom, Ho and Cuéllar (all from Stanford University) and Sharkey (New York University), had a threefold purpose:
(i) to help inform the direction of AI use in US government by understanding whether, how, and why agencies are beginning to use such tools. A better understanding of how agencies are experimenting with AI will aid the development of more specific metrics for agency use of AI tools and can help policymakers identify opportunities for improvement and remedy deficiencies.
(ii) to highlight that such tools raise new and challenging legal and policy questions concerning fairness, transparency and accountability, due process, and capacity building. Whilst the use of such tools offers the possibility of more informed and better government decisions, their opacity also creates countless legal puzzles, given US administrative law’s core commitment to transparency and to providing reasons when the government takes actions that affect rights.
(iii) to show that if future research is to inform a robust understanding of how US federal agencies can better meet the many (often contradictory) demands placed on them, such research must engage with the agencies’ actual practices and legal responsibilities.
Key Findings
The research identified five main findings:
1. 45% of the agencies that participated in the research have experimented with AI and related machine learning (ML) tools. The US government’s AI toolkit spans the full technical scope of AI techniques, from conventional machine learning to more advanced “deep learning” with natural language and image data. Such tools are already enhancing agency operations on matters such as:
- Enforcing regulatory mandates;
- Adjudicating on government benefits and privileges;
- Monitoring and analysing risks to public health and safety;
- Extracting usable information from government’s massive data streams; and
- Communicating with the public about their rights and obligations as welfare beneficiaries, taxpayers, etc.
2. Despite this broad embrace of AI across agencies, the US government still has a long way to go in harnessing its potential.
3. AI poses deep accountability challenges. In the US, decisions affecting the public’s rights generally require an explanation of the reasons behind them. However, by their very nature and structure, many of the more advanced AI tools are not fully explainable.
It will therefore be essential to subject such tools to meaningful accountability and so ensure their fidelity to the legal norms of transparency, reason-giving, and non-discrimination. The report highlights, for example, that whilst open sourcing technical details might be appropriate when agencies allocate social welfare benefits, it can undermine agency use of valuable enforcement tools by inviting gaming by regulatory targets.
4. If US government agencies are to make responsible and smart use of AI, much of the technical capacity to do so must come from in-house. Whilst many agencies currently rely on private contractors to build and implement AI, 53% of the tools identified in the report are the product of in-house efforts by agency technologists. This prevalence of in-house development highlights the critical importance of internal agency capacity building: in-house expertise promotes AI tools that are better tailored to complex governance tasks and more likely to be designed and implemented in lawful, policy-compliant, and accountable ways.
5. AI has the potential to raise distributive concerns and fuel political anxieties. Increasing use of AI by US government agencies creates a risk that AI systems will be gamed by elite groups with the resources and know-how to do so. If the public comes to believe that such systems are rigged, political support for a more effective and technology-savvy government will quickly evaporate.
Want to share your thoughts on this research, or discuss it further with fellow CAs? Join us on CA Connect – our exclusive online forum – today.