Artificial Intelligence in the Government and the Public Sector
Artificial Intelligence (AI) is already a part of how the Government and public bodies make decisions and deliver public services.
The use of AI in the public sector, as elsewhere, is set to grow rapidly over the coming years, and the Government has already signalled its intention for the UK to play a leading role in the industry through funding measures and other initiatives announced in the AI Sector Deal in 2018. AI has the potential to transform the delivery of public services, driving innovation and efficiency.
In recognition of the increased significance of AI, the Government instructed the Committee on Standards in Public Life to prepare a report with recommendations on upholding public standards as the public sector increasingly implements and uses AI systems. The Report was published on 10 February 2020.
The public sector already has a code of conduct in the Nolan Principles, the seven principles of public life first set out in 1995. The principles most at risk from the increased use of AI by the Government and the public sector are:
Objectivity – Holders of public office must act and take decisions impartially, fairly and on merit, using the best evidence and without discrimination.
Accountability – Holders of public office are accountable to the public for their decisions and actions and must submit themselves to the scrutiny necessary to ensure this.
Openness – Holders of public office should act and take decisions in an open and transparent manner. Information should not be withheld from the public unless there are clear and lawful reasons for doing so.
It is not difficult to see the inherent danger in the public sector using complex algorithms to make decisions affecting public spending and the treatment of individuals, such as benefits entitlements and NHS spending.
One only has to look at the recent controversy engulfing Goldman Sachs and the Apple Card, where the algorithm determining lending decisions allegedly discriminated against women, to predict the huge difficulties, not to mention claims, that the Government, local authorities and other public bodies could face if AI is not implemented properly and monitored carefully. If it is not, distrust in the public sector will grow, with AI undermining accountability and displacing decisions based on sound human reasoning.
The Committee’s investigations concluded that the public sector is not sufficiently transparent about its use of AI and that it is difficult to establish how and where AI is being used.
What is needed is a clear set of ethical principles and guidance to help the public sector in its procurement and use of AI, rather than the current patchwork of overlapping and potentially inconsistent sets of rules.
The Committee also recommended that the Government urgently establish guidelines for public bodies, setting out how they should inform the public about their use of AI systems.
The Report concluded that no dedicated ‘AI regulator’ is needed at this stage. However, existing regulators will need to adapt in order to oversee the use of AI in their respective sectors, and the Committee recommended that the Centre for Data Ethics and Innovation (CDEI) be given a regulatory assurance role to support them.