Trust can be earned in many ways, but when it comes to AI in public services, transparency and accountability matter above all else. Leaders across government increasingly recognise that trust cannot simply be assumed when automated systems influence decisions that affect people's lives; it has to be designed in from day one.
There is work to be done. A nationally representative survey of public attitudes to AI, published earlier this year by the Ada Lovelace Institute and The Alan Turing Institute, shows significant levels of concern, varying by type of use, which have generally risen since the previous survey two years earlier. For the use of AI to assess welfare eligibility, for example, 59% of respondents were concerned, up from 44%. For public authorities, including councils, whose services touch people at vulnerable moments, this concern is particularly relevant.
Citizens have historically trusted governments because of shared civic norms and expectations of fairness and public duty, but that societal trust has eroded over time and no longer provides a sufficient basis for trusting government use of AI. Another basis for trust is an alignment of incentives and consequences. We know that discrimination based on protected characteristics (such as sex, religion, ethnic origin, sexual orientation and age) is illegal, and we rely on this when an organisation makes a hiring decision, whether or not it uses AI. That law, and others like it, allows us to trust some uses of AI by the Government, but some emerging uses of AI in public services are not adequately covered by existing legislation.
Sometimes we trust based on predictability or consistency. When we turn the key in the ignition of our car, we trust it will start because it usually does. This will not suffice as a basis for trusting AI because it is a new technology which continues to evolve, so what it can do and where it is used are constantly changing.
People often trust what they can observe and check directly - much like checking a rope is securely fixed before climbing. Operational transparency (signposting where AI is used), technical transparency (explaining how it works in plain English) and outcome transparency (indicating how decisions can be challenged) are all required to enable public scrutiny and informed consent. Transparency must be paired with accountability, with clear routes for redress when errors occur.
The final way we trust is by delegation: we trust because of our faith in a third party. This is where third-party assurance of AI systems can help. When we buy an electrical appliance, it carries a certification mark from the British Standards Institution confirming it has been manufactured to approved safety standards. The UK Government's ‘trusted third-party AI assurance roadmap’ (September 2025) is a promising step towards an ecosystem of AI assurance providers. The sooner we align on a common set of AI standards to measure against, and on qualifications for assurance providers, the sooner organisations and citizens alike will be able to trust AI systems that carry a visible kitemark from a certified assurance provider.
The Government recognises the importance of transparency and accountability in its ‘Artificial Intelligence Playbook for the UK Government’ (February 2025), which states: ‘You should be open with the public about where and how algorithms and AI systems are being used in official duties… You should also clearly identify any automated response to the public.’ Similarly, central government departments and certain arm's-length bodies are required to publicly document their use of AI under the Algorithmic Transparency Recording Standard (ATRS).
Public trust is reinforced when employees themselves understand the AI tools they deploy. Training is therefore vital. For example, after the Local Government and Social Care Ombudsman mandated a four-hour in-person AI course for all staff, employee self-reported understanding rose from 20–30% to over 90%. Internal understanding strengthens external trust because confident, informed staff are better equipped to answer questions, escalate risks and support the public.
The UK Government is right to see AI as a driver of public service transformation and economic growth. But these benefits can only be realised if the public feels safe and respected when interacting with AI-enabled services. To help achieve this, councils and central government should together initiate a widespread public consultation and education programme, much as the Warnock Committee built a consensus of support on the delicate questions of embryo research and IVF treatment. Authorities can support this by explaining where and why AI is introduced and how concerns will be addressed. Transparency (clearly disclosing where and how AI is used) and accountability (providing accessible channels for complaints and redress) are not optional enhancements but essential design principles. Citizens should not be asked to trust government AI in the absence of transparency; trust must be earned through openness, evidence and the ability to challenge decisions.
Ray Eitel-Porter is the author of ‘Governing the Machine – how to navigate the risks of AI and unlock its true potential’. He serves on the AI Ethics and Governance Board for the Local Government and Social Care Ombudsman.
