Soon after the US constitution was ratified, the Bill of Rights added specific guarantees of freedom of expression and assembly, and of fair trials — aiming to set limits on the powers of the government that had just been created. This is the precedent scientific advisers at the Biden White House have invoked as they propose a new Bill of Rights that aims to protect citizens in the face of the transformative technology of artificial intelligence. It is an admirable initiative, but one that should extend globally, not just to Americans.
If the notion of a new Bill of Rights seems grandiose, consider the context. Since the second world war, international and national protections of fundamental rights — and against abuses and discrimination by governments and companies — have made great strides. But these are aimed at human actors.
For the first time, decisions crucial to humans’ wellbeing are being made in part or even wholly by machines — on everything from job applications and creditworthiness to medical procedures and prison sentencing. And decision-making by algorithm turns out to be surprisingly prone to error or bias. Facial recognition technology can struggle with darker skin tones. What machines learn is influenced by the prejudices of those who program them, and by the partial data sets they are given.
When things go awry, finding humans to take responsibility can be difficult. In the UK this month, a black former Uber driver whose account was deactivated after automated facial scanning software repeatedly failed to recognise him launched a claim at an employment tribunal.
The first task of an AI Bill of Rights, then, is to strengthen existing protections for an AI world. It should apply to algorithmic decision-making in legal or life-changing areas. And it should extend to data and privacy, enshrining individuals’ rights to know what data are held on them, how the information is being used, and to transfer it between providers.
AI decisions should not emerge from an unfathomable black box, but be “explainable”. A bill ought to guarantee an individual’s right to know when an algorithm is taking decisions about them, how it works, and what data are being used. The right to challenge decisions and obtain remedies should be guaranteed. Some human or corporate responsibility needs to be maintained, with managers accountable for errors or flawed decisions by systems they oversee, as for those by human staff.
But AI gives unscrupulous governments new capabilities to snoop on, control and potentially coerce their citizens. A bill should set out what technologies are permissible or not, and ground rules for their use.
America’s Bill of Rights initiative lags behind what Europe is doing. The EU General Data Protection Regulation already contains a right for citizens not to be subject without consent to decisions “based solely on automated processing”, though this is not being widely enforced. A proposed AI Act outlines a hierarchy of risks for technologies subject to varying safeguards. Some, such as “social scoring” — nodding to China’s social credit system that aims to assess behaviour and trustworthiness — would be banned.
The Biden administration should take up the EU’s invitation to work together on AI issues. But just as the UN’s 1948 Universal Declaration of Human Rights set out fundamental human rights to be universally protected, so a global AI charter is merited. Some countries would choose to go further; others, like China, might decline to sign up. But as in the cold war, superior protection for human rights, now against intrusive AI, could become a point of moral differentiation, and leverage, for democracies.