When Algorithms Decide Your Rights

November 2039. Brent, London, England.

She remembered her appointment and rushed out of bed: 9:30am at her local Citizens Advice Bureau. To get from Chalkhill Estate to the High Road she would need to run, and have some luck, to catch the bus on time.

On the bus on the way to her appointment, she felt that this journey had been a long time in the making. Four years ago, her sister Chantell, who had cerebral palsy and relied heavily on support, had her home care visits dramatically cut from 56 to 32 hours a week. A new algorithm had reassessed the amount of care her sister would be given. Her sister had pleaded with the assessor, explaining that this simply wasn’t enough support, but neither the assessor nor her sister quite understood how the computer had reached the decision to reduce her care.

Her sister’s health situation hadn’t improved, but an invisible change had produced this new result. When the assessor entered the information about her health status, daily routines and support needs into the computer, it ran through an algorithm that Brent council had recently approved, determining how many hours of help she would receive.

And then there was her younger brother Jordan, who had been arrested and charged with burglary and petty theft for grabbing an unlocked bike and a scooter with his mate. When Jordan was booked into prison, a computer program spat out a score predicting the likelihood of him committing a future crime. Yes, Jordan had had issues before, and a criminal record for misdemeanours committed as a juvenile. But how could he be classified as being at high risk of re-offending? He had told her that so many seasoned criminals with multiple convictions for armed robbery had been classified as low risk. But then those guys were white, and Jordan was black…

So now it was her turn. Yes, she was not the perfect mum; she was the first to admit that herself. She was struggling, not just because of her learning disability, which made it hard to stay in a job, but also because she tried to help her sister after the home visits were cut.

Unfortunately, her energy to support Selena and Brandon was often nil, and they regularly missed school. So a few weeks ago a woman from the council had come by her house and told her that her family had been classified as high-risk and was being placed in a special programme for families at risk of child sexual abuse and gang exploitation. She had been horrified to hear this and needed help. Her neighbour Sue had told her that Citizens Advice had launched a new service: AAS — the algorithm advice service.

Fred, the young Citizens Advisor, was a student training as a data scientist. He would help her to analyse which data points had triggered her high-risk classification and what rights she had to contest some of the data used by the council and the conclusions drawn.
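Contesting such a classification starts with understanding which data points drove it. As a minimal, entirely hypothetical sketch of the kind of analysis an adviser like Fred might do — every feature name, weight and threshold here is invented for illustration, not drawn from any real council system — a simple linear risk score makes each data point’s contribution to the final label inspectable:

```python
# Toy linear risk score: each feature's weighted contribution can be
# inspected, so a classification can be traced back to the data points
# that drove it. All names, weights and the cut-off are hypothetical.

FEATURE_WEIGHTS = {
    "school_absence_rate": 2.0,      # fraction of school days missed
    "unemployed": 1.5,               # 1 if the parent is out of work
    "relative_with_conviction": 1.2, # 1 if a close relative has a record
    "receives_benefits": 0.8,        # 1 if the family receives benefits
}
HIGH_RISK_THRESHOLD = 2.5  # invented cut-off for the "high-risk" label

def risk_contributions(record):
    """Return each feature's contribution to the overall risk score."""
    return {name: weight * record.get(name, 0)
            for name, weight in FEATURE_WEIGHTS.items()}

def explain(record):
    """Score a record and rank the data points that drove the decision."""
    contribs = risk_contributions(record)
    score = sum(contribs.values())
    label = "high-risk" if score >= HIGH_RISK_THRESHOLD else "low-risk"
    ranked = sorted(contribs.items(), key=lambda kv: kv[1], reverse=True)
    return score, label, ranked

family = {
    "school_absence_rate": 0.4,  # 40% of school days missed
    "unemployed": 1,
    "relative_with_conviction": 1,
    "receives_benefits": 1,
}
score, label, ranked = explain(family)
print(label, round(score, 2))
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.2f}")
```

Real deployed systems are rarely this transparent — which is precisely the problem the story illustrates — but even this toy version shows what a right to explanation would require: access to the inputs, the weights, and the threshold, so that each can be contested.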

The future in the story told above has not, as far as I know, happened to any one individual yet. However, if you look at how algorithms are used by public authorities in the US today to judge re-offending risk or to reassess disability benefits, you can see that algorithms there already have a direct impact on the realisation of human rights.

In the UK, most public sector programmes like the one run by Brent council and IBM to identify children and families at risk are still in pilot stage today, but their potential impact on human rights is equally strong.

Artificial intelligence (AI) and machine learning (ML) existed for years as a niche within computer science without much public attention. However, in recent years there has been exponential growth in practical use cases in government sectors like health, education and criminal justice, which has triggered a lot of public debate about the risks and unintended consequences of their use.

When you look at historical patterns of how societies have managed the changes and challenges created by new technologies, I would argue there are three overlapping phases:

  1. The ethics and convention phase;
  2. The standards and regulation phase; and
  3. The campaigns and appeal phase.

1. The ethics and convention phase

In September 2016, the Partnership on AI launched with Amazon, Facebook, Google, DeepMind, Microsoft and IBM as its founding members, with Apple joining in early 2017. Today, the partnership has grown beyond industry actors to include NGOs like Amnesty International and media organisations like the New York Times, and it has widened its geographic reach to China, with Baidu becoming a member in October 2018.

In June 2017, the UK House of Lords established a Select Committee on Artificial Intelligence, which published its recommendations in spring 2018. Much of the activity during 2016 and 2017 raised the ethical implications and unintended consequences of different uses of algorithms, especially those used by the public sector, and attempted to agree on shared overall ethical principles. For example, these five principles were identified in the Lords’ report:

  • Artificial intelligence should be developed for the common good and benefit of humanity;
  • Artificial intelligence should operate on principles of intelligibility and fairness;
  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities;
  • All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence;
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Also as part of this overall global debate, John C. Havens of the Institute of Electrical and Electronics Engineers (IEEE) has argued that values and ethical principles should lead design and development decisions in AI.

2. The standards and regulation phase

3. The campaigns and appeal phase

It will be crucial for the realisation of human rights that algorithmic decisions can be challenged through due process, and that they are open and accountable

I don’t believe that much activity in this third phase will take place in the next few years, but someone, somewhere will need to start it. Especially in the US, where most of these algorithms have been deployed by state actors so far, it would be interesting to explore litigation against opaque use cases that directly impact human rights. With the Human Rights Act currently under threat in the UK, it might be harder to start litigation there despite its success in the past. It will be interesting to see which country will host the human rights battles against algorithms in the future.

This article was originally published under the title “When algorithms decide your rights” as part of the Digital Freedom Fund’s “Future-Proofing Our Digital Rights” series.

Written by

The Digital Freedom Fund supports partners in Europe to advance digital rights through strategic litigation. https://digitalfreedomfund.org/
