Book notes: Ethical Machines

Posted on Sep 9, 2022

Notes taken while reading Reid Blackman’s book on ethics in AI. I heard the author speaking on the Machine Ethics Podcast and appreciated his communication style - a high signal-to-noise ratio, clear and simple. So I picked up his book. The following are personal pro memoria of the main points of each section.

The big three challenges of AI ethics

  • privacy violations
  • lack of explainability
  • bias

Ethics program elements

It’s important to distinguish between two elements of an ethical AI program in an org:

  • Structure: all the organisational stuff. The formal mechanisms for identifying and mitigating ethical risks: policies, processes, role-specific responsibilities etc.
  • Content: the bad stuff you want to avoid. Privacy violations, inexplicable decisions, bias.

But ethics is subjective, so what’s the point?

Blackman outlines three ‘Really Bad Reasons’ for thinking ethics is subjective - common arguments that can kill discussions about AI ethics, and even entire AI ethics programs in orgs - and how to rebut them.

  1. ‘Ethics is subjective: people disagree about right and wrong, so there’s no truth to the matter.’

By this logic, anything that lacks 100% consensus worldwide and throughout history has no base truth, which is obviously flawed. E.g. people disagree about whether the earth is flat or spherical, but that disagreement doesn’t make the shape of the earth subjective.

  2. ‘Ethics is not science. We can’t scientifically measure and prove ethical conclusions to be true, so it’s not verifiable and there’s no truth to the matter.’

The author refutes this claim by showing that it is self-undermining: it essentially says ‘only empirically measurable claims are true’, yet that claim itself cannot be empirically verified.

Personally, while I find this a fine and direct argument, the rebuttal feels like a bit of a technicality. I would add that there are plenty of things we know to be true but can’t empirically prove. For example, you look in the mirror and see yourself. You know that’s you - now try to prove it empirically. It’s not so easy. You might move your arm and see the reflection mimic you, but how can you empirically prove it’s not an illusion?

Another example: you think about a holiday you have planned next month and feel excited. You know how you feel and why, but how do you prove it empirically? Sure, you can take heart-rate readings, dopamine measurements and MRI scans, but those are just numbers. The numbers are a tablecloth we throw over the raw experience of perception to infer its shape - but the experience is there, and it’s valuable whether we know its shape or not.

Of course, these are subjective examples, but they are truths nonetheless.

Going back to the mirror example: a person with a neurological disorder might not recognise themselves, and assert that the reflection is in fact not them but some impostor. They know this to be true, yet they can’t prove it either. As an outlier, this example shows why it’s dangerous for a single figure to decide ethical norms for everyone. Which brings us to the third argument:

  3. ‘Ethics requires an authority figure to dictate what’s right or wrong, otherwise it’s subjective and there’s no truth to the matter.’

Personally, I’m completely baffled that anyone would think or say this. Why would anyone want or need an authority figure to tell them what’s right or wrong when we can discuss and reason together as a society? In any case, the rebuttal is much like the first: an authority figure saying the earth is flat doesn’t make it so.

My takeaway from this section is that while ethics isn’t an objective science, it does deal with facts, with our lived experience and perception of reality serving as the baseline truth. So it’s worth discussing carefully, and giving it weight equal to more traditional business and technical metrics: KPIs, financial reports, technical efficiency and so on.

Bias

There are a tonne of biased AI systems acting as flawed gatekeepers for mortgages, job interviews and even whether a bathroom soap dispenser will dispense soap for you. In each of these examples, a subsection of society was discriminated against by AI.

Why?

Reasons include:

  • The AI learned from existing, real-world discrimination. E.g. a culture of ‘we don’t hire women here.’
  • Minorities were under-sampled in the training data. E.g. a self-driving car that doesn’t recognise pedestrians from minority groups because few or none appeared in the training data.
  • Proxy bias - data scientists can’t get hold of data reflecting the characteristic they want to study, so they use a proxy. E.g. crime conviction data isn’t available, so they use data about those accused of a crime - but accusation rates between sub-populations carry real-world bias (see the toy simulation below).

and more…
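To make proxy bias concrete, here’s a toy simulation of my own (not from the book; all rates and group names are made up): two groups with identical true offence rates, where one group is accused at double the rate. Any model trained with accusation as its label inherits a disparity that doesn’t exist in the target we actually care about.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two hypothetical sub-populations with the SAME true offence rate.
group = rng.choice(["A", "B"], size=n)
true_offence = rng.random(n) < 0.05  # 5% base rate for everyone

# Proxy bias: group B is accused at twice the rate of group A for the
# same behaviour, and also suffers more baseless accusations.
p_accused_if_offence = np.where(group == "B", 0.9, 0.45)
p_accused_if_innocent = np.where(group == "B", 0.02, 0.01)
accused = np.where(
    true_offence,
    rng.random(n) < p_accused_if_offence,
    rng.random(n) < p_accused_if_innocent,
)

# A model trained with 'accused' as its label inherits the skew:
for g in ("A", "B"):
    mask = group == g
    print(
        f"group {g}: true offence rate = {true_offence[mask].mean():.3f}, "
        f"proxy (accusation) rate = {accused[mask].mean():.3f}"
    )
```

Despite identical 5% true offence rates, the proxy label rate comes out roughly twice as high for group B - which is exactly the bias a model trained on it would learn.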

How can we fix bias?

  • Work bias-mitigation strategies in from the very start of the project, i.e. when deciding what the training data should look like.
  • Carefully consider the technical details of the AI’s decision-making process, e.g. is it worse to produce false positives or false negatives? Should the thresholds be the same for all sub-populations? (See the sketch after this list.)
  • For this discussion, ensure a wide range of people are in the room: alongside technical staff, include people with ethical, legal and business expertise.
  • Ensure diversity in the development and engineering teams. If the soap dispenser team mentioned above had been more diverse, they would likely have noticed the bias during testing.
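As a minimal sketch of what the threshold discussion can look like in practice (my own illustration with synthetic scores and a made-up calibration skew, not code from the book), the snippet below computes false positive and false negative rates per sub-population at a single shared threshold - usually the starting point for deciding whether per-group thresholds are justified.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Synthetic scores: ground-truth labels plus noise, with group B's
# scores assumed (for illustration only) to be skewed slightly upwards.
group = rng.choice(["A", "B"], size=n)
label = rng.random(n) < 0.3
score = np.clip(
    label * 0.6 + rng.normal(0.2, 0.2, n) + (group == "B") * 0.1, 0, 1
)

def error_rates(threshold, mask):
    """False positive / false negative rates for one sub-population."""
    pred = score[mask] >= threshold
    y = label[mask]
    fpr = (pred & ~y).sum() / max((~y).sum(), 1)
    fnr = (~pred & y).sum() / max(y.sum(), 1)
    return fpr, fnr

# Audit a single shared threshold across both groups.
for g in ("A", "B"):
    fpr, fnr = error_rates(0.5, group == g)
    print(f"group {g}: FPR = {fpr:.3f}, FNR = {fnr:.3f}")
```

If the rates diverge, the team has to decide which error is costlier (e.g. wrongly denying a mortgage vs. wrongly approving one) and whether a single shared threshold is defensible for all groups - exactly the kind of question that needs ethical, legal and business voices in the room, not just technical ones.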