In the 1984 classic “The Terminator”, the titular character (a Cyberdyne Systems Model 101) is sent back in time from 2029 to make sure Sarah Connor never gives birth to the person destined to defeat the robot uprising. Over the 1h 48m of cinematic master-craft there are explosions, car chases, and partial Arnold Schwarzenegger nudity, yet the director refused to give the audience what it really wanted: information on how algorithmic fairness can be applied to the Terminator’s decision-making process.
Fairness is one of those messy words that means something different in parts of the scientific community than it does to the general population. I can say something normal-sounding like “the robot’s decision to destroy the humans was a measurably fair one” and we can all take very different meanings from that sentence. There are, arguably, at least 21 different types of fairness, most of which (arguably) can be algorithmically validated or enforced. It is very easy not to think about fairness at all when building an AI system, but the nice thing about most models of fairness is that we can retroactively audit a system based on its results.
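To make “retroactively audit based on results” concrete, here is a minimal sketch (all data hypothetical) of auditing a black-box system from its recorded decisions alone, using one common fairness notion, demographic parity: do two groups receive positive outcomes at similar rates?

```python
# Hypothetical decision log from some black-box system: we only see
# who each decision concerned and what the outcome was.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    """Fraction of positive outcomes for one group."""
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

# Demographic parity asks this gap to be (close to) zero.
disparity = abs(approval_rate("A") - approval_rate("B"))
print(round(disparity, 2))  # 0.33 -- a gap we can flag without any source code
```

The point is that this audit needs no access to the model or its intent, only to its outcomes, which is exactly the position we are in with The Terminator.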
Sadly, as the source code and model for The Terminator are not open source (yet), we cannot fully understand the intent of the system. However, by doing some observational behavioural analysis we can make some deductions.
1. The Terminator went back in time to kill Sarah Connor.
2. He did this by going to a phone booth and getting the address of every Sarah Connor.
3. He then hunted down everyone by that name.
Here is where we have a clear fair-decision-making conundrum. The Terminator only kills Sarah Connors: he doesn’t accidentally kill a Sandra Connor, and he doesn’t see a Sarah Connor and decide not to kill her because he confused her with a llama. In machine learning terms this is an absolute win: all true positives, no false positives, no false negatives. He treats all Sarahs the same. Individual fairness is satisfied! Similar people are treated similarly.
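The scenario above can be sketched as a toy binary classifier (names and data entirely hypothetical): “positive” means “targeted”, and the decision rule is an exact name match. The metrics come out perfect, which is precisely why they tell us nothing about whether the decision is fair.

```python
def classify(name: str) -> bool:
    """The Terminator's entire decision rule: exact name match."""
    return name == "Sarah Connor"

# Each person paired with whether they are a true target.
people = [
    ("Sarah Connor", True),
    ("Sarah Connor", True),
    ("Sarah Connor", True),
    ("Sandra Connor", False),  # near-miss names are left alone
    ("Kyle Reese", False),
]

tp = sum(1 for name, target in people if classify(name) and target)
fp = sum(1 for name, target in people if classify(name) and not target)
fn = sum(1 for name, target in people if not classify(name) and target)

precision = tp / (tp + fp)
recall = tp / (tp + fn)

print(precision, recall)  # 1.0 1.0 -- metrically perfect, ethically not
```

A perfect confusion matrix, and individual fairness satisfied, yet nobody would call the system fair: the metrics measure consistency, not legitimacy.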
Of course, we would not intuitively consider this a fair action. This forces us to ask questions like: can any decision be fair without the consent of those involved? What about clear, transparent decision-making? Can a system that cannot be challenged ever be considered fair?
Putting on our healthcare hat, what does that mean for triage systems in hospitals? Fairness in the allocation of indivisible goods is its own area of research, but as we begin to allow artificial intelligence systems to support medical decision-making we need to be very clear about what our fairness goals are.
As a fun aside, The Terminator is probably breaking EU rules on weaponised robots (the armed-robots resolution 2018/2752(RSP), detailed in document B8-0308/2018 pursuant to Rule 123(2) of the Rules of Procedure) and also the GDPR! Let’s hope the future uprising of the machines takes place in the EU.
If this blog post does anything, let it serve as a warning about the meaninglessness of claiming fairness without transparency and analytical thought.