Perceptions · 1 December '20

Algorithmic Bias Explained

Our brains are not well adapted to decision making in the modern world. To overcome our brains’ limitations, we increasingly rely on automated algorithms to help us. Unfortunately, these algorithms are also imperfect and can be dogged by algorithmic biases. Below, we discuss how and under what circumstances our brains can fail us, how computer algorithms can come to the rescue, the dangers of algorithmic biases and how to avoid them.

Unconscious Bias and How Our Brains Fail Us

The weekly supermarket shop perfectly demonstrates the limitations of our brains’ abilities to make decisions in the modern world. In most modern supermarkets we are flooded with choice. None of us has the energy, motivation or time to assess the pros and cons of each breakfast cereal (for example) and make a rational, logical decision about which one is best for us. Instead we use mental shortcuts to make our decisions. Examples of these mental shortcuts (heuristics) include the halo effect, confirmation bias, affinity bias and contrast bias, among many others.

In the case of breakfast cereals, the harm in using mental shortcuts is relatively minor: consumers may end up with non-optimal but perfectly acceptable breakfasts. In other instances, the impacts can be far more damaging. In the context of human resources and diversity and inclusion, mental shortcuts can result in accidental racism, sexism, homophobia, classism, ageism or ableism.

Decisions can either be made rationally and logically, or they can be made quickly using mental shortcuts. Organizations are constantly seeking to improve productivity and in doing so face a dilemma: do they want good decisions or fast ones?

This is where algorithms can come to the rescue. Tasks that require thought and consideration can now be outsourced to machines thanks to advances in machine learning, powerful computers and large datasets. A novel computer algorithm that is effective at solving a particular problem can take a significant investment of time and energy to develop initially, but once in use can save huge amounts of time. These algorithms can, in principle, make good decisions quickly. Using them can eliminate the trade-off between speed and quality that arises when asking humans to make decisions and complete tasks.

What is Algorithmic Bias?

The challenge in building these algorithms is to ensure that the decisions they make are good and not subject to algorithmic biases. An algorithmic bias is a systematic error, i.e. a mistake caused not by random chance but by an inaccuracy or failing in the algorithm itself. The most pernicious of these biases negatively affect one group of people more than another.
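
To make the distinction concrete, here is a small, purely illustrative sketch (the numbers are invented): a biased predictor’s errors have a non-zero average, whereas purely random errors average out to roughly zero.

```python
# Illustrative sketch only: a systematic error (bias) shows up as a non-zero
# mean error, whereas random chance shows up as spread around zero.
import numpy as np

rng = np.random.default_rng(0)
true_values = rng.uniform(0, 10, size=1_000)

unbiased_pred = true_values + rng.normal(0.0, 1.0, size=1_000)      # noise only
biased_pred = true_values + rng.normal(0.0, 1.0, size=1_000) + 2.0  # noise + offset

for name, pred in [("unbiased", unbiased_pred), ("biased", biased_pred)]:
    errors = pred - true_values
    print(f"{name}: mean error {errors.mean():+.2f}, spread {errors.std():.2f}")
```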

People have attempted to group the sources and causes of algorithmic bias into various categories, including confirmation bias, rescue bias, selection bias, sampling bias, orientation bias and modelling bias (amongst many others). All these labels can make the concept seem very complicated, and are arguably not that helpful.

Conceptually, algorithmic bias is not complicated, but to understand it we first need to discuss the three main components of a computer algorithm: the model, the data and the loss function. Bias can be introduced by each of these components, as we discuss below.

Modern computer algorithms work by a human building a mathematical model (a set of equations) that can replicate the brain’s ability to solve a very specific task. Ideally the model should be well motivated and based on insight. For example, when trying to predict how a ball will bounce off a wall, we could either guess a model and hope for the best, or we could use some physics and choose a model justified by our understanding of the natural world. An example of a biased model would be one that systematically predicts that balls bounce further than they do in real life. In a physically motivated model, this bias could be caused by the developers asserting that balls are bouncier than they really are; in a model chosen by guesswork, it could be caused by our guess being a bad one. When it is difficult (or even impossible) to choose a well-motivated model, there are clever mathematical tools that can be used to compare lots of models at the same time and choose the best one; alternatively, machine learning can be used. Machine learning basically boils down to building an incredibly complicated model that we think might be able to mimic a well-motivated one (as well as lots of poorly motivated ones), and then using large amounts of data to pull it in the right direction. Neural networks are examples of such models. Biases introduced by neural networks can be very difficult to understand and remove.
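
As a toy sketch of model bias using the bouncing-ball example (the restitution values below are assumptions chosen for illustration, not measurements):

```python
# Toy sketch of model bias. Physics: for a drop from height h, the bounce
# height is roughly e^2 * h, where e is the coefficient of restitution.
def predicted_bounce_height(drop_height_m, restitution=0.9):
    return restitution ** 2 * drop_height_m

REAL_RESTITUTION = 0.75  # assume real balls are less bouncy than the model says

for drop in (1.0, 2.0, 3.0):
    predicted = predicted_bounce_height(drop)
    actual = REAL_RESTITUTION ** 2 * drop
    print(f"drop {drop:.1f} m -> predicted {predicted:.2f} m, actual {actual:.2f} m")
# Every prediction is too high in the same direction: a systematic bias baked in
# by the developers' optimistic assumption, not an error due to random chance.
```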

Like the human brain, these models need to learn, and we train them using large datasets (huge datasets if we are using neural networks). If there are biases present in the data, the model will learn to replicate them. For example, if we were training an algorithm to identify whether a picture is of a nurse or a builder, we would need lots of pictures of nurses and builders. If in our training data all of the pictures of nurses were of women and all of the builders were of men, the algorithm may well (mistakenly) conclude that all women are nurses and all men are builders. This is not true globally, but from the biased data presented to the algorithm it is a completely legitimate conclusion. This is perhaps the most common source of bias in computer algorithms, but also the easiest to deal with: get more representative data!
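
A minimal sketch of that failure mode, assuming scikit-learn is available (the data are invented and deliberately confounded):

```python
# Toy sketch: gender and occupation are perfectly confounded in the training
# data, so the model learns "woman => nurse" even though that is not true
# in general.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Single feature: is_woman (1/0). Label: 1 = nurse, 0 = builder.
X_train = np.array([[1]] * 50 + [[0]] * 50)   # all nurses are women...
y_train = np.array([1] * 50 + [0] * 50)       # ...and all builders are men

model = LogisticRegression().fit(X_train, y_train)

# A woman who is a builder is now almost certain to be labelled "nurse".
print(model.predict_proba(np.array([[1]])))   # probability of "nurse" close to 1
# The fix is more representative data: women builders and men nurses in the set.
```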

We quantify how well the algorithm does by calculating something called a loss function. The mathematical model is run on the training data and tweaked to lower its loss function. If the loss function does not penalise bias, then bias can creep in. Because minority groups form only a small part of the population, they may have little influence on a simple loss function, and the algorithm may not care if it makes incorrect predictions for them so long as it still does a good job on the majority.
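
A small sketch of how this happens, with invented numbers:

```python
# Sketch: overall accuracy (a crude stand-in for the loss) looks healthy even
# though the algorithm performs badly on a small minority group.
import numpy as np

rng = np.random.default_rng(42)
n_majority, n_minority = 950, 50

correct = np.concatenate([
    rng.random(n_majority) < 0.97,   # ~97% correct on the majority group
    rng.random(n_minority) < 0.40,   # ~40% correct on the minority group
])
group = np.array(["majority"] * n_majority + ["minority"] * n_minority)

print(f"overall accuracy: {correct.mean():.1%}")   # looks fine on paper
for g in ("majority", "minority"):
    print(f"{g} accuracy: {correct[group == g].mean():.1%}")
# A loss built only from the overall figure barely moves when the minority
# group's predictions are wrong, so nothing pushes the model to fix them.
```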

There are many examples of biased algorithms being deployed commercially or by governments. For example, in 2016, the UK government introduced a tool that used a facial recognition algorithm to check identity photos in online passport applications. The algorithm struggled to cope with very light or dark skin and therefore made the application process more difficult for people in these groups. In the USA, an algorithm called COMPAS is used to predict reoffending rates and guide sentencing. In 2016 the news organization ProPublica found COMPAS to be racially biased against black defendants, and a study in 2018 showed randomly chosen untrained individuals made more accurate predictions than the algorithm. In 2018 Amazon scrapped their AI recruitment algorithm that was biased against women.

Overcoming Algorithmic Bias

The first step in reducing algorithmic bias is to define, at the start of the development process, what a fair outcome would look like. For example, when predicting reoffending rates, one measure of fairness could be how similar the algorithm’s accuracy is across different protected characteristics: the false positive rates for all ethnic groups, genders and so on should be similar. Without such metrics it is impossible to assess whether the algorithm is fair or not, and once defined they can be incorporated into the loss function.
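
A hedged sketch of one such fairness check: computing the false positive rate per group and the gap between groups (the function names and toy labels below are illustrative assumptions, not from the article):

```python
# Sketch: false positive rate per protected group, plus the gap between groups.
# A large gap flags unfairness; the gap can also be added to a loss as a penalty.
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

def fpr_by_group(y_true, y_pred, groups):
    rates = {g: false_positive_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    return rates, max(rates.values()) - min(rates.values())

# Toy labels and predictions for two groups, purely for illustration.
y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates, gap = fpr_by_group(y_true, y_pred, groups)
print(rates, gap)   # {'A': 0.5, 'B': 0.0} and a gap of 0.5: group A is worse off
```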

The second step is to ensure that any training data represent what should be, not what is. For example, when training a recruitment algorithm, the history of past hires will likely have been shaped by the mental shortcuts humans use when making decisions quickly, which we discussed earlier. An algorithm trained on these data will repeat those same shortcuts. Instead, an exemplar dataset reflecting best practice should be curated and used to train the algorithm. The data must also be balanced, containing examples of people from all categories and groups in substantial numbers. Amazon’s sexist AI recruitment algorithm, mentioned earlier, failed precisely because of biases baked into its training data.
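
A rough sketch of what a representativeness check might look like in practice (the column names and the naive oversampling step are assumptions for illustration):

```python
# Sketch: check how groups are spread across outcomes in the training data,
# then naively oversample the under-represented combinations.
import pandas as pd

training_data = pd.DataFrame({
    "label":  ["hire"] * 6 + ["reject"] * 4,
    "gender": ["man", "man", "man", "man", "man", "woman",
               "woman", "woman", "woman", "man"],
})

# Step 1: inspect the group/outcome counts; large imbalances are a warning sign.
print(pd.crosstab(training_data["gender"], training_data["label"]))

# Step 2 (naive): resample so every group/outcome combination is equally common.
balanced = (training_data
            .groupby(["gender", "label"], group_keys=False)
            .apply(lambda g: g.sample(5, replace=True, random_state=0)))
print(pd.crosstab(balanced["gender"], balanced["label"]))
```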

The third step is to take a Bayesian approach. Wherever possible, build a well-motivated generative model for the problem and fit it to your data. Such an approach can model and account for biases in the training data, and can even reduce the amount of training data needed. A Bayesian approach also allows you to estimate how accurate your predictions are on an individualised basis. The alternative, throwing large amounts of data at a deep neural network and hoping for the best, is almost always doomed to introduce bias and lack explainability.
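
As a minimal, hedged illustration of that Bayesian flavour (a conjugate Beta-Binomial toy, not anyone’s actual production model):

```python
# Sketch: a Beta-Binomial posterior for an unknown rate. The point is that the
# output is a distribution, so uncertainty can be reported per prediction, and
# groups with little data are flagged as uncertain rather than guessed at.
from scipy import stats

def posterior_rate(successes, trials, prior_a=1.0, prior_b=1.0):
    posterior = stats.beta(prior_a + successes, prior_b + trials - successes)
    low, high = posterior.interval(0.9)        # 90% credible interval
    return posterior.mean(), (low, high)

print(posterior_rate(successes=3, trials=5))       # little data: wide interval
print(posterior_rate(successes=600, trials=1000))  # lots of data: narrow interval
```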

Large tech organizations take this incredibly seriously. For example, Facebook have an independent team dedicated to auditing their algorithms, IBM have developed an algorithmic bias detection toolkit called AI Fairness 360, and publicly available training datasets are being constructed with diversity built in.

What we are Doing at MeVitae

One of our goals at MeVitae is to reduce the impact of unfair biases in the recruitment process and so create more diverse workforces. Aside from the obvious moral advantages of a fairer and more equal society, there are clear economic advantages too: more diverse workforces tend to be more productive, more profitable, better governed and more creative.

When short- or long-listing applicants for a job, recruiters spend on average seven seconds per CV. Mental shortcuts therefore dominate the decision-making process and biases creep in. At MeVitae, we have already developed solutions that can automatically identify and redact potentially biasing information from CVs within a company’s applicant tracking system (ATS). If you are interested in this service please contact us for more information.
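
As a purely illustrative sketch of the redaction idea (this is not MeVitae’s implementation; the patterns are simplistic assumptions, and a production system would need proper entity recognition):

```python
# Sketch: strip a few obviously identifying, potentially biasing fields from CV
# text with regular expressions. Real systems need far more than regexes.
import re

PATTERNS = {
    "email":         re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone":         re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "date of birth": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(cv_text: str) -> str:
    for field, pattern in PATTERNS.items():
        cv_text = pattern.sub(f"[{field} redacted]", cv_text)
    return cv_text

print(redact("Jane Doe, born 01/02/1990, jane@example.com, +44 20 7946 0000"))
```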

We are currently developing an algorithm that can shortlist candidates automatically and without bias. It has been built with fairness and explainability in mind from the ground up, and is based on research we have conducted with partner organizations such as the University of Oxford and the European Space Agency, using tools such as electroencephalogram (EEG) headsets, eye-tracking cameras and psychometric tests. For more information please visit our Labs pages.