
Tech Bias in Machine Learning

  • Franki Faugno
  • Nov 27, 2021
  • 3 min read

I would be remiss if I didn’t address the social impact that big data and machine learning have on the world. The prevalence of artificial intelligence and machine learning technologies is a natural result of the rapid growth of technology and the availability of data. The biggest companies in the world invest massive amounts of time and resources into these technologies, which are also quickly working their way into nearly every sector. Although machine learning has the potential to do a lot of good in the world, I think it is just as important to analyze the potential harms.

AI systems are being applied in many different ways across industries, from huge companies like Google to chatbots on small boutique websites. This means that the outputs and implementations of these algorithms have massive implications. Technologies like these are often subject to ‘tech washing’: the assumption that AI technologies must be neutral and accurate simply because they are automated. This is not the case. In reality, AI and ML technologies are subject to all kinds of biases that are obscured by this veneer of neutrality and by the lack of transparency around the algorithms themselves.

Bias can creep in from multiple places. Data bias can arise when incomplete or unrepresentative datasets are used to train the algorithms. Bias can also become ingrained when certain groups are represented in more negative ways than others. For example, if you search for CEO on Google, you might be flooded with images of white men in suits; search for criminal or prisoner, and the racial representation is much different. Data is subject to the same kinds of biases that exist among individuals, but it is much more insidious when a computer algorithm processes and reproduces this information on a mass scale. Without human empathy, critical thinking skills, and an understanding of sociocultural implications, these algorithms have a harder time course-correcting.
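
To make the data-bias mechanism concrete, here is a minimal sketch of how a skewed training set becomes a skewed model. Everything in it is hypothetical: the labels, the 90/10 split, and the trivial frequency-based “model” are made up purely to show that a learner with no notion of fairness will faithfully reproduce whatever imbalance it was fed.

```python
# A minimal sketch of how dataset skew becomes model skew.
# All names and numbers here are hypothetical, for illustration only.
from collections import Counter

# Hypothetical training set: (search query, demographic shown in the image).
# The 90/10 imbalance stands in for an unrepresentative data collection.
training_data = [("ceo", "white_man")] * 90 + [("ceo", "woman")] * 10

def train(data):
    """'Train' a trivial model: for each query, remember label frequencies."""
    model = {}
    for query, label in data:
        model.setdefault(query, Counter())[label] += 1
    return model

def predict(model, query):
    """Return the most frequent label seen for the query."""
    return model[query].most_common(1)[0][0]

model = train(training_data)
print(predict(model, "ceo"))  # -> 'white_man': the skew is now 'the answer'
```

Nothing in this toy learner is malicious; it simply has no way to distinguish a sampling artifact from ground truth, which is exactly why the composition of the training data matters so much.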


If you need to understand these implications in practice, consider this: AI and ML algorithms are currently being used to assess job applicants in video interviews. The programs pattern-match thousands of facial micromovements and vocal changes against those of high-performing employees to help select applicants. It is no secret that many companies lack representation of women and minorities in high-ranking positions. So when these biases are reinforced through algorithms, the effects can be extremely harmful.
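
The reinforcement loop here can be sketched in a few lines. The feature vectors, the cosine-similarity scoring, and the applicant names below are all invented for illustration; the point is only that if the “high performer” pool is homogeneous, any similarity-to-incumbents score will penalize different profiles regardless of ability.

```python
# Hypothetical sketch: scoring applicants by similarity to incumbent
# "high performers". All vectors and names are made up for illustration.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# If the incumbent pool is homogeneous, its centroid encodes that homogeneity.
high_performers = [[0.9, 0.1, 0.8], [0.85, 0.15, 0.75], [0.95, 0.05, 0.9]]
centroid = [sum(col) / len(col) for col in zip(*high_performers)]

applicants = {
    "similar_to_incumbents": [0.9, 0.1, 0.8],
    "different_profile": [0.2, 0.9, 0.3],
}
for name, features in applicants.items():
    print(name, round(cosine(centroid, features), 3))
# The 'different' applicant scores lower no matter how capable they are.
```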

Why should companies care about this? AI and ML algorithms are used in a variety of marketing practices, including targeted advertising. Consumers are not stupid, and many respond negatively when they feel their targeted ads are reductive. Additionally, operating on unrepresentative data simply leads to less accurate results in the long run. If a company like Patagonia only receives customer feedback from one sociocultural group, it will only make ‘improvements’ geared toward that group. Over time, it will pigeonhole itself until it alienates other potential customers.

How can we work toward reducing these biases? There are a number of things companies can do. In the Patagonia example, one step is using a gen-pop certified sample, which means specifically collecting results from a sample that is proportionate to the gender and ethnic makeup of the population. But this is one attempt at a small fix for a large-scale problem. If companies continue to use AI and ML, they need to stay vigilant and pay continued attention to the social implications of their practices.
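
For the sampling idea, here is a minimal sketch of proportional quota sampling: sizing each feedback group to match population proportions rather than taking whoever happens to respond. The group names, population shares, and pool sizes are all hypothetical.

```python
# A minimal sketch of proportional (quota) sampling, with made-up numbers.
import random
from collections import Counter

# Hypothetical population makeup the sample should mirror.
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

def proportional_sample(respondents_by_group, n, shares, seed=0):
    """Draw n responses so each group's share matches the population."""
    rng = random.Random(seed)
    sample = []
    for group, share in shares.items():
        k = round(n * share)
        sample += rng.sample(respondents_by_group[group], k)
    return sample

# Hypothetical raw feedback pool, heavily skewed toward one group:
pool = {
    "group_a": [f"a{i}" for i in range(500)],
    "group_b": [f"b{i}" for i in range(40)],
    "group_c": [f"c{i}" for i in range(30)],
}
balanced = proportional_sample(pool, 100, population_share)
print(Counter(item[0] for item in balanced))  # -> {'a': 60, 'b': 25, 'c': 15}
```

Note the catch this sketch makes visible: you can only rebalance a sample if you have collected enough responses from the underrepresented groups in the first place, which is exactly why this is a small fix for a large-scale problem.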


If you would like to read more of my hot takes on this topic, you can take a look at my master’s capstone here.

