Sam L.

We Should Be Wary of Machine Learning


Robot uprisings. That is the fear that bubbles to the surface whenever machine learning comes up - “I’m sorry Dave, I’m afraid I can’t do that” - but there’s another issue we should be paying attention to, one that is far more pressing and already affecting the world around us. Machine learning is a tool. It can be used to build and to destroy, but unlike most tools it performs these actions in ways its creators don’t understand. Machine learning researchers refer to this as the black box. Neural networks, the most prominent type of machine learning, pass input through a black box of unexplainable layers of neurons and produce an output. They test themselves against large datasets and adjust their internal parameters until they give the correct outputs for the given inputs, but the way the resulting model operates doesn’t have to make sense to us. This creates a rather significant problem.
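To make the black box idea concrete, here is a minimal sketch (my own toy example, assuming Python with scikit-learn, not anything from the cases below): even a tiny trained network is just a stack of learned weight matrices, with no human-readable explanation for any individual decision.

```python
# Toy illustration of the "black box": train a small neural network,
# then look at what the "explanation" of its decisions actually is.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# A synthetic dataset standing in for whatever real data a company might use.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X, y)

print("accuracy on the training data:", model.score(X, y))

# The model's entire "reasoning" lives in these weight matrices -
# numbers that fit the data but carry no human-readable meaning.
for i, weights in enumerate(model.coefs_):
    print(f"layer {i} weight matrix shape: {weights.shape}")
```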


Because of the black box, machine learning programs can make decisions that the programmer could never predict. This can be magical when the stakes are low (think of all the YouTube videos of ML programs messing up retro video games) but absolutely terrifying when the program can affect the real world. In 2016 Microsoft released an innocent machine learning chatbot named Tay into the toxic swamp of Twitter. The saying “you are what you eat” is very true of learning machines, provided that by “eat” you mean “train against” and the food is the dataset. Tay was shut down in less than sixteen hours because it had learnt various problematic ideologies (basically, it became very racist). This happened because the bot’s training data was the replies to its own tweets. The idea, as Microsoft put it, was that “the more you talk the smarter Tay gets” (unfortunate phrasing, considering how the bot ended up behaving). Although we can’t see inside the bot’s black box, for anyone who understands both the internet and machine learning, Tay’s fate was rather predictable. Sadly, racism seems to be a constant problem with machine learning rather than an outlier.


In January 2020 an African American man from Detroit named Robert Williams was arrested at his home. Police had run facial recognition on security footage of a shoplifting incident, and the program had identified him. Robert Williams spent the night in jail for a crime he didn’t commit. Facial recognition programs built with machine learning have been known to misidentify people with darker skin. A study by the National Institute of Standards and Technology tested 189 algorithms and found that some misidentified Asian and African American faces up to 100 times more often than white faces. Because of the black box we can’t open up the machine and find its race problem, which makes incorporating this technology into policing a terrifying prospect.
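It is worth noting how a disparity like that gets surfaced at all, since the black box itself won’t tell you. A minimal sketch (my own illustration, not NIST’s actual methodology): run the system over labelled test pairs and compare false-match rates group by group.

```python
# Sketch of measuring per-group error rates from labelled test data.
def false_match_rate(predicted_match, truly_match):
    """Share of genuinely non-matching pairs the system wrongly calls a match."""
    wrong = sum(1 for p, t in zip(predicted_match, truly_match) if p and not t)
    total = sum(1 for t in truly_match if not t)
    return wrong / total

# Hypothetical results per group: (system said "match", pair truly matches).
results_by_group = {
    "group_a": ([True, False, True, False, False], [False, False, True, False, False]),
    "group_b": ([False, False, True, False, False], [False, False, True, False, False]),
}

for group, (predicted, truth) in results_by_group.items():
    print(group, "false-match rate:", false_match_rate(predicted, truth))
```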

Job applications are another area where machine learning has been put to use. As you can guess, I’m not a fan. There are several machine-learned programs available that parse resumes and surface the ones that best match the resumes of your current employees. The naive idea about such a technology is that, as a computer program, it is untainted by the biases of a human. Irrelevant information such as gender, name, place of birth, race, and age would simply be ignored by the algorithm... right? No. In fact, machine learning is known to amplify the biases in its training data. Does your industry typically employ workers of a specific gender? Well, great, because machine learning will work to preserve that! I’m not even joking. In 2018 Amazon found that its machine learning software for recruitment had unfairly favoured men over women for technical positions. Inclusion of the word “women’s” (as in “women’s basketball team”) lowered an applicant’s ranking. On top of that, the model disadvantaged women in subtler ways, such as favouring words like “execute” and “capture” that appear more often in male resumes. This was because the training data came from Amazon’s hiring records over the previous ten years. The tech industry is undeniably male-dominated, and the machine picked up on that pattern.
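To see how a screener inherits bias from its history, here is a deliberately tiny sketch (toy data and a simple classifier of my own, assuming scikit-learn - not Amazon’s actual system): when the historical “hired” labels skew against one group, the model learns to penalise words associated with that group.

```python
# Toy resume screener trained on biased historical hiring decisions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, executed trading systems",
    "women's basketball team captain, built trading systems",
    "led robotics club, captured market data pipelines",
    "women's coding society lead, built data pipelines",
]
hired = [1, 0, 1, 0]  # biased historical labels: only "male-pattern" resumes were hired

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the word "women": it comes out negative,
# i.e. the screener penalises resumes that mention it.
weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
print("weight for the word 'women':", weights["women"])
```

The point isn’t the specific words; it’s that nothing in the pipeline knows or cares that the pattern it found is a historical injustice rather than a signal of merit.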


Computers are often viewed as entities of logic and truth. We trust machines to be impartial and to never make mistakes, but as machine learning becomes more and more dominant, that confidence must loosen. Because of the black box nature of machine learning models, we can’t know how they operate without mass testing. Without regulations, testing is the responsibility of the party creating the product... which isn’t a great system. Companies have to take their datasets more seriously before using their models to make real-world decisions. There are countless other examples of machine learning programs interacting with and influencing our society. From TikTok and Facebook’s recommendation algorithms spreading conspiracy theories, to the UK issuing, then scrapping, algorithmically predicted exam grades that were biased against low-income students, we have a lot more to worry about when it comes to machine learning than robot uprisings!

 

References

Dastin, Jeffrey. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women.” Reuters, Thomson Reuters, 10 Oct. 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.


Ghaffary, Shirin. “How to Avoid a Dystopian Future of Facial Recognition in Law Enforcement.” Vox, 10 Dec. 2019, https://www.vox.com/recode/2019/12/10/20996085/ai-facial-recognition-police-law-enforcement-regulation.


Porter, Jon. “UK Ditches Exam Results Generated by Biased Algorithm after Student Protests.” The Verge, 17 Aug. 2020, https://www.theverge.com/2020/8/17/21372045/uk-a-level-results-algorithm-biased-coronavirus-covid-19-pandemic-university-applications.


WSJ Staff. “Inside TikTok’s Algorithm: A WSJ Video Investigation.” The Wall Street Journal, Dow Jones & Company, 21 July 2021, https://www.wsj.com/articles/tiktok-algorithm-video-investigation-11626877477.


Vincent, James. “Twitter Taught Microsoft’s AI Chatbot to Be a Racist Asshole in Less than a Day.” The Verge, 24 Mar. 2016, https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist.


Wall, Sheridan. “LinkedIn’s Job-Matching AI Was Biased. The Company’s Solution? More AI.” MIT Technology Review, 25 June 2021, https://www.technologyreview.com/2021/06/23/1026825/linkedin-ai-bias-ziprecruiter-monster-artificial-intelligence/.


Winick, Erin. “Amazon Ditched AI Recruitment Software Because It Was Biased against Women.” MIT Technology Review, 2 Apr. 2020, https://www.technologyreview.com/2018/10/10/139858/amazon-ditched-ai-recruitment-software-because-it-was-biased-against-women/.


Wong, Julia Carrie. “Down the Rabbit Hole: How QAnon Conspiracies Thrive on Facebook.” The Guardian, Guardian News and Media, 25 June 2020, https://www.theguardian.com/technology/2020/jun/25/qanon-facebook-conspiracy-theories-algorithm.


“Toxic TikTok.” NewsGuard, https://www.newsguardtech.com/special-reports/toxic-tiktok/.
