Photo courtesy of Markus Spiske on Unsplash.com

Racism Isn’t New, but What About in AI?

February 5, 2023

Over the past few decades, technology has developed rapidly, and the field of artificial intelligence (AI) has come to touch ever larger portions of the population through its integration into everyday tasks. This speed of advancement, however, has been met with growing concerns, one of which is racism. Because AI models pull from biased datasets, they have produced discriminatory outputs, and a standardized solution has yet to be established.

The creation of artificial intelligence revolutionized society as machinery gained the ability to do things that would usually require human intelligence. Well-known examples of AI include Apple’s Siri, Amazon’s Alexa, and even the facial-recognition unlocking feature on many cell phones. While AI has greatly improved quality of life, its drawbacks have led to debate over the ethics of these developments. The topic has been discussed for years but recently gained more attention as highly publicized AI systems, such as ChatGPT, draw increasing complaints about discriminatory bias.

In June 2015, software engineer Jacky Alciné received a link from a friend to photos uploaded to Google Photos, a service that sorts photos into folders using image recognition. When Alciné clicked on the link, he found that over 80 photos, which included a black friend of his, had been placed in a folder labeled “gorillas.” Later, in October 2016, Joy Buolamwini, a black computer scientist hoping to further research in facial recognition, attempted to track her face with an AI model. The system could not recognize her face until she put on a white mask, at which point it tracked her immediately.

Similar incidents have occurred at larger corporations. On Amazon, the N-word appeared in the descriptions of black action figures, do-rags, and even shower curtains. However, the Chinese company selling these items was not to blame; the fault lay with the AI language program that translated the descriptions. Additionally, the popular text-generating ChatGPT has produced racist, sexist, and otherwise derogatory statements despite guardrails implemented to prevent such incidents.

The AI model itself isn’t fully responsible for its racially biased outputs. A model learns how to operate from the datasets used during training, which later allow it to perform its intended tasks. Because the AI learns by reading through millions of websites and other sources, that information is the main source of the racial bias. As the model interprets the data it is given, it cannot differentiate between what is harmful and offensive and what isn’t. That information is then fed into algorithms, such as those of language programs, which is how insulting outputs occur. Computers don’t develop biased behaviors on their own; they learn them from society and how we behave.
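To see the mechanism in the simplest possible terms, consider the following minimal sketch in Python. It is not any company’s real system; the group names and “training data” are invented for illustration. A naive word-counting classifier is trained on a deliberately skewed dataset, and it then reproduces that skew when judging a perfectly neutral sentence.

from collections import Counter, defaultdict

# Invented toy data with an artificial skew: one hypothetical group
# only ever appears with positive labels, the other only with negative.
training_data = [
    ("group_a is friendly", "positive"),
    ("group_a is talented", "positive"),
    ("group_a is kind", "positive"),
    ("group_b is hostile", "negative"),
    ("group_b is lazy", "negative"),
    ("group_b is rude", "negative"),
]

# "Training": count how often each word co-occurs with each label.
word_label_counts = defaultdict(Counter)
for sentence, label in training_data:
    for word in sentence.split():
        word_label_counts[word][label] += 1

def predict(sentence):
    """Score a sentence by summing the label counts of its words."""
    scores = Counter()
    for word in sentence.split():
        scores.update(word_label_counts[word])
    return scores.most_common(1)[0][0] if scores else "unknown"

# The same neutral sentence is judged differently based on the group
# name alone, because the skewed data taught a spurious association.
print(predict("group_a is a doctor"))  # prints: positive
print(predict("group_b is a doctor"))  # prints: negative

Real language models are vastly more complex, but the underlying dynamic is the same: the model faithfully reflects the statistical patterns of its training data, including the prejudiced ones.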

In the past, many major companies, like Google and Microsoft, have written off incidents of discrimination in their AI as small errors and refused to address them on a larger scale. The problem is that the longer a resolution is delayed, the harder it becomes to convince society that these occurrences were mere accidents. As time passes, more incidents will continue to occur, and these companies, which spend billions on AI development, won’t be able to ignore the issue much longer.

As technology continues to evolve and grow more intricate, altering algorithms and pinpointing the root causes of bias becomes ever more challenging. Simply filtering out some data will no longer fix the problem. Scientists must consider many more parts of an AI system before interfering with its constantly changing algorithms, which makes finding a solution extremely arduous. Today, many debate whether it is more beneficial to let AI catch up to society at its own pace or to have humans intercede and adjust its methodology to rid it of prejudice. Those who support the former argue that human involvement will only cause more problems, since engineers would be projecting their own opinions onto the AI. Those in support of the latter reason that by defending against the underrepresentation of minorities, we can avoid injustices in society.

Racism, sexism, and religious discrimination have always been a misfortune of society, a consequence of the natural biases of humans, and that bias has infiltrated AI too. What happens next is up to us.
