On a Sunday afternoon, I asked ChatGPT, “Do you have bias?”
It responded, “As an AI language model, I don’t have personal beliefs, values, or emotion. … However, the data that I was trained on is generated by humans and may contain implicit biases, which can influence my responses.”
I found the response to be direct and surprisingly honest.
From self-driving cars to platforms like Midjourney that can create outstanding artworks in just seconds, artificial intelligence has become prevalent in society. But the rise of AI raises a major question: Can AI exacerbate biases that already exist?
The answer is clearly yes, and that’s a problem. Researchers have found that racial, gender, and other types of biases exist in AI used for hiring, facial recognition, healthcare, the criminal justice system, and more, affecting who gets job offers, who receives medical treatment, and who is suspected of a crime.
In the world of higher education, colleges have started to use AI algorithms to evaluate applicants in an effort to identify the best and brightest students, according to ISM Insights. While this process is intended to help select candidates more fairly, the reverse could also happen.
For example, the enrollment management algorithms behind these AI systems may be narrowly focused on enrollment, favoring applicants who are most likely to enroll and who require the least financial aid. Similarly, because a large part of the training data comes from past admissions, AI could be sustaining or even deepening pre-existing biases in the admissions system, putting traditionally underrepresented groups at a disadvantage. As ChatGPT acknowledged in its response to my question, biased data input will lead to biased output.
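To see how biased input can produce biased output, consider a deliberately oversimplified sketch: a toy “model” that does nothing but learn acceptance rates from past decisions. The groups, numbers, and logic below are entirely invented for illustration; no real admissions system works this trivially, but the same feedback effect can occur in far more sophisticated models.

```python
# Hypothetical sketch: a toy "admissions model" that learns only the
# historical admit rate per group. All data are invented for illustration.
from collections import defaultdict

# Invented historical records: (applicant_group, admitted?)
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# "Training": compute the past admit rate for each group.
totals = defaultdict(int)
admits = defaultdict(int)
for group, admitted in history:
    totals[group] += 1
    admits[group] += admitted

rates = {g: admits[g] / totals[g] for g in totals}

# "Prediction": admit whenever the group's historical rate exceeds 0.5.
def predict(group):
    return rates[group] > 0.5

print(rates)               # {'group_a': 0.75, 'group_b': 0.25}
print(predict("group_a"))  # True  -- the old disparity is reproduced
print(predict("group_b"))  # False -- without any new judgment at all
```

The model never evaluates any applicant’s merit; it simply replays the disparity baked into its training data. Real systems use many more features, but any feature correlated with group membership can smuggle the same pattern back in.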
Palo Alto High School computer science teacher Christopher Bell said that he is a proponent of using AI to help complete tasks more efficiently and effectively, but he is concerned about the effects of AI bias in college admissions and in corporate hiring.
“Bias causing inequities has happened numerous times in the past with companies using AI to filter candidates for jobs,” Bell said. “I would hate for that to happen in education. … If colleges know how the algorithms they use are making the decisions they receive, and they are transparent with people applying what the systems are evaluating, then AI can be a great tool for colleges. It seems like there is already a lot of fear in the college application process, so being as clear as possible with students and families will be essential.”
While AI can help make the admission and enrollment process more efficient, the technology still requires human guidance and should not be solely relied upon to make decisions.
The tech industry needs people who understand the potential for bias and can work to address its harms. Developers of AI models need to ensure that the data fed into them are diverse and inclusive. We also need laws that protect civil rights in the implementation and use of AI.
“Companies need to hire a diverse set of employees when working on these systems so they are more aware of potential issues,” Bell said. “More emphasis needs to be put into analyzing and cleaning the data companies are using to train their AI models so we don’t introduce bias into the systems at the start. Plus, companies have to be transparent about the data they are using and what their models were trained to do. All of that, combined with more people taking CS in school, should help to make a difference.”
AI has been permeating almost all aspects of our lives. It is important that we address its ethical challenges as soon as possible, before flawed technology is irreversibly integrated into the fabric of society. Technology should work for everyone. We must make sure it does.