

How ChatGPT Can Improve Racial Disparities In Healthcare


ChatGPT is the generative AI chatbot that has taken the world by storm since its November 2022 release. The AI system, developed by OpenAI, allows users to ask almost any question under the sun, and it will generate a response with startling fluency. The capabilities are wide-ranging; ChatGPT can develop a personalized workout plan, write songs and poetry, and help with building a resume. In the earlier stages of its release, there were criticisms about the biased responses the system was generating, and while OpenAI has worked to reduce them, those concerns have not been fully put to rest.

ChatGPT could become a powerful tool to mitigate the racial empathy gap in healthcare. A 2023 study from JAMA Internal Medicine found that ChatGPT generated higher-quality and more empathetic responses to patient inquiries than physicians did. The study examined 195 question-and-answer exchanges from October 2022 in which physicians responded to questions posted on a public social media forum. Researchers posed those same questions to ChatGPT, and a team of licensed healthcare professionals rated the quality and level of empathy of each response. The evaluators preferred the ChatGPT responses in 78.6% of the evaluations and rated them as significantly higher in quality and significantly more empathetic than the physician responses.

A wealth of research indicates that there is a racial empathy gap in healthcare. One 2013 study found that white people were less likely to react to pain experienced by Black people, and a 2016 study found racial and ethnic disparities in opioid prescribing, with non-Hispanic Black patients being less likely to receive opioid prescriptions. Maternal mortality rates may provide further evidence of this gap, with Black mothers dying at higher rates than their counterparts. Research indicates that empathy, or the lack thereof, plays a critical role in pain treatment, with Black patients being less likely, on average, to receive it.

Racial and ethnic disparities are a persistent problem in healthcare: Black patients are perceived as being less sensitive to pain and able to withstand more of it than their counterparts; misdiagnoses and untreated illness are common for Black patients; and Hispanic and Black patients are less likely to receive curative treatments than their counterparts. In the U.S., Latin Americans often experience cultural and language barriers in healthcare, which can affect the quality of their treatment. Generative AI systems may address this somewhat; ChatGPT currently supports several languages, including Spanish, French, Japanese and Arabic. It's possible that generative AI systems like ChatGPT could positively impact the treatment that non-English-speaking patients receive, thus improving health outcomes for different patient populations.

AI could also make healthcare coverage more accessible for marginalized populations. A Kaiser Family Foundation analysis of American Community Survey data for the nonelderly population found that in 2021, nonelderly American Indian/Alaska Native and Hispanic populations had the highest uninsured rates, followed by Native Hawaiian and Other Pacific Islander and Black populations. Within the Black community, there is a mistrust of the medical system; incidents of medical racism like the Tuskegee Experiment and the misappropriation of Henrietta Lacks' cells for research may have contributed to this mistrust. For patients who have had negative interactions with doctors and healthcare providers, AI systems may increase access to care, improve the quality of care and even increase early diagnoses.

Systemic racism impacts our structures in a number of different ways, and we should constantly be thinking about how to use the technology available to us to address these disparities and level the playing field for marginalized populations. Rather than fearing AI, we should work to make it better. While it can be a tool to address inequities, we must also understand the ways that our AI systems are inherently biased and work to address and mitigate those biases. Despite the promising ways that AI can be used, the public may be reluctant to rely on it for medical diagnoses. And although many have expressed fears about the rise of AI, whether it could be sentient, and what that would mean for society, AI is here to stay. We must consider how the AI tools available to us can help create a better, more equitable and less oppressive world for all.
