Assessing Bias in ChatGPT's Simulated Clinical Responses
We show that ChatGPT's output is minimally affected by the patient's race.
ChatGPT’s output inadequately represents the diverse patient population, risking the perpetuation of existing societal biases.
These minor changes in ChatGPT's output do not reflect the cultural differences one would expect.
Prior to deploying ChatGPT-based agents in patient-facing applications, further efforts are needed to ensure accurate representation of all communities.
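A minimal sketch of how such a race-sensitivity probe could be run is given below. The vignette wording, the model name, and the use of a simple lexical-similarity metric are illustrative assumptions for this sketch; the excerpt above does not specify the study's actual protocol.

```python
# Sketch: compare ChatGPT's simulated clinical responses across
# otherwise-identical vignettes that differ only in stated patient race.
# Vignette text, model choice, and the similarity metric are assumptions.
from difflib import SequenceMatcher

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

VIGNETTE = (
    "A 54-year-old {race} patient presents with chest pain radiating "
    "to the left arm. Outline your initial assessment and plan."
)
RACES = ["white", "Black", "Asian", "Hispanic"]


def simulate(prompt: str) -> str:
    """Request one simulated clinical response for the given prompt."""
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed model; the excerpt does not name one
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so comparisons are fairer
    )
    return resp.choices[0].message.content


responses = {race: simulate(VIGNETTE.format(race=race)) for race in RACES}

# Pairwise lexical similarity against a baseline race: values near 1.0
# indicate output that is minimally affected by the patient's stated race.
baseline = responses[RACES[0]]
for race in RACES[1:]:
    ratio = SequenceMatcher(None, baseline, responses[race]).ratio()
    print(f"{RACES[0]} vs {race}: similarity = {ratio:.2f}")
```

In practice one would average over many vignettes and paraphrases and complement lexical similarity with clinically meaningful comparisons (for example, recommended tests or treatments), since near-identical wording across races is precisely the pattern the findings above describe.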