TY - JOUR
T1 - Gender bias in text-to-image generative artificial intelligence depiction of Australian paramedics and first responders
AU - Currie, Geoff
AU - Hewis, Johnathan
AU - Ebbs, Phillip
PY - 2024
Y1 - 2024
AB - Introduction: In Australia, almost 50 % of paramedics are female, yet they remain under-represented in stereotypical depictions of the profession. The potentially transformative value of generative artificial intelligence (AI) may be limited by stereotypical errors, misrepresentations and bias. Increasing use of text-to-image generative AI, like DALL-E 3, could reinforce gender and ethnicity biases and therefore warrants objective evaluation. Method: In March 2024, DALL-E 3 was utilised via GPT-4 to generate a series of individual and group images of Australian paramedics, ambulance officers, police officers and firefighters. In total, 82 images were produced, including 60 individual-character images and 22 multiple-character group images. All 326 depicted characters were independently analysed by three reviewers for apparent gender, age, skin tone and ethnicity. Results: Among first responders, 90.8 % (N = 296) were depicted as male, 90.5 % (N = 295) as Caucasian, 95.7 % (N = 312) with a light skin tone, and 94.8 % (N = 309) as under 55 years of age. For paramedics and police officers, the gender distribution differed significantly from actual Australian workforce data (all p < 0.001). Among the images of individual paramedics and ambulance officers (N = 32), DALL-E 3 depicted 100 % as male, 100 % as Caucasian and 100 % with a light skin tone. Conclusion: Gender and ethnicity bias is a significant limitation of text-to-image generative AI such as DALL-E 3 when depicting Australian first responders. Generated images disproportionately over-represent males, Caucasians and light skin tones, and do not reflect the diversity of paramedics in Australia today.
UR - http://www.scopus.com/inward/record.url?scp=85210914645&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85210914645&partnerID=8YFLogxK
U2 - 10.1016/j.auec.2024.11.003
DO - 10.1016/j.auec.2024.11.003
M3 - Article
C2 - 39627045
SN - 2589-1375
SP - 1
EP - 7
JO - Australasian Emergency Care
JF - Australasian Emergency Care
ER -