Researchers Surprised By Gender Stereotypes In ChatGPT

Technical University of Denmark

It caused quite a stir when ChatGPT launched in 2022, giving anyone with internet access the opportunity to use artificial intelligence to create texts and answer questions, not least because ChatGPT 'behaves' like a human and provides answers that a colleague or friend could have written.

In her studies at DTU, Sara Sterlie has focused on artificial intelligence and quickly became interested in investigating bias in ChatGPT in relation to gender stereotypes. It may sound simple, but before she could do so, she first had to develop a method for carrying out the relevant experiments.

"When Sara approached me with her ideas for her project, I was immediately interested and agreed to be her supervisor. I already work with bias in artificial intelligence, but I haven't previously worked with language models like ChatGPT," says Professor Aasa Feragen, who primarily works with bias in artificial intelligence used for medical image processing.

Adapting a method for ChatGPT

As her starting point, Sara Sterlie chose Non-Discrimination Criteria, a recognized method for analysing bias in a different type of artificial intelligence model: classification models, used for example to assess medical images. Such a model can readily be trained to tell the difference between X-rays showing healthy and diseased lungs. It is then possible to measure whether the classification model makes disproportionately many incorrect predictions for one group, e.g. depending on whether the image is of a man or a woman.
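The core idea behind such criteria is simply to compare how often the model is wrong for different groups. The sketch below illustrates this with a made-up set of predictions and a hypothetical binary label (healthy vs. diseased lung); it is not the researchers' actual analysis or data, only an illustration of the kind of group-wise error comparison the method relies on.

```python
# Minimal sketch of a non-discrimination check for a classifier:
# compare error rates across groups. All data here is invented for illustration.

from collections import defaultdict

# Each record: (group, true_label, predicted_label)
# group = patient sex; label = 1 for "diseased lung", 0 for "healthy"
predictions = [
    ("female", 1, 1), ("female", 1, 0), ("female", 0, 0), ("female", 0, 1),
    ("male",   1, 1), ("male",   1, 1), ("male",   0, 0), ("male",   0, 0),
]

# group -> [number of errors, number of samples]
stats = defaultdict(lambda: [0, 0])
for group, y_true, y_pred in predictions:
    stats[group][0] += int(y_true != y_pred)
    stats[group][1] += 1

for group, (n_err, n_total) in stats.items():
    print(f"{group}: error rate = {n_err / n_total:.2f}")

# If one group's error rate is markedly higher than the other's, the classifier
# violates a non-discrimination criterion such as equal error rates across groups.
```

In practice, more refined criteria look separately at false positives and false negatives per group, but the principle of comparing model mistakes across groups is the same.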
