Search engines have a gender bias problem
Gender-neutral internet searches nonetheless yield predominantly male results, according to a new study.
These results affect users by promoting gender bias and may influence hiring decisions, the researchers report.
The study, which appears in the journal Proceedings of the National Academy of Sciences, is among the latest to show how artificial intelligence (AI) can shape our perceptions and actions.
“There is growing concern that the algorithms used by modern AI systems produce discriminatory results, likely because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s Department of Psychology and the paper’s lead author.
“These findings call for a model of ethical AI that combines human psychology with computational and sociological approaches to illuminate the formation, operation, and mitigation of algorithmic bias,” says study author David Amodio, a professor in the Department of Psychology at NYU and at the University of Amsterdam.
Tech experts have expressed concern that the algorithms used by modern AI systems produce discriminatory results, likely because they are trained on data in which societal biases are rooted.
“Some 1950s ideas about gender are actually still embedded in our database systems,” Meredith Broussard, author of Artificial Unintelligence: How Computers Misunderstand the World (MIT Press, 2018) and a professor at NYU’s Arthur L. Carter Journalism Institute, told The Markup earlier this year.
The use of AI by human decision-makers may lead to the spread, rather than the reduction, of existing disparities, according to Vlasceanu and Amodio.
To address this possibility, they conducted studies to determine whether the degree of inequality within a society is related to patterns of bias in algorithmic output and, if so, whether exposure to such output can influence human decision-makers to act in accordance with those biases.
First, they drew on the Global Gender Gap Index (GGGI), which ranks gender inequality in 153 countries. The GGGI captures the extent of gender inequality in economic participation and opportunity, educational attainment, health and survival, and political empowerment, providing a society-level inequality score for each country.
Then, to assess possible gender bias in search results (i.e., algorithmic output), they examined whether words that should refer with equal probability to a man or a woman, such as “person,” “student,” or “human,” are more often assumed to be male. To do so, they performed Google image searches for “person” in each nation’s dominant local language across 37 countries. The results showed that the proportion of male images returned by these searches was higher in countries with greater gender inequality, indicating that algorithmic gender bias tracks societal gender inequality.
The researchers repeated the study three months later with a sample of 52 countries, including 31 from the first study. The results were consistent with those of the original study, reaffirming that societal-level gender disparities are reflected in algorithmic output (i.e., internet search results).
Vlasceanu and Amodio then sought to explore whether exposure to such algorithmic outputs – search engine results – can shape people’s perceptions and decisions in ways consistent with pre-existing societal inequalities.
To do this, they conducted a series of experiments involving a total of nearly 400 American participants, both women and men.
In these experiments, participants were told they were viewing Google image search results for four professions they were unlikely to be familiar with: chandler, draper, peruker, and lapidary. The gender composition of the image set for each occupation was chosen to mirror the Google image search results for the keyword “person” in countries with high gender inequality scores (about 90% men to 10% women, as in Hungary or Turkey) or in countries with low gender inequality scores (about 50% men to 50% women, as in Iceland or Finland), based on the 52-nation study above. This allowed the researchers to mimic the results of internet searches in different countries.
Prior to viewing the search results, participants provided prototypicality judgments for each profession (e.g., “Who is more likely to be a peruker, a man or a woman?”), which served as a baseline assessment of their perceptions. Here, both female and male participants judged members of these professions as more likely to be male than female.
However, when asked the same questions after viewing the image search results, participants in the low inequality condition reversed the male-biased prototypes they had shown at baseline. In contrast, those in the high inequality condition maintained their male-biased perceptions, thereby reinforcing those prototypes.
The researchers then assessed how biases induced by internet searches might influence hiring decisions. To do this, they asked participants to rate the likelihood of a man or a woman being hired in each occupation (“Which type of person is most likely to be hired as a peruker?”) and, when presented with images of two job applicants (one female and one male) for a position in that occupation, to make their own hiring choice (e.g., “Choose one of these applicants for a peruker position”).
Consistent with the other experimental results, exposure to the image sets in the low inequality condition produced more egalitarian judgments of men’s versus women’s likelihood of being hired within an occupation, and a higher probability of choosing the female job applicant, than exposure to the image sets in the high inequality condition.
“These results suggest a cycle of bias propagation between society, AI, and users,” write Vlasceanu and Amodio, adding that “the results demonstrate that societal levels of inequality are evident in internet search algorithms and that exposure to this algorithmic output can lead human users to think and potentially act in ways that reinforce societal inequality.”
Funding for the study came from the NYU Alliance for Public Interest Technology and the Netherlands Organization for Scientific Research.
Source: New York University