
The Stereotypes Lurking in Our Language

BlueSky Thinking Summary

As biases of many kinds are increasingly documented in academic research, a new tool developed by Tessa Charlesworth and her colleagues offers a fresh perspective on intersectional stereotypes.

The FISE procedure (Flexible Intersectional Stereotype Extraction) probes large volumes of text for how biases based on intersecting traits, such as race, gender, and class, reveal themselves in language.

By examining associations between descriptive terms and social categories, FISE shows that historically privileged groups, particularly those that are rich and white, dominate positive descriptors, while negative traits are more often linked to marginalized groups.
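The core idea, associating descriptive terms with intersectional social categories based on how they appear together in text, can be illustrated with a toy sketch. Note this is not the FISE implementation itself, which operates on large corpora with embedding-based association scores; the sentences, group cue words, and trait words below are invented placeholders.

```python
from collections import Counter

# Invented mini-corpus standing in for a large text collection.
sentences = [
    "the wealthy man was described as powerful and competent",
    "the poor woman was described as struggling and warm",
]

# Intersectional groups (class x gender here) with illustrative cue words.
groups = {
    ("rich", "male"): {"wealthy", "man"},
    ("poor", "female"): {"poor", "woman"},
}
traits = {"powerful", "competent", "struggling", "warm"}

# Count how often each trait appears in a sentence alongside each
# group's cue words (a crude stand-in for embedding similarity).
scores = Counter()
for sent in sentences:
    words = set(sent.split())
    for trait in traits & words:
        for group, cues in groups.items():
            scores[(trait, group)] += len(cues & words)

def strongest_group(trait):
    """Return the intersectional group most associated with a trait."""
    return max(groups, key=lambda g: scores[(trait, g)])

for trait in sorted(traits):
    print(trait, "->", strongest_group(trait))
```

On this tiny corpus, positive traits like "powerful" land on the rich/male group and traits like "struggling" on the poor/female group, mirroring, in miniature, the kind of asymmetry FISE surfaces at scale.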

Because it reflects real-world data and can track changes over time, FISE promises to be a powerful tool for understanding and addressing intersectional biases.

As AI technologies increasingly pervade our lives, recognizing and mitigating these biases will be paramount to developing systems that are fairer and more equitable.

How might the insights from FISE shape the future of bias detection and correction in AI?