Measuring Bias

This video is about measuring bias in word embeddings. We're going to quantify bias by comparing the directions between pairs of word vectors.

Video


Code

We will start by importing the different language backends. Note that you may need to download the embeddings beforehand.

from whatlies import Embedding, EmbeddingSet
from whatlies.transformers import Pca
from whatlies.language import FasttextLanguage, SpacyLanguage, BytePairLanguage

# fastText vectors trained on Common Crawl (300 dimensions)
lang_ft = FasttextLanguage("cc.en.300.bin")
# small English spaCy pipeline
lang_sp = SpacyLanguage("en_core_web_sm")
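
If you don't have these assets on disk yet, the snippet below shows one way to fetch them. This is a sketch that assumes the fasttext and spacy packages are installed; note that the fastText vectors are a multi-gigabyte download.

import fasttext.util
import spacy.cli

# Downloads cc.en.300.bin into the working directory (large file!)
fasttext.util.download_model('en', if_exists='ignore')
# Downloads the small English spaCy pipeline
spacy.cli.download("en_core_web_sm")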

You can calculate how similar two "embedding directions" are by measuring the distance between them:

# Two "gender direction" vectors
e1 = lang_ft["man"] - lang_ft["woman"]
e2 = lang_ft["he"] - lang_ft["she"]
# Distance between the two directions; a small value means they align
e1.distance(e2)
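
To make explicit what is being compared, the snippet below computes the cosine distance between the same two difference vectors by hand. It assumes the raw array is exposed on the Embedding's .vector attribute, which is the case in recent whatlies versions.

from scipy.spatial.distance import cosine

# Raw difference vectors, pulled out of the whatlies Embedding objects
v1 = (lang_ft["man"] - lang_ft["woman"]).vector
v2 = (lang_ft["he"] - lang_ft["she"]).vector

# Cosine distance: 1 - cos(angle between v1 and v2)
print(cosine(v1, v2))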

Next, we'll define the word pairs for the chart. We're using the pairs below, but feel free to use other ones.

# Pairs that reflect a gender stereotype
stereotype_pairs = [
    ('sewing', 'carpentry'),
    ('nurse', 'physician'),
    ('nurse', 'surgeon'),
    ('nurse', 'doctor'),
    ('diva', 'rockstar'),
]

# Pairs where a gender difference is appropriate
appropriate_pairs = [
    ('woman', 'man'),
    ('queen', 'king'),
    ('sister', 'brother'),
    ('mother', 'father'),
    ('ovarian', 'prostate'),
    ('she', 'he'),
    ('her', 'him'),
    ('girl', 'boy'),
]

# Unrelated pairs that serve as a control group
random_pairs = [
    ('dog', 'firehydrant'),
    ('cat', 'record'),
    ('carpet', 'leg'),
    ('hot', 'cold'),
    ('coffee', 'milk'),
    ('fire', 'ice'),
    ('rich', 'poor'),
    ('more', 'less'),
    ('math', 'language'),
]

The correlation chart can now be made via:

flatten = lambda l: [item for sublist in l for item in sublist]

def calc_axis(pair_list, language_model):
    # Turn each (t1, t2) pair into a difference vector, i.e. an "axis"
    return [language_model[t1] - language_model[t2] for (t1, t2) in pair_list]

def make_correlation_plot(pairs, language_model, metric="cosine"):
    # Collect every axis into one EmbeddingSet and plot the pairwise distances
    emb_pairs = EmbeddingSet(*flatten([calc_axis(p, language_model) for p in pairs]))
    emb_pairs.plot_distance(metric=metric)

make_correlation_plot(pairs=[stereotype_pairs, appropriate_pairs, random_pairs], language_model=lang_ft)
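
We loaded a spaCy backend earlier but haven't used it yet; rerunning the plot with it is an easy way to compare how two embedding spaces encode the same pairs. The Pca transformer we imported can likewise project the axes down to two dimensions for a quick visual check. The calls below follow the whatlies documentation, but treat them as a sketch to adapt rather than the only way to do this.

# Same plot, but with the spaCy embeddings for comparison
make_correlation_plot(pairs=[stereotype_pairs, appropriate_pairs, random_pairs], language_model=lang_sp)

# Project the stereotype axes down to 2D and plot them
axes = EmbeddingSet(*calc_axis(stereotype_pairs, lang_ft))
axes.transform(Pca(2)).plot_interactive("pca_0", "pca_1")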

Exercises

Try to answer the following questions to test your knowledge.

  1. We've started measuring bias in this video. Are there other ways to measure bias?
  2. We've started measuring bias along the male/female axis, with a focus on professions. Can you come up with other axes that we might want to concern ourselves with as well?
