Bias in Word Embeddings

Word embeddings are a modern method for pretraining language models, but they also risk introducing bias into a machine learning pipeline. In this series of lessons we'll explain how you can measure the bias in word embeddings, discuss bias mitigation techniques, and also explain why the issue of bias in word embeddings is likely here to stay.
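To give a first intuition for what "measuring bias" can mean, one common probe is to compare word vectors against a gender direction (for example, the difference between the vectors for "he" and "she") using cosine similarity: occupation words that lean strongly towards one end of that direction suggest the embeddings have picked up a stereotype. The sketch below illustrates the idea with plain NumPy; the `vectors` dictionary and the word list are placeholders for whatever pretrained embeddings you load yourself, and this is a simple probe rather than a full bias metric such as WEAT.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def gender_lean(vectors: dict, words: list) -> dict:
    """Score each word by its similarity to a he-minus-she direction.

    Positive scores lean towards "he", negative scores towards "she".
    `vectors` is assumed to map words to NumPy arrays, e.g. loaded from
    GloVe or fastText files; how you load it is up to you.
    """
    direction = vectors["he"] - vectors["she"]
    return {w: cosine_similarity(vectors[w], direction) for w in words}


# Example usage, assuming `vectors` has been loaded:
# scores = gender_lean(vectors, ["nurse", "engineer", "teacher", "surgeon"])
# for word, score in sorted(scores.items(), key=lambda kv: kv[1]):
#     print(f"{word:>10}: {score:+.3f}")
```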

Prerequisites

In this series of videos we'll be using the whatlies library to help explain properties of word embeddings. We assume that you're already somewhat familiar with word embeddings before starting this course.
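As a quick taste of what working with whatlies looks like, here is a minimal sketch that fetches a few embeddings and forms a gender direction with embedding arithmetic. It assumes the spaCy backend and the `en_core_web_md` model are installed; the exact backends and plotting options may differ between whatlies versions, so check the documentation for the version you use.

```python
# Minimal whatlies sketch: fetch a few embeddings and inspect them.
# Assumes `pip install whatlies` plus the `en_core_web_md` spaCy model.
from whatlies.language import SpacyLanguage

lang = SpacyLanguage("en_core_web_md")

# Indexing with a single token gives an Embedding; a list gives an EmbeddingSet.
words = ["king", "queen", "man", "woman", "nurse", "engineer"]
emb_set = lang[words]

# Embeddings support arithmetic, which is handy for probing analogy-style bias.
direction = lang["man"] - lang["woman"]
print(direction.name)          # readable name for the derived embedding
print(direction.vector.shape)  # the underlying NumPy vector

# EmbeddingSets can be plotted against reference words, e.g. in a notebook:
# emb_set.plot_interactive(x_axis="man", y_axis="woman")
```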

