Transformers

This is the fourth and final video on attention mechanisms. In the previous video we introduced multi-head keys, queries and values, and in this video we introduce the final pieces you need to arrive at a transformer.

Video


Links

While making these videos we found these sources very useful to have around, not only because they help with the conceptual understanding but also because some of them offer code examples.

Exercises

Try to answer the following questions to test your knowledge.

  1. What is the purpose of the positional encoding in the transformer architecture? (See the first sketch below.)
  2. Why are transformers easier to parallelize than recurrent neural networks? (See the second sketch below.)
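
As a hint for the first question, here is a minimal NumPy sketch of the sinusoidal positional encoding from the original transformer paper ("Attention Is All You Need"). The function below is our own illustration, not code from the video or from any particular library.

    import numpy as np

    def positional_encoding(seq_len, d_model):
        """Sinusoidal positional encoding of shape (seq_len, d_model)."""
        positions = np.arange(seq_len)[:, np.newaxis]  # (seq_len, 1)
        dims = np.arange(d_model)[np.newaxis, :]       # (1, d_model)
        angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
        angles = positions * angle_rates               # (seq_len, d_model)
        encoding = np.zeros((seq_len, d_model))
        encoding[:, 0::2] = np.sin(angles[:, 0::2])    # sine on even dimensions
        encoding[:, 1::2] = np.cos(angles[:, 1::2])    # cosine on odd dimensions
        return encoding

    # The encoding is added to the token embeddings so that attention,
    # which is otherwise order-blind, can tell positions apart.
    pe = positional_encoding(seq_len=50, d_model=128)
    print(pe.shape)  # (50, 128)

For the second question, the toy comparison below (again our own sketch) contrasts the two architectures: a recurrent network must process tokens one after another because each hidden state depends on the previous one, while self-attention reduces to matrix multiplications over all positions at once.

    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, d = 6, 4
    X = rng.normal(size=(seq_len, d))  # one embedding per token

    # Recurrent network: the loop over positions is inherently sequential.
    W = rng.normal(size=(d, d))
    h = np.zeros(d)
    for x in X:
        h = np.tanh(W @ h + x)

    # Self-attention: all pairwise comparisons happen in one matrix product,
    # so every output row can be computed simultaneously on parallel hardware.
    scores = X @ X.T / np.sqrt(d)                 # (seq_len, seq_len) at once
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    context = weights @ X                         # all positions in parallel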
