
Transformer Feedforward Layers Are Key-Value Memories

A few weeks ago we discussed transformers, looking at interpretability results for the attention layers in the model. The feedforward layers that connect the attention layers are explored far more rarely; they have less internal structure, which makes them less amenable to human interpretation. This 2021 paper from Geva et al. takes an interesting approach, treating the feedforward layers as key-value stores that recall specific human-interpretable patterns from the training data. Blaine Rogers will lead a seminar diving deep into their methodology, asking how accurate their interpretation is and whether the experiments they carry out could tell us either way.

Feed-forward layers constitute two-thirds of a transformer model's parameters, yet their role in the network remains under-explored. We show that feed-forward layers in transformer-based language models operate as key-value memories, where each key correlates with textual patterns in the training examples, and each value induces a distribution over the output vocabulary. Our experiments show that the learned patterns are human-interpretable, and that lower layers tend to capture shallow patterns, while upper layers learn more semantic ones. The values complement the keys' input patterns by inducing output distributions that concentrate probability mass on tokens likely to appear immediately after each pattern, particularly in the upper layers. Finally, we demonstrate that the output of a feed-forward layer is a composition of its memories, which is subsequently refined throughout the model's layers via residual connections to produce the final output distribution.

— http://arxiv.org/abs/2012.14913 
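To make the key-value framing concrete, here is a minimal sketch of how the paper reads a standard two-matrix feedforward block: the rows of the first matrix act as keys matched against the input, and the rows of the second matrix act as values whose weighted sum forms the output. All names and dimensions below are illustrative, not taken from the paper's code.

```python
import numpy as np

d_model, d_ff = 8, 32          # hidden size and inner (memory) size; illustrative
rng = np.random.default_rng(0)

K = rng.standard_normal((d_ff, d_model))  # keys: one pattern detector per memory cell
V = rng.standard_normal((d_ff, d_model))  # values: one output vector per memory cell

def relu(z):
    return np.maximum(z, 0.0)

def feed_forward(x):
    """Key-value view of a position-wise FFN for a single token vector x:
    FF(x) = f(x K^T) V."""
    memory_coeffs = relu(x @ K.T)   # how strongly each key "fires" on this input
    return memory_coeffs @ V        # output is a weighted composition of values

x = rng.standard_normal(d_model)
y = feed_forward(x)

# The coefficients indicate which memory cells contributed most to the output;
# these are the quantities the paper inspects for human-interpretable patterns.
top_memories = np.argsort(relu(x @ K.T))[::-1][:3]
print("strongest memory cells:", top_memories, "output shape:", y.shape)
```

Under this reading, interpretability work amounts to looking at which training-set inputs most strongly activate each key, and which output tokens each value promotes.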
