Attention
- 6. Lilian Weng on attention: self-attention, soft vs hard, global vs local, Neural Turing Machines, pointer networks, transformers, SNAIL, and self-attention GANs.
- 13. Augmented RNNs - covering Neural Turing Machines / attention / adaptive computation time, etc. A general overview, though not as clear as the one below.
- 2. Medium on comparing CNN / RNN / HAN - the ranking will likely change on other data; my impression is that the data used in this article is too good (i.e., too easy).
- 4. Mastery on attention - this makes the whole process clear: score each encoder output against the decoder state, normalize the scores with softmax (the annotation weights), multiply each encoder output by its weight and sum over all of them (i.e., the context vector), and then decode the context vector; see the minimal sketch after this list.
- 1. Soft (as above) vs hard (crisp) attention
- 2. Dropping the hidden output - HAN or AB-BiLSTM
- 3. Attention concatenated to the input vector
- 4. Global vs local attention
- 5. Mastery on attention with LSTM encoding / decoding - a theoretical discussion of many attention architectures. This adds sensible, complementary detail to everything above.
- 1. Encoder: The encoder is responsible for stepping through the input time steps and encoding the entire sequence into a fixed-length vector called a context vector.
- 2. Decoder: The decoder is responsible for stepping through the output time steps while reading from the context vector.
- 3. A problem with this architecture is that performance is poor on long input or output sequences; the cause is believed to be the fixed-size internal representation used by the encoder.
- 1. Encoder-decoder
- 2. Recursive
- 3. Encoder-decoder with recursive
- 6. Code on GitHub:
Note: word-level, then sentence-level embeddings.
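To make the scoring / context-vector step from items 4 and 5 above concrete, here is a minimal NumPy sketch of one decoding step with dot-product attention (the shapes and the dot-product scoring function are illustrative assumptions; the tutorials above also cover additive / concat scoring):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(encoder_states, decoder_state):
    """encoder_states: (T, d) hidden states; decoder_state: (d,) current query."""
    scores = encoder_states @ decoder_state   # (T,) alignment scores
    weights = softmax(scores)                 # (T,) annotation weights, sum to 1
    context = weights @ encoder_states        # (d,) weighted sum = context vector
    return context, weights

# Toy usage: 5 encoder time steps, hidden size 8; the decoder would then
# condition on (context, previous output) to produce the next token.
enc = np.random.randn(5, 8)
dec = np.random.randn(8)
context, weights = attention_context(enc, dec)
print(weights.round(3), context.shape)
```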
BERT / RoBERTa
- 1. Do attention heads in BERT / RoBERTa track syntactic dependencies? - TL;DR: The attention weights between tokens in BERT/RoBERTa bear similarity to some syntactic dependency relations, but the results are less conclusive than we’d like, as they don’t significantly outperform linguistically uninformed baselines for all types of dependency relations. In the case of MAX, our results indicate that specific heads in the BERT models may correspond to certain dependency relations, whereas for MST, we find much less support for “generalist” heads whose attention weights correspond to a full syntactic dependency structure.
In both cases, the metrics do not appear to be representative of the extent of linguistic knowledge learned by the BERT models, given their strong performance on many NLP tasks. Hence, our takeaway is that while we can tease out some structure from the attention weights of BERT models using the above methods, studying the attention weights alone is unlikely to give us the full picture of BERT’s strength in processing natural language.
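As a quick way to poke at this yourself, here is a minimal sketch (not the paper's exact MAX/MST evaluation) of pulling per-head attention weights out of BERT with the Hugging Face transformers library and reading off, for each token, which token it attends to most; the model name and the layer/head indices are arbitrary choices:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "The keys to the cabinet are on the table"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shaped (batch, heads, seq_len, seq_len).
layer, head = 7, 10                                   # arbitrary head to inspect
attn = outputs.attentions[layer][0, head]             # (seq_len, seq_len)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

for i, tok in enumerate(tokens):
    j = attn[i].argmax().item()                       # token this head attends to most
    print(f"{tok:>10} -> {tokens[j]}  ({attn[i, j].item():.2f})")
```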
- 1. TRANSFORMERS
- 12. Large memory layers with product keys - this memory layer allows us to tackle very large-scale language modeling tasks. In our experiments we consider a dataset with up to 30 billion words, and we plug our memory layer into a state-of-the-art transformer-based architecture. In particular, we found that a memory-augmented model with only 12 layers outperforms a baseline transformer model with 24 layers, while being twice as fast at inference time.
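A simplified, single-head PyTorch sketch of the product-key lookup is below; the dimensions, the value of k, and the omitted query network / multi-head machinery are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProductKeyMemory(nn.Module):
    """Select k value slots out of n_sub_keys**2 using two small sub-key sets."""
    def __init__(self, dim=256, n_sub_keys=32, k=8):
        super().__init__()
        self.half, self.k, self.n_sub_keys = dim // 2, k, n_sub_keys
        self.sub_keys1 = nn.Parameter(torch.randn(n_sub_keys, self.half))
        self.sub_keys2 = nn.Parameter(torch.randn(n_sub_keys, self.half))
        self.values = nn.Embedding(n_sub_keys ** 2, dim)   # the large memory table

    def forward(self, query):                              # query: (batch, dim)
        q1, q2 = query[:, :self.half], query[:, self.half:]
        s1, s2 = q1 @ self.sub_keys1.t(), q2 @ self.sub_keys2.t()  # (batch, n_sub_keys)
        top1, idx1 = s1.topk(self.k, dim=-1)               # top-k per half-space
        top2, idx2 = s2.topk(self.k, dim=-1)
        # Cartesian product of the two top-k sets: k*k candidate slots,
        # each scored by the sum of its two sub-scores.
        scores = (top1.unsqueeze(2) + top2.unsqueeze(1)).flatten(1)                   # (batch, k*k)
        slots = (idx1.unsqueeze(2) * self.n_sub_keys + idx2.unsqueeze(1)).flatten(1)  # (batch, k*k)
        best, pos = scores.topk(self.k, dim=-1)            # final top-k among candidates
        chosen = slots.gather(1, pos)                      # (batch, k) memory indices
        weights = F.softmax(best, dim=-1)                  # (batch, k)
        return (weights.unsqueeze(-1) * self.values(chosen)).sum(dim=1)  # (batch, dim)

# Toy usage: a memory of 32 * 32 = 1024 slots, queried with a batch of 4 vectors.
mem = ProductKeyMemory()
print(mem(torch.randn(4, 256)).shape)   # torch.Size([4, 256])
```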
- 13. Adaptively Sparse Transformers - α-entmax: a differentiable generalization of softmax that allows low-scoring words to receive precisely zero weight. Moreover, we derive a method to automatically learn the α parameter (which controls the shape and sparsity of α-entmax), allowing attention heads to choose between focused or spread-out behavior. Our adaptively sparse Transformer improves interpretability and head diversity when compared to softmax Transformers on machine translation datasets.
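To illustrate the "exact zeros" behavior, here is a minimal NumPy sketch of sparsemax, the α = 2 special case of α-entmax (the general α-entmax with a learned α per head, as in the paper above, requires a bisection-based algorithm that this sketch does not implement):

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of the score vector z onto the probability simplex.
    Low-scoring entries receive exactly zero weight, unlike softmax."""
    z_sorted = np.sort(z)[::-1]                        # scores in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum                # entries kept in the support
    k_z = k[support][-1]                               # support size
    tau = (cumsum[k_z - 1] - 1) / k_z                  # threshold
    return np.maximum(z - tau, 0.0)

scores = np.array([1.2, 0.8, 0.1, -1.0])
print(sparsemax(scores))                      # ~[0.7, 0.3, 0.0, 0.0]: exact zeros
print(np.exp(scores) / np.exp(scores).sum())  # softmax: every entry stays > 0
```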