Physics of Language Models: Part 1, Context-Free Grammar
Zeyuan Allen-Zhu

Published on Nov 6, 2023

How can we interpret the inner workings of transformers? Note that "induction heads" only explain shallow language tasks such as sequence copying. To push the interpretation deeper, we reverse engineer how GPTs learn certain context-free grammars (CFGs), synthetically constructed grammars whose parse trees have hierarchical logical structure. This uncovers that GPTs learn to perform dynamic programming (and more)!

https://arxiv.org/abs/2305.13673
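
For intuition, here is a minimal sketch of how strings can be sampled from a synthetic CFG by recursively expanding non-terminals. The rule set, symbol names, and depth limit below are illustrative assumptions, not the grammars used in the paper.

```python
# Illustrative sketch: sampling strings from a small synthetic CFG.
# The grammar below is a made-up example (uppercase = non-terminals,
# lowercase = terminals); it is NOT the construction from the paper.
import random

GRAMMAR = {
    "ROOT": [["A", "B"], ["B", "A", "A"]],
    "A":    [["a", "B"], ["a", "a"]],
    "B":    [["b"], ["b", "A"]],
}

def sample(symbol="ROOT", depth=0, max_depth=12):
    """Recursively expand `symbol` into a list of terminal tokens."""
    if symbol not in GRAMMAR:      # terminal symbol: emit as-is
        return [symbol]
    if depth >= max_depth:         # guard against unbounded recursion
        return []
    rule = random.choice(GRAMMAR[symbol])
    out = []
    for s in rule:
        out.extend(sample(s, depth + 1, max_depth))
    return out

if __name__ == "__main__":
    for _ in range(3):
        print(" ".join(sample()))
```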

Timecodes
0:00 - Prelude
4:51 - Motivations & interpretability of LLMs
8:54 - Definitions
13:46 - Synthetic CFGs
21:44 - Result 1: transformer learns such CFGs
28:48 - Result 1.2: generation diversity
35:43 - Result 1.3: generation distribution
38:24 - Result 1: Q&As
41:16 - Result 2: how do transformers learn CFGs
52:08 - Result 2.1: transformer learns NT ancestors and NT boundaries
55:53 - Result 2.2: transformer learns NT ancestors at NT boundaries
59:13 - Result 2: more probing results
1:02:02 - Result 3: how do transformers learn such NTs
1:14:00 - Corollary: transformer learns to do dynamic programming
1:22:18 - Extension 1: implicit CFGs
1:25:04 - Extension 2: robust / corrupted CFGs
1:31:47 - Extension 3: comparison to English CFGs
1:34:30 - Extension 4: other synthetic CFGs
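
As background for the dynamic-programming corollary at 1:14:00, here is a minimal CYK-style sketch of the classical dynamic program that decides CFG membership. The Chomsky-normal-form grammar and the query string are illustrative assumptions of mine, not the paper's construction.

```python
# Illustrative CYK sketch: classical dynamic programming for CFG membership.
# The grammar (Chomsky normal form) is a made-up example, not the paper's.
from itertools import product

BINARY = {            # non-terminal -> list of (non-terminal, non-terminal)
    "S": [("A", "B")],
    "A": [("A", "A")],
    "B": [("B", "B")],
}
UNARY = {             # non-terminal -> list of terminals
    "A": ["a"],
    "B": ["b"],
}

def cyk(tokens, start="S"):
    """Return True iff `tokens` is derivable from `start`."""
    n = len(tokens)
    # table[i][j] = set of non-terminals deriving tokens[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        for nt, terms in UNARY.items():
            if tok in terms:
                table[i][0].add(nt)
    for span in range(2, n + 1):             # span length
        for i in range(n - span + 1):        # span start
            for split in range(1, span):     # split point
                left = table[i][split - 1]
                right = table[i + split][span - split - 1]
                for nt, rules in BINARY.items():
                    if any((l, r) in rules for l, r in product(left, right)):
                        table[i][span - 1].add(nt)
    return start in table[0][n - 1]

print(cyk(list("aabb")))   # example query: True for this grammar
```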
