
Commit 9321ca0: Release 2.0
2 parents: f4287b6 + 43b67a4


70 files changed: +3250 additions, -22029 deletions

.github/PULL_REQUEST_TEMPLATE.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -2,6 +2,8 @@
 When opening your PR, please make sure to only request a merge to `main` when you have found a bug in the currently released version of TransformerLens. All other PRs should go to `dev` in order to keep the docs in sync with the currently released version.
 
 Please also make sure the branch you are attempting to merge from is not named `main` or `dev`. Branches with these names from a different remote cause conflicting-name issues when we periodically attempt to bring your PR up to date with the current stable TransformerLens source.
+
+If your PR primarily affects docs, make sure it has the string "docs" in its name. Building docs is disabled by default to save CI time, but the job has been configured to run whenever a branch with the word "docs" in it is being merged.
 -->
 # Description
```

.github/workflows/checks.yml

Lines changed: 4 additions & 3 deletions
```diff
@@ -4,6 +4,7 @@ on:
   push:
     branches:
       - main
+      - dev*
     paths:
       - "**" # Include all files by default
       - "!.devcontainer/**"
@@ -15,6 +16,7 @@ on:
   pull_request:
     branches:
       - main
+      - dev*
     paths:
       - "**"
       - "!.devcontainer/**"
@@ -125,8 +127,7 @@ jobs:
         - "Exploratory_Analysis_Demo"
         # - "Grokking_Demo"
         # - "Head_Detector_Demo"
-        # - "Hooked_SAE_Transformer_Demo"
-        # - "Interactive_Neuroscope"
+        - "Interactive_Neuroscope"
         # - "LLaMA"
         # - "LLaMA2_GPU_Quantized"
         - "Main_Demo"
@@ -165,7 +166,7 @@ jobs:
     # When running on merge to main, it builds the docs and then another job deploys them
     name: 'Build Docs'
     runs-on: ubuntu-latest
-    if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/dev')
+    if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/dev') || contains(github.head_ref, 'docs')
     needs: code-checks
     steps:
       - uses: actions/checkout@v4
```

README.md

Lines changed: 9 additions & 3 deletions
```diff
@@ -15,6 +15,15 @@ A Library for Mechanistic Interpretability of Generative Language Models.
 [![Read the Docs Here](https://img.shields.io/badge/-Read%20the%20Docs%20Here-blue?style=for-the-badge&logo=Read-the-Docs&logoColor=white&link=https://TransformerLensOrg.github.io/TransformerLens/)](https://TransformerLensOrg.github.io/TransformerLens/)
 
+| :exclamation: HookedSAETransformer Removed |
+|-----------------------------------------------|
+
+Hooked SAE has been removed from TransformerLens 2.0. The functionality is being moved to
+[SAELens](http://github.com/jbloomAus/SAELens). For more information on this release, please see the
+accompanying
+[announcement](https://transformerlensorg.github.io/TransformerLens/content/news/release-2.0.html)
+for details on what's new, and the future of TransformerLens.
+
 This is a library for doing [mechanistic
 interpretability](https://distill.pub/2020/circuits/zoom-in/) of GPT-2 Style language models. The
 goal of mechanistic interpretability is to take a trained model and reverse engineer the algorithms
@@ -24,8 +33,6 @@ TransformerLens lets you load in 50+ different open source language models, and
 activations of the model to you. You can cache any internal activation in the model, and add in
 functions to edit, remove or replace these activations as the model runs.
 
-The library also now supports mechanistic interpretability with SAEs (sparse autoencoders)! With [HookedSAETransformer](https://colab.research.google.com/github/TransformerLensOrg/TransformerLens/blob/main/demos/Hooked_SAE_Transformer_Demo.ipynb), you can splice in SAEs during inference and cache + intervene on SAE activations. We recommend [SAELens](https://github.com/jbloomAus/SAELens) (built on top of TransformerLens) for training SAEs.
-
 ## Quick Start
 
 ### Install
@@ -51,7 +58,6 @@ logits, activations = model.run_with_cache("Hello World")
 * [Introduction to the Library and Mech Interp](https://arena-ch1-transformers.streamlit.app/[1.2]_Intro_to_Mech_Interp)
 * [Demo of Main TransformerLens Features](https://neelnanda.io/transformer-lens-demo)
-* [Demo of HookedSAETransformer Features](https://colab.research.google.com/github/TransformerLensOrg/TransformerLens/blob/main/demos/Hooked_SAE_Transformer_Demo.ipynb)
 
 ## Gallery
```

demos/Config_Overhaul.ipynb

Lines changed: 251 additions & 0 deletions
(new file)

# Overview

The way configuration is currently designed in TransformerLens has a number of limitations. It does not allow outside users to pass in configurations that are not officially supported, and it is very bug prone: something as simple as a typo can give you a massive headache. There are also a number of hidden rules that are not clearly documented, and that can stay hidden until different pieces of TransformerLens are activated. Allowing an optional configuration object to be passed in with no further changes would solve a couple of these problems, but not the bigger issues, and it would introduce new ones: users could pass in unsupported architectures without any clear way to be told what isn't supported.

My proposal for resolving all of these problems is to fundamentally revamp the configuration to allow for something I like to call configuration composition. From a technical perspective, this involves creating a centralized class that describes every configuration supported by TransformerLens. This class would then be used to construct the specific configurations for all currently supported models. It would let anyone see, in a single place, every configuration feature TransformerLens supports, and read the code to understand how to create their own configuration, whether to submit a new model to TransformerLens or to configure a model that is not officially supported when TransformerLens already supports all of its architectural pieces separately.

This could simply be an overhaul of the existing HookedTransformerConfig. Everything I am describing here could be made compatible with that class to give it a more usable interface for the end user to interact with directly. At the moment, that class is not really built to be interacted with; it is instead used as a wrapper around spreading configured anonymous objects. Overhauling that class is a viable path, but keeping it as it is and adding a new class meant for the end user would maintain compatibility, avoid refactors, and keep model configuration focused solely on putting together configuration for models, as opposed to configuring the full settings needed by HookedTransformer, which includes checking the available environment.

A very unscientific, basic example of how this would look in end-user code appears immediately below. I will delve into the details of each piece in this document.
```python
config = ModelConfig(
    d_model=4096,
    d_head=8192 // 64,
    n_heads=64,
    act_fn="silu",
    # Other universally required properties across all models go here in the constructor
)
# Enabling specific features not universal among all models
config.enabled_gated_mlp()
# Customizing optional attributes
config.set_positional_embedding_type("alibi")

# and so on, until the full configuration is set
```
## The constructor

The first piece I want to talk about is what gets injected into the constructor. It should take just about everything absolutely required by all models. This keeps the code easy to understand without adding too much clutter. All fields should be required; if it is ever suggested that a constructor field should be optional, that is probably an indication that it should instead be configured by a dedicated function elsewhere in the class. An example of what this would look like can be seen below.
```python
from typing import Literal

# Make it easy for someone to see which activation functions are supported;
# this would be moved here from HookedTransformerConfig
ActivationFunction = Literal["silu", "gelu"]

class ModelConfig:
    def __init__(
        self,
        d_model: int,
        eps: float,
        act_fn: ActivationFunction,
        remaining_required_attributes,
    ):
        self.d_model = d_model
        self.eps = eps
        self.act_fn = act_fn
        # Set defaults for any remaining supported attributes that are not required here
        self.gated_mlp = False
```
## Boolean Variables

Within the TransformerLens config, any boolean variable is essentially a feature flag. All features would have default values at construction time, most likely set to False, and would then be toggled on with an `enable_feature` call on the config object. Having these functions makes it very clear, to someone less familiar with TransformerLens, which features are available. It also allows us to decorate these calls, which is very important: in some cases, if one boolean is true, another cannot be, but that requirement is currently not visible anywhere without analyzing the code. Decorating these functions lets us make those sorts of bugs impossible. I will use `gated_mlp` as the example here, but it is not meant to be a real implementation.
```python
def enabled_gated_mlp(self: ModelConfig) -> ModelConfig:
    self.gated_mlp = True
    # Configure any side effects caused by enabling a feature
    self.another_feature = False
    # Returning self allows someone to chain together config calls
    return self

ModelConfig.enabled_gated_mlp = enabled_gated_mlp
```
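One way the decoration mentioned above could work is sketched below. This is only an illustration: the `mutually_exclusive` decorator, the `parallel_mlp` flag, and the stripped-down `Config` class are hypothetical names invented for this sketch, not part of TransformerLens.

```python
from functools import wraps

def mutually_exclusive(*other_flags):
    """Decorator for feature-flag setters: enabling this feature forces the listed flags off."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(self, *args, **kwargs):
            for flag in other_flags:
                setattr(self, flag, False)  # enforce the exclusion before enabling
            return fn(self, *args, **kwargs)
        return wrapper
    return decorator

class Config:
    def __init__(self):
        self.gated_mlp = False
        self.parallel_mlp = False  # hypothetical flag that conflicts with gated_mlp

    @mutually_exclusive("parallel_mlp")
    def enable_gated_mlp(self):
        self.gated_mlp = True
        return self  # chainable

config = Config()
config.parallel_mlp = True
config.enable_gated_mlp()
# gated_mlp is now True, and parallel_mlp has been forced back to False
```

A decorator like this keeps the exclusion rule next to the setter itself, so the constraint is documented exactly where the feature is enabled.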
## Additional Options

Any other option would similarly have its own configuration function. This allows for the same kind of decoration as the feature flags, and it also, in a way, documents the architectural capabilities of TransformerLens in a single place. If a group of options is always required together, this gives us a way to require all of them at once, as opposed to having each configured at the root level. It also lets us adjust other attributes that are affected as a side effect of setting some values, which again makes it harder to introduce bugs and produces code that documents itself. Another off-the-cuff example of something like this can be seen below.
```python
def set_rotary_dim(self: ModelConfig, rotary_dim: int) -> ModelConfig:
    self.rotary_dim = rotary_dim
    # Additional settings that seem to be present whenever rotary_dim is set
    self.positional_embedding_type = "rotary"
    self.rotary_adjacent_pairs = False
    return self

ModelConfig.set_rotary_dim = set_rotary_dim
```
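Because each of these setters returns `self`, a full configuration can be composed as a single chained expression. A minimal self-contained sketch of that usage follows; the class and attribute names are illustrative stand-ins, not the real TransformerLens API.

```python
class ModelConfig:
    """Illustrative config class whose setters return self, enabling chaining."""

    def __init__(self, d_model: int, n_heads: int, act_fn: str):
        self.d_model = d_model
        self.n_heads = n_heads
        self.act_fn = act_fn
        # Defaults for optional features
        self.gated_mlp = False
        self.positional_embedding_type = "standard"

    def enabled_gated_mlp(self) -> "ModelConfig":
        self.gated_mlp = True
        return self

    def set_rotary_dim(self, rotary_dim: int) -> "ModelConfig":
        self.rotary_dim = rotary_dim
        # Side effects that always accompany rotary_dim
        self.positional_embedding_type = "rotary"
        self.rotary_adjacent_pairs = False
        return self

# The whole composition reads as a single pipeline
config = (
    ModelConfig(d_model=4096, n_heads=64, act_fn="silu")
    .enabled_gated_mlp()
    .set_rotary_dim(64)
)
```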
## Config Final Thoughts

The best way to describe this idea is configuration composition: the user composes a model configuration by setting the base and then combining various options via predefined functions. Doing it this way has a lot of advantages. One is that far less memorization of how architectural options combine is needed. Maybe it's not that hard to remember that `rotary_adjacent_pairs` should be False when `rotary_dim` is set, but these sorts of combinations accumulate. Putting an interface in front of them gives everyone a place to look to see how each part of the configuration works in isolation, without needing to memorize a large set of rules.

This would also let us more easily mock out fake configurations and enable specific features in order to test that functionality in isolation. It should likewise make it easier to understand, at a glance, all model compatibilities with TransformerLens, since there would be a single file where they are all listed out and documented, along with the compatibility limitations.

As for compatibility, this change would be 100% compatible with the existing structure. The objects I am suggesting are abstractions over the existing configuration dictionaries, for the purpose of communication and ease of use, which means they can be passed around just like the current anonymous dictionaries.
## Further Changes

With this in place, there are a number of changes I would like to make to the `loading_from_pretrained` file to get it ready for rapidly supporting new models. The biggest change would be to break what is now a configuration dictionary for every model out into its own module, where one of these configuration objects is constructed. That object would then be exposed so it can be imported into `loading_from_pretrained`. We would then create a dictionary mapping each model's official name to its configuration object, completely eliminating the big giant if/else statement and replacing it with a simple dictionary lookup. The configurations themselves would live in a directory structure like so:

config/ <- where the ModelConfig file lives
config/meta-llama/ <- directory for all models from the group
config/meta-llama/Llama-2-13b.py <- name matching Hugging Face, to make the configuration really easy to find
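A minimal sketch of that lookup, with a plain dict standing in for a constructed config object. The `MODEL_CONFIGS` registry, the `get_pretrained_config` helper, and the sample values are all hypothetical illustrations, not existing TransformerLens code.

```python
# Each per-model module (e.g. config/meta-llama/Llama-2-13b.py in the proposed
# layout) would construct and expose one config object; a plain dict stands in here.
llama_2_13b_config = {"d_model": 5120, "n_heads": 40, "act_fn": "silu"}

# In loading_from_pretrained: official model name -> config object, replacing
# the giant if/else chain with a single dictionary lookup.
MODEL_CONFIGS = {
    "meta-llama/Llama-2-13b": llama_2_13b_config,
}

def get_pretrained_config(official_name: str) -> dict:
    """Return the registered config, with a clear error for unsupported models."""
    try:
        return MODEL_CONFIGS[official_name]
    except KeyError:
        raise ValueError(f"Model not supported: {official_name}") from None
```

An unknown name fails with an explicit error rather than falling through a chain of elif branches.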
## Impact on Testing

This change would let us interact with the configuration objects directly, so we can more easily assert that configurations are set properly, and more easily access those configurations in tests for the purpose of writing better unit tests.
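For example, a test could build a minimal config, enable a single feature, and assert its side effects directly. A self-contained sketch using the hypothetical API from the examples above:

```python
class ModelConfig:
    """Minimal stand-in for the proposed config class, for illustration only."""

    def __init__(self):
        self.gated_mlp = False
        self.another_feature = True

    def enabled_gated_mlp(self) -> "ModelConfig":
        self.gated_mlp = True
        self.another_feature = False  # side effect enforced by the setter
        return self

def test_enabled_gated_mlp_disables_conflicting_feature():
    # The config object can be built and inspected in isolation, no model needed
    config = ModelConfig().enabled_gated_mlp()
    assert config.gated_mlp is True
    assert config.another_feature is False

test_enabled_gated_mlp_disables_conflicting_feature()
```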
## Summary

This change should solve a lot of problems. It may at first be a big change from what currently exists, but in time I think most people will find it more elegant and easier to understand.
