Quickstart
Get started with the Goodfire Ember SDK
Prerequisite: You’ll need a Goodfire API key to follow this guide. Get one through our platform or contact support.
Ember is a hosted API/SDK that lets you shape AI model behavior by directly controlling a model’s internal units of computation, or “features”. With Ember, you can modify features to precisely control model outputs, or use them as building blocks for tasks like classification.
In this quickstart, you’ll learn how to:
- Find features that matter for your specific needs
- Edit features to create model variants
- Discover which features are active in your data
- Save and load your model variants
Initialize the SDK
Our sampling API is OpenAI compatible, making it easy to integrate.
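A minimal initialization sketch. The model name is an example; substitute any model the platform supports, and replace the placeholder API key with your own:

```python
import goodfire

# Your API key from the Goodfire platform
client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")

# A variant wraps a base model plus any feature edits you apply later
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")

# The sampling API mirrors OpenAI's chat completions interface
for token in client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello!"}],
    model=variant,
    stream=True,
    max_completion_tokens=50,
):
    print(token.choices[0].delta.content, end="")
```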
Editing features to create model variants
How to find relevant features for edits
There are three ways to find features you may want to modify:
- Auto Steer: Simply describe what you want, and let the API automatically select and adjust feature weights
- Feature Search: Find features using semantic search
- Contrastive Search: Identify relevant features by comparing two different datasets
Let’s explore each method in detail.
Auto Steer
Auto steering automatically finds and adjusts feature weights to achieve your desired behavior. Simply provide a short prompt describing what you want, and autosteering will:
- Find the relevant features
- Set appropriate feature weights
- Return a FeatureEdits object that you can set directly
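A sketch of the auto-steering flow; the specification string is an example, and the exact signature may differ across SDK releases:

```python
import goodfire

client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")

# Describe the behavior you want; AutoSteer returns a FeatureEdits object
edits = client.features.AutoSteer(
    specification="be funny",  # example specification
    model=variant,
)

# Apply the suggested edits directly to the variant
variant.set(edits)
print(edits)
```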
Now that we have a few funny edits, let’s see how the model responds!
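Sampling with the steered variant might look like this (the user prompt is illustrative):

```python
import goodfire

client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")
variant.set(client.features.AutoSteer(specification="be funny", model=variant))

response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Tell me about your day."}],
    model=variant,
)
print(response.choices[0].message["content"])
```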
The model automatically added puns/jokes, even though we didn’t specify anything about comedy in our prompt.
Feature search
Let’s reset the model to its default state (without any feature edits).
Feature search helps you explore and discover what capabilities your model has. It can be useful when you want to browse through available features.
When setting feature weights manually, start with 0.5 to enhance a feature and -0.3 to ablate a feature. When setting multiple features, you may need to tune down the weights.
Feel free to play around with the weights and features to see how the model responds.
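A sketch of searching for features and setting a weight manually; the query string and weight are illustrative:

```python
import goodfire

client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")
variant.reset()  # clear any existing feature edits

# Semantic search for features related to pirates (example query)
pirate_features = client.features.search("pirate", model=variant, top_k=10)
print(pirate_features)

# Enhance the first matching feature; 0.5 is a good starting weight
variant.set(pirate_features[0], 0.5)
```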
(Advanced) Look at a feature’s nearest neighbors
Get neighboring features by comparing them to either individual features or groups of features. When comparing to an individual feature, neighbors() looks at similarity in the embedding space; when comparing to a group, neighbors() finds the features closest to the group’s centroid.
neighbors() helps you understand feature relationships beyond just their labels, and can reveal which features might work best for your intended model adjustments.
This lets you discover additional features similar to ones you’ve already found.
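A hedged sketch of a neighbors lookup (the exact signature may differ; check the SDK reference):

```python
import goodfire

client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")

pirate_features = client.features.search("pirate", model=variant, top_k=3)

# Nearest neighbors of a single feature (embedding-space similarity)
neighbors = client.features.neighbors(pirate_features[0], model=variant, top_k=5)
print(neighbors)
```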
Contrastive Search
Contrastive search lets you discover relevant features in a data-driven way.
Provide two datasets of chat examples:
- dataset_1: Examples of behavior you want to avoid
- dataset_2: Examples of behavior you want to encourage
Examples are paired: the first example in dataset_1 contrasts with the first example in dataset_2, and so on.
Reranking
Contrastive search becomes more powerful when combined with reranking. First, contrastive search finds features that distinguish between your datasets. Then, reranking sorts these features using your description of the desired behavior.
This two-step process ensures you get features that are both:
- Mechanistically useful (from contrastive search)
- Aligned with your goals (from reranking)
Let’s specify two conversation datasets. In the first, the assistant gives a typical helpful response; in the second, the assistant replies in jokes.
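A sketch of contrastive search followed by reranking; the conversations and `top_k` values are illustrative:

```python
import goodfire

client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")

# dataset_1: behavior to avoid (plain helpful answers)
dataset_1 = [[
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing well, thank you! How can I help?"},
]]
# dataset_2: behavior to encourage (answers in jokes)
dataset_2 = [[
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'd tell you, but the punchline needs work!"},
]]

helpful_assistant_features, joke_features = client.features.contrast(
    dataset_1=dataset_1,
    dataset_2=dataset_2,
    model=variant,
    top_k=30,
)

# Rerank the contrastive results by a natural-language description of the goal
joke_features = client.features.rerank(
    features=joke_features,
    query="funny",
    model=variant,
    top_k=5,
)
print(joke_features)
```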
We now have a list of features to consider adding. Let’s set some plausible-looking ones from joke_features.
Note that we could also explore removing some of the helpful_assistant features.
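Continuing from the contrastive-search step (`client`, `variant`, and `joke_features` as above), applying a couple of the discovered features might look like this; the weights are illustrative:

```python
# Enhance two of the joke features found by contrastive search
variant.set(joke_features[0], 0.5)
variant.set(joke_features[1], 0.4)

response = client.chat.completions.create(
    messages=[{"role": "user", "content": "What's the weather like today?"}],
    model=variant,
)
print(response.choices[0].message["content"])
```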
(Advanced) Conditional logic for feature edits
You can establish relationships between different features (or feature groups) using conditional interventions.
First, let’s reset the variant and pick out the funny features.
Now, let’s find features where the model is talking like a pirate.
Now, let’s set up behavior so that when the model is talking like a pirate, it will be funny.
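A hedged sketch of a conditional intervention; the threshold, weights, and exact `set_when` signature are assumptions to verify against the SDK reference:

```python
import goodfire

client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")
variant.reset()

funny_features = client.features.search("funny", model=variant, top_k=5)
pirate_features = client.features.search("talking like a pirate", model=variant, top_k=5)

# When the pirate feature activates strongly, boost a funny feature
variant.set_when(pirate_features[0] > 0.75, {funny_features[0]: 0.5})
```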
Say we decide the model isn’t very good at pirate jokes. Let’s set up behavior to stop generation altogether if the pirate features are too strong.
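Continuing from the setup above, an abort condition might be sketched like this (the exception name and path are assumptions; check the SDK reference):

```python
import goodfire

# Stop generation entirely if the pirate feature activates too strongly
variant.abort_when(pirate_features[0] > 0.75)

try:
    client.chat.completions.create(
        messages=[{"role": "user", "content": "Tell me a pirate joke."}],
        model=variant,
    )
except goodfire.exceptions.InferenceAbortedException:
    print("Generation aborted: pirate features too strong")
```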
If you aren’t sure of the features you want to condition on, use AutoConditional with a specified prompt to get back an automatically generated condition.
Discover which features are active in your data
Working with a conversation context
You can inspect which features are activating in a given conversation with the inspect API, which returns a context object.
Say you want to understand what model features are important when the model tells a joke. You can pass in the same joke conversation dataset to the inspect endpoint.
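A sketch of inspecting a joke conversation (the conversation content is illustrative):

```python
import goodfire

client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")

joke_conversation = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'd tell you, but the punchline needs work!"},
]

# Returns a context object describing feature activations in the conversation
context = client.features.inspect(joke_conversation, model=variant)
```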
From the context object, you can access a lookup object which can be used to look at the set of feature labels in the context.
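Continuing from the inspect call above, the lookup might be accessed like this (method name assumed; see the SDK reference):

```python
# Map of feature IDs to the Feature objects present in this context
lookup = context.lookup()
print(lookup)
```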
You can select the top k activating features in the context, ranked by activation strength. There are features related to jokes and tongue twisters, among other syntactical features.
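Continuing with the same context object, retrieving the top activating features might look like:

```python
# Top k features across the whole conversation, by activation strength
top_features = context.top(k=5)
print(top_features)
```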
You can also inspect feature activations at the level of individual tokens. Let’s see what features are active at the punchline token.
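A hedged sketch of token-level inspection; the `tokens` accessor and per-token `inspect()` are assumptions to verify against the SDK reference:

```python
# Inspect activations at a single token (here, the final assistant token)
punchline_token = context.tokens[-1]
print(punchline_token.inspect().top(k=5))
```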
(Advanced) Look at next token logits
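A hedged sketch, assuming the SDK exposes a logits endpoint along the lines of client.chat.logits (the name and signature are assumptions; check the API reference):

```python
import goodfire

client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")

# Next-token logits for a partial conversation (assumed endpoint)
logits = client.chat.logits(
    messages=[{"role": "user", "content": "Tell me a joke about pirates."}],
    model=variant,
)
print(logits)
```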
Get feature activation vectors for machine learning tasks
To run a machine learning pipeline at the feature level (for instance, for humor detection), you can directly export features using client.features.activations to get a matrix, or retrieve a sparse vector for a specific FeatureGroup.
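A sketch of exporting an activation matrix (the exact return type and signature may differ from the current SDK release):

```python
import goodfire

client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")

conversation = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'd tell you, but the punchline needs work!"},
]

# Per-token feature activation matrix for downstream ML tasks
matrix = client.features.activations(messages=conversation, model=variant)
print(matrix.shape)
```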
Inspecting specific features
There may be specific features whose activation patterns you’re interested in exploring. In this case, you can specify features such as humor_features and pass them into the features argument of inspect.
Now, let’s see if these features are activating in the joke conversation.
Now you can retrieve the top k activating humor features in the context. This might be a more interesting set of features for downstream tasks.
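Continuing from the inspect example (`client`, `variant`, and `joke_conversation` as above), a humor_features group could be sketched from a search and passed to inspect; the query is illustrative:

```python
# Hypothetical humor feature group, built via semantic search
humor_features = client.features.search("humor", model=variant, top_k=10)

context = client.features.inspect(
    joke_conversation,
    model=variant,
    features=humor_features,  # restrict inspection to these features
)
print(context.top(k=5))
```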
Save and load your model variants
You can serialize a variant to JSON format for saving.
And load a variant from JSON format.
Now, let’s generate a response with the loaded variant.
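A sketch of the save/load round trip (method names follow the JSON serialization described above; verify against the SDK reference):

```python
import goodfire

client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")

# Serialize the variant (including any feature edits) to JSON
variant_json = variant.json()

# ...later, load it back
loaded_variant = goodfire.Variant.from_json(variant_json)

response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello!"}],
    model=loaded_variant,
)
print(response.choices[0].message["content"])
```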
Using OpenAI SDK
You can also work directly with the OpenAI SDK for inference since our endpoint is fully compatible.
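A sketch using the OpenAI Python SDK against the compatible endpoint; the base URL below is an assumption, so check the docs for the exact value:

```python
from openai import OpenAI

# Point the OpenAI client at Goodfire's OpenAI-compatible endpoint
oai_client = OpenAI(
    api_key="YOUR_GOODFIRE_API_KEY",
    base_url="https://api.goodfire.ai/api/inference/v1",  # assumed URL; verify in the docs
)

response = oai_client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```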
For more advanced usage and detailed API reference, check out our SDK reference and example notebooks.