
Complementary Protein Property Control from Weight and Activation Spaces Using ML Techniques

George Mogilnikov (1011277798)

Project Timeline

May 2025 - Present

OVERVIEW

This project explores how to steer protein language models toward specific biochemical properties using unsupervised edits in weight and activation spaces. Note: the project's code and model details cannot be shown because they are covered by an NDA.

HIGHLIGHTS

NeurIPS 2025 submission

SKILLS

Python (NumPy, pandas, PyTorch) · Git · LaTeX · Research

SUPPORTING MATERIALS

Additional Details

Problem Statement

Protein language models (PLMs) have demonstrated strong generative capabilities, but steering them toward precise biochemical properties remains challenging. While existing editing techniques can shift model behavior, it is unclear which editing space—weight space or activation space—is most effective for controlling specific protein properties such as charge, hydrophobicity, aromaticity, instability, molecular weight, and isoelectric point.

Thus, the core problem is:

How can we achieve reliable, interpretable, and property-specific control over protein sequence generation using unsupervised editing methods in weight and activation spaces across multiple PLMs?

Procedure

We compared two unsupervised model-editing methods—Task Arithmetic (TA) and Sparse Autoencoders (SAE)—across three major PLMs.

Task Arithmetic performs edits directly in weight space by subtracting a task vector derived from non-example sequences.
SAEs perform edits in activation space by activating property-aligned latent units during inference. A generic sketch of both editing modes is shown below.
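Because the project's code is under NDA, the following is only a generic sketch of the two editing modes in PyTorch; every name here (base_sd, finetuned_sd, the hooked layer index, alpha, strength) is an illustrative assumption rather than the project's implementation.

import torch

def task_arithmetic_edit(base_sd, finetuned_sd, alpha=1.0):
    """Weight-space edit: subtract the scaled task vector obtained by
    fine-tuning the base model on non-example sequences."""
    edited = {}
    for name, w_base in base_sd.items():
        task_vector = finetuned_sd[name] - w_base
        edited[name] = w_base - alpha * task_vector  # steer away from the non-example task
    return edited

class ActivationSteeringHook:
    """Activation-space edit: add a property-aligned SAE decoder direction
    to a layer's hidden states during inference."""
    def __init__(self, direction: torch.Tensor, strength: float = 4.0):
        self.direction = direction / direction.norm()
        self.strength = strength

    def __call__(self, module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + self.strength * self.direction.to(hidden)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

# Hypothetical usage (model and layer names are placeholders):
# model.load_state_dict(task_arithmetic_edit(base_sd, finetuned_sd, alpha=0.5))
# handle = model.layers[12].register_forward_hook(
#     ActivationSteeringHook(sae_decoder_direction, strength=4.0))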

For each edit type and model, we generated 500 protein sequences of length 100 and evaluated them using established biochemical scoring tools.
Six properties were assessed: charge at pH 7, hydrophobicity, aromaticity, instability index, molecular weight, and isoelectric point.
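The specific scoring tools are not named above; as an assumption, Biopython's ProtParam module covers all six properties, and a minimal scoring sketch might look like this:

from Bio.SeqUtils.ProtParam import ProteinAnalysis

def score_sequence(seq: str) -> dict:
    """Compute the six evaluated biochemical properties for one sequence."""
    pa = ProteinAnalysis(seq)
    return {
        "charge_at_pH7": pa.charge_at_pH(7.0),
        "hydrophobicity": pa.gravy(),          # GRAVY index
        "aromaticity": pa.aromaticity(),
        "instability_index": pa.instability_index(),
        "molecular_weight": pa.molecular_weight(),
        "isoelectric_point": pa.isoelectric_point(),
    }

# Score every generated sequence (e.g., 500 sequences of length 100):
# scores = [score_sequence(s) for s in generated_sequences]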

The performance of TA and SAE edits was compared against both the unedited base model and a fine-tuned model trained on each target property. Statistical significance and direction of property shifts were recorded to determine which method produced the most consistent and effective steering.
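The statistical test is not specified above; assuming a nonparametric two-sample comparison per property, a sketch with SciPy could be:

import numpy as np
from scipy.stats import mannwhitneyu

def compare_property(edited_scores, base_scores, prop, alpha=0.05):
    """Direction and significance of the shift in one property between
    edited-model and base-model generations."""
    edited = np.array([s[prop] for s in edited_scores])
    base = np.array([s[prop] for s in base_scores])
    _, p_value = mannwhitneyu(edited, base, alternative="two-sided")
    return {
        "property": prop,
        "direction": "up" if edited.mean() > base.mean() else "down",
        "p_value": p_value,
        "significant": p_value < alpha,
    }

# Example across all six properties:
# results = [compare_property(edited_scores, base_scores, p)
#            for p in ["charge_at_pH7", "hydrophobicity", "aromaticity",
#                      "instability_index", "molecular_weight", "isoelectric_point"]]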


Results

Our findings show that weight-space and activation-space edits perform differently and complement each other, rather than serving as interchangeable techniques.
Task Arithmetic performed best for compositional properties with strong signals, such as increasing charge or isoelectric point, particularly in ProGen2-Large. Sparse Autoencoders were most effective for more distributed properties like hydrophobicity and often outperformed TA in ESM3 and ProLLaMA.
Aromaticity and the instability index proved difficult to control with every method, and instability frequently increased regardless of the editing strategy. Fine-tuned baselines could shift properties but offered less interpretability and required more computational resources. Molecular weight control in ESM3 was inconsistent and treated as inconclusive. Overall, these results reinforce that the most effective editing approach depends on the interplay between the target property, model architecture, and editing technique.


Conference Poster


Where to Edit: Complementary Protein Property Control from Weight and Activation Spaces (poster image)


Summary

I gained experience in model editing, representation analysis, and statistical evaluation of biochemical properties, skills directly applicable to designing ML-driven systems. The project strengthened my ability to connect abstract ML methods to real-world biological constraints and to turn gigabytes of raw data into well-supported conclusions.
