Show HN: Skill capsules" for LLMs, a "poor man's continual learning"

"Continual learning" is considered one of the "blockers" for LLMs: they can't learn on the job, don't improve over time, etc. In particular, Dwarkesh Patel describes it as a number of problem which has to be solved to get to AGI.

Many academic articles propose some kind of memory system for LLMs which might be considered a form of "continual learning". But most evals focus on memorizing facts, which is just not very useful (it's better to fetch facts via tool use than to store them in neural memory), and these proposals might not fit well into common LLM API usage patterns.

In this article I'm proposing a "new" method called "skill capsules" which is highly pragmatic, easy to understand and evaluate, and might integrate well into existing tooling.

A skill capsule is a concrete object - basically, a bunch of vectors. You can insert it somewhere into the middle of an LLM's context, and it improves performance on a particular skill, e.g. making tool calls more reliable, or following a particular writing or coding style. In theory, it can be used to patch any LLM inadequacy. A capsule can also include knowledge (e.g. how to call a particular API or write code involving a particular library).
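
To make this concrete, here is a minimal sketch of the injection step, assuming a Hugging Face causal LM; the random `capsule` tensor is just a stand-in for vectors produced by one of the recipes discussed below:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("gpt2")
    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    prompt = "Call the weather API for Paris:"
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    # Token embeddings normally come straight from the embedding table...
    embeds = model.get_input_embeddings()(ids)  # (1, seq_len, hidden)

    # ...but nothing stops us from splicing arbitrary vectors between tokens.
    # Random values here are a placeholder for a real, trained capsule.
    capsule = torch.randn(1, 8, embeds.shape[-1])
    inputs_embeds = torch.cat([embeds[:, :1], capsule, embeds[:, 1:]], dim=1)

    # Decoder-only generate() accepts raw embeddings and returns new tokens.
    out = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=32)
    print(tokenizer.decode(out[0]))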

A skill capsule can be produced with a single forward pass from a _single example_; no gradients or "fine-tuning" are required. So it might allow an LLM to "learn on the job" - i.e. a single demonstration of how to perform something correctly can be turned into a capsule.
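
A hedged sketch of that recipe, using the crudest possible capsule: the raw KV cache from one forward pass over a demonstration, replayed as a prefix for later queries (gisting-style methods additionally compress this cache; the model and prompts are placeholders):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("gpt2")
    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    demo = 'User: list files\nAssistant: {"tool": "ls", "args": []}\n'
    demo_ids = tokenizer(demo, return_tensors="pt").input_ids

    with torch.no_grad():
        # One forward pass over the demonstration; its KV cache is the capsule.
        capsule = model(demo_ids, use_cache=True).past_key_values

    query_ids = tokenizer("User: show current directory\nAssistant:",
                          return_tensors="pt").input_ids
    full_ids = torch.cat([demo_ids, query_ids], dim=1)

    # generate() sees the demo tokens are already cached and only processes
    # the query, so the capsule conditions the answer like an in-context demo.
    out = model.generate(full_ids, past_key_values=capsule, max_new_tokens=32)
    print(tokenizer.decode(out[0][full_ids.shape[1]:]))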

You might ask - why is this a "Show HN" and not an academic article? Because researchers already know the method - it's known as "soft prompts", "hypernetworks", "steering vectors", "prefix tuning", etc. All these terms are horrible and do not convey the possibilities of the method. I just want more people to know that LLMs can be improved on the fly. And a better term -- "skill capsules" -- might help people think about how to apply these techniques (I hope).

Other reasons it's a "Show HN":

  * it shows one can do a kinda cool ML experiment in
    a few days using Claude Code and a few dollars to pay for GPUs
  * a somewhat-interesting story of how I got there


A bit of backstory:

I got really interested in LLMs in 2020, after the GPT-3 release demonstrated in-context learning. But I had tried running an LLM a year before that: playing with AI Dungeon 2 (based on GPT-2).

Back in 2020 people were discussing how transformer-based language models are limited in all sorts of ways (operating on a tiny context, etc). But as I learned how transformers work, I got really excited: it's possible to use raw vectors as input, not just text. So I got this idea that all kinds of modules can be implemented on top of pre-trained transformers via adapters which translate arbitrary data into the representations of a particular model. E.g. you can make a new token representing some command, etc.

A lack of memory was one of the hot topics, so I did a little experiment: since the KV cache has to encode 'run-time' memory, I tried transplanting parts of the KV cache from one model forward pass into another - and apparently only a few middle layers were sufficient to make the model recall a name from the prior pass. But I didn't go further, as it was too time-consuming for a hobby project. So that's where I left it.
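
A rough reconstruction of that transplant, assuming a Hugging Face GPT-2; the prompts and the choice of "mid layers" are illustrative guesses, not the original setup:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

    model = AutoModelForCausalLM.from_pretrained("gpt2")
    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    donor_ids = tokenizer("My name is Alice and I live in Paris.",
                          return_tensors="pt").input_ids
    recip_ids = tokenizer("The weather stayed sunny for the whole week.",
                          return_tensors="pt").input_ids
    n = min(donor_ids.shape[1], recip_ids.shape[1])  # align lengths for splicing
    donor_ids, recip_ids = donor_ids[:, :n], recip_ids[:, :n]

    def kv_layers(ids):
        # Per-layer list of (key, value) tensors from one forward pass.
        with torch.no_grad():
            cache = model(ids, use_cache=True).past_key_values
        return list(cache.to_legacy_cache()) if hasattr(cache, "to_legacy_cache") else list(cache)

    donor, recipient = kv_layers(donor_ids), kv_layers(recip_ids)

    # Transplant only a band of middle layers from the donor pass.
    for layer in range(4, 8):  # 12-layer GPT-2 small; band chosen arbitrarily
        recipient[layer] = donor[layer]

    mixed = DynamicCache.from_legacy_cache(tuple(recipient))
    query = tokenizer(" What is my name?", return_tensors="pt").input_ids
    full = torch.cat([recip_ids, query], dim=1)  # cached prefix + new tokens
    out = model.generate(full, past_key_values=mixed, max_new_tokens=8)
    print(tokenizer.decode(out[0][full.shape[1]:]))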

Over the years, academic researchers worked through the same ideas I had and gave them names:

* arbitrary vectors injected in place of fixed token embeddings are called a "soft prompt"
* a custom KV prefix added before the normal context is called "prefix tuning"
* a "soft prompt" used to generate a KV prefix which encodes a memory is called "gisting"
* a KV prefix encoding a specific collection of documents was recently called a "cartridge"
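
Of these, "prefix tuning" is the one that normally needs gradients. For contrast with the single-forward-pass recipe above, here is a minimal sketch of it, assuming a Hugging Face GPT-2 and treating the per-layer KV prefix as a directly trainable parameter (sizes, init, and the toy training example are all illustrative):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

    model = AutoModelForCausalLM.from_pretrained("gpt2")
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    cfg = model.config
    head_dim = cfg.n_embd // cfg.n_head  # 64 for GPT-2 small
    prefix_len = 8

    # One trainable (key, value) pair per layer; random init for brevity.
    prefix = [(torch.randn(1, cfg.n_head, prefix_len, head_dim, requires_grad=True),
               torch.randn(1, cfg.n_head, prefix_len, head_dim, requires_grad=True))
              for _ in range(cfg.n_layer)]
    opt = torch.optim.Adam([t for kv in prefix for t in kv], lr=1e-3)

    batch = tokenizer('Assistant: {"tool": "ls", "args": []}', return_tensors="pt")
    labels = batch.input_ids.clone()
    attn = torch.ones(1, prefix_len + batch.input_ids.shape[1], dtype=torch.long)

    for step in range(100):
        # Rebuild the cache each step; the model appends to it in place.
        cache = DynamicCache.from_legacy_cache(tuple(prefix))
        loss = model(batch.input_ids, past_key_values=cache,
                     attention_mask=attn, labels=labels).loss
        opt.zero_grad()
        loss.backward()
        opt.step()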

Opus 4.5 running in Claude Code can pretty much run an experiment of this kind on its own, starting from a general idea. But it still needs some help - making sure the prompts and formats actually make sense, looking for the best dataset, etc.