AI's Memory,
Compressed 6.4x

An educational guide to KV cache compression — the key to running long-context AI on your laptop.

6.4x KV compression · 3% quality cost · 59% attention speedup · 16K lines of C

What You'll Learn

The Real Bottleneck in AI

When you chat with an AI, it needs to remember everything you've said. This memory is called the KV cache. The shocking truth:

Memory Usage: Model vs KV Cache
AI model (Llama 3.2 3B): 4.0 GB
KV cache (32K context, FP16): 8.0 GB — larger than the model!
KV cache with quant.cpp (6.4x): 1.3 GB

The KV cache grows with every token in the conversation. At 32K context, it's 2x larger than the model itself. This is why your laptop runs out of memory during long conversations — not because the model is too big, but because its memory is.
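The arithmetic behind those numbers is simple: every token stores one Key and one Value vector per layer. The sketch below is illustrative — the function name and the example geometry are hypothetical, not the exact Llama 3.2 configuration:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_tokens, bytes_per_value):
    """Total KV cache size: a Key and a Value vector per layer, per token."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_value  # 2 = K and V
    return per_token * n_tokens

# Hypothetical geometry: 32 layers, 8 KV heads of dim 128, FP16 (2 bytes), 32K context
print(kv_cache_bytes(32, 8, 128, 32768, 2) / 2**30, "GiB")  # → 4.0 GiB
```

Because `n_tokens` is a plain multiplier, doubling the context exactly doubles the cache — which is why the bars above grow so fast.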

What is KV Cache?

In a Transformer, every token "attends" to all previous tokens. To do this, each token creates a Key (what am I?) and a Value (what do I contain?). These are stored so future tokens can look back at them.

How Attention Works (Simplified)
Current token creates: Query = "What am I looking for?"
Each past token stored: Key = "This is what I am"   Value = "This is my content"
Attention = softmax(Q × Kᵀ) × V
→ "Look at all past tokens, focus on the relevant ones, blend their values"
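The box above fits in a few lines of code. This is a deliberately tiny sketch — plain Python lists, one head, one query — not the library's implementation:

```python
import math

def softmax(scores):
    m = max(scores)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Score the query against every cached Key, then blend the cached
    Values by the resulting weights: softmax(Q · Kᵀ / √d) · V."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Two cached tokens; the query lines up with the first Key, so the
# output is dominated by the first token's Value.
out = attention([5.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
```

The `keys` and `values` lists are exactly what the KV cache stores — one entry per past token, growing forever unless something compresses or evicts them.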

Why It's Expensive

For every layer and every token position, we store a Key vector and a Value vector. A typical model has 16-32 layers. At 32K context:

KV Cache Growth (1K → 8K → 16K → 32K → 64K → 128K tokens)

Every doubling of context = doubling of KV cache. And the attention cost is O(n) per token — at 1000 tokens, attention is already 35% of total compute time.

The Key Insight

Not all memories are equally important. AI attention, like human attention, concentrates on what matters.

Recent tokens matter most

~70% of attention weight falls on the last 128 tokens. Old tokens rarely get looked at.

🔑 Keys are more sensitive than Values

Key errors get amplified by softmax (nonlinear). Value errors propagate linearly — much more forgiving.

A few tokens carry most information

Attention follows a power law: "heavy hitter" tokens get high attention across all queries.

📊 Deep layers attend sharply

Layer 11 entropy = 1.84 bits (~4 tokens). Layer 1 entropy = 6.29 bits (~78 tokens). Deep layers need less KV.

These four observations correspond to four orthogonal compression dimensions. Because they're independent, their effects multiply:

Four Dimensions of Compression

1. Progressive (time dimension)
2. K/V Asymmetry (tensor dimension)
3. H2O Eviction (token dimension)
4. PyramidKV (layer dimension)

Combined: 6.4x compression, 59% faster attention, +3% PPL

1. Progressive Compression (Time Dimension)

Keep the last 128 tokens' Keys at full precision (FP32). Compress everything else to 4-bit. The attention mechanism naturally focuses on recent tokens, so the compressed old tokens barely affect output quality.

KV Cache Layout: Progressive k128
[ FP32 (recent 128) | 4-bit (older tokens) ]
Result: 2.9x compression at +1.3% PPL. Context-length invariant — works at 4K, 32K, or 128K.
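A minimal sketch of the idea — the helper names are hypothetical, and a real implementation packs the 4-bit codes two per byte and quantizes per group rather than per vector:

```python
def quantize_4bit(vec):
    """Symmetric 4-bit quantization: map each value to one of 16 integer
    levels in [-8, 7] using a single per-vector scale."""
    scale = max(abs(x) for x in vec) / 7 or 1.0  # guard against all-zero vectors
    return [max(-8, min(7, round(x / scale))) for x in vec], scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

def progressive_cache(key_vectors, window=128):
    """Keep the most recent `window` Keys at full precision; everything
    older is stored as 4-bit codes plus one scale per vector."""
    cutoff = max(0, len(key_vectors) - window)
    old = [quantize_4bit(k) for k in key_vectors[:cutoff]]  # compressed
    recent = key_vectors[cutoff:]                           # untouched full precision
    return old, recent
```

Each new token pushes one more vector past the window boundary, so the expensive full-precision region stays at a fixed 128 entries no matter how long the context gets.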

2. K/V Asymmetric Quantization (Tensor Dimension)

Key errors pass through softmax(Q × KT) — a nonlinear function that amplifies small errors exponentially. Value errors are simply multiplied by attention weights — a linear operation with no amplification.

Error Propagation: Key vs Value
Key Error Path
K + error
↓ Q × (K + error)ᵀ
softmax ← nonlinear amplification!
↓ wrong attention distribution
cascading output error
Value Error Path
V + error
↓ attention_weights × (V + error)
linear sum ← no amplification
↓ small output perturbation
bounded, predictable error
Result: K=4bit + V=4bit + k128 = 6.4x compression at +3.0% PPL. Adding V=Q4 on top of k128 costs only +1.7pp.

3. H2O Token Eviction (Token Dimension)

Not all tokens contribute equally to attention. The Heavy-Hitter Oracle (H2O) tracks cumulative attention weight per token and evicts the ones that consistently receive near-zero attention.

Token Importance Distribution (Power Law)
← Sink tokens (always kept) | Heavy hitters | Low attention → evict
Result: Attention cost reduced by 59% at budget=128. Output quality preserved — evicted tokens had near-zero attention anyway.
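A sketch of the bookkeeping — the function and parameter names are illustrative, and the real H2O method also handles ties and per-head statistics:

```python
def h2o_keep_set(cum_attention, n_sink=4, n_recent=16, budget=32):
    """Decide which token positions survive eviction: always keep the first
    `n_sink` tokens and the last `n_recent`, then fill the remaining budget
    with the highest cumulative-attention "heavy hitters"."""
    n = len(cum_attention)
    keep = set(range(min(n_sink, n))) | set(range(max(0, n - n_recent), n))
    middle = sorted(
        (i for i in range(n) if i not in keep),
        key=lambda i: cum_attention[i],
        reverse=True,
    )
    keep.update(middle[: max(0, budget - len(keep))])
    return sorted(keep)

# Token 50 received lots of attention over time, so it survives eviction
# even though it is neither a sink token nor recent.
scores = [0.01] * 100
scores[50] = 5.0
kept = h2o_keep_set(scores)
```

Whatever the sequence length, the surviving set is capped at `budget` entries, which is what makes the per-token attention cost constant.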

4. PyramidKV (Layer Dimension)

Different layers have vastly different attention patterns. Early layers attend broadly (high entropy), deep layers attend sharply (low entropy). Allocating uniform KV budget wastes memory on layers that only look at 4 tokens.

Attention Entropy by Layer (Llama 3.2 1B, measured)
Pyramid budget: Layer 0 gets 256 KV entries, Layer 15 gets 64. Deep layers with 1.84-bit entropy need only ~4 tokens — giving them 256 is pure waste.
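A pyramid schedule can be as simple as a linear taper. This is a sketch of the allocation idea only — PyramidKV derives budgets from measured attention entropy rather than a straight line:

```python
def pyramid_budgets(n_layers, top=256, bottom=64):
    """Taper the per-layer KV budget from `top` entries at layer 0
    (broad, high-entropy attention) down to `bottom` at the deepest
    layer (sharp, low-entropy attention)."""
    if n_layers == 1:
        return [top]
    step = (top - bottom) / (n_layers - 1)
    return [round(top - i * step) for i in range(n_layers)]

budgets = pyramid_budgets(16)   # layer 0 gets 256 entries, layer 15 gets 64
```

Compared with giving every layer the top budget, the taper roughly halves total KV entries while giving the sharp deep layers far more slots than their ~4-token focus actually needs.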

Benchmarks

All measurements on Llama 3.2 1B Instruct (Q8_0 GGUF), Apple M1 Pro, 8 threads.

Compression vs Quality

Configuration          | PPL   | vs FP32 | Compression | Attention
FP32 baseline          | 151.2 |         | 1.0x        | 100%
K=4b + V=FP16 + k128   | 153.2 | +1.3%   | 2.9x        | 100%
K=4b + V=Q4 + k128     | 155.7 | +3.0%   | 6.4x        | 100%
+ PyramidKV (b=256)    | ~same | ~same   | 6.4x        | +41%
K=3b + V=Q4 + k128     | 166.0 | +9.8%   | 7.1x        | 100%
K=4b + V=Q2 + k128     | 306.1 | +102%   | 8.0x        | failed

vs llama.cpp KV compression

Same 4-bit budget, 3.5x less quality degradation:

PPL Degradation at 4-bit (lower is better)
llama.cpp Q4_0 KV: +10.6%
quant.cpp K=4b + V=Q4 + k128: +3.0%

When to use which?

llama.cpp is excellent. The difference is integration scope, not capability:

Scenario               | quant.cpp        | llama.cpp
WASM browser demo      | 192 KB binary    | Tensor graph too large
Microcontroller / RTOS | #include only    | Needs build system
Game engine plugin     | Drop one .h file | 250K LOC build
Learn in an afternoon  | 16K LOC          | 250K+ LOC
GPU throughput         | Basic            | Full Metal/CUDA
Model coverage         | 7 architectures  | 100+

Use llama.cpp for speed on a workstation. Use quant.cpp when you need to ship LLM inference inside something.

Context Length on 8GB Mac

Context | FP32 KV    | Progressive (2.9x) | Aggressive (6.4x) | + Eviction
4K      | OK         | OK                 | OK                | OK (fastest)
16K     | borderline | OK                 | OK                | OK
32K     | OOM        | 5.5 GB             | 2.5 GB            | ~1.5 GB
64K     | OOM        | OOM                | 5.0 GB            | ~3 GB
128K    | OOM        | OOM                | 16GB Mac          | ~5 GB

Research Foundations

Each technique in quant.cpp is grounded in peer-reviewed research:

TurboQuant: Redefining AI Efficiency with Extreme Compression
ICLR 2026 · Google Research · arXiv:2504.19874
Random Hadamard Transform (RHT) normalizes activation distributions before Lloyd-Max codebook quantization. Foundation of our turbo_kv_* types.
KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache
ICML 2024 · arXiv:2402.02750
Key insight: per-channel quantization for Keys, per-token for Values. K and V have fundamentally different error sensitivity due to softmax nonlinearity.
H2O: Heavy-Hitter Oracle for Efficient Generative Inference
NeurIPS 2023 · arXiv:2306.14048
Attention follows a power law. Keep "sink" tokens + "heavy hitters" (high cumulative attention) + recent window. Evict the rest for O(1) KV budget.
PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling
Dec 2024 · arXiv:2406.02069
Attention entropy decreases with layer depth. Allocate larger KV budgets to early (high-entropy) layers, smaller to deep (low-entropy) layers.
PolarQuant & QJL
Polar decomposition for vector quantization; Johnson-Lindenstrauss random projection for 1-bit sign hashing. Both used in our hybrid turbo types.

Glossary

KV Cache
Key-Value cache. Stores the Key and Value vectors for all past tokens, so they don't need to be recomputed. Grows linearly with sequence length.
Attention
The mechanism by which each token decides how much to "look at" each past token. Computed as softmax(Q × Kᵀ) × V. Cost is O(n) per token where n is sequence length.
Perplexity (PPL)
Measures how well the model predicts the next token. Lower is better. PPL=100 means the model is, on average, as uncertain as if it were choosing uniformly among 100 tokens. A +3% increase is a barely noticeable quality change.
Softmax
Converts raw scores into a probability distribution. Small changes in input can cause large changes in output (nonlinear amplification), which is why Key quantization errors are more damaging than Value errors.
Quantization
Reducing the number of bits per value. FP32 (32-bit) → FP16 (16-bit) → 4-bit → 2-bit. Each halving saves 50% memory but introduces approximation error.
RHT (Random Hadamard Transform)
A mathematical rotation that spreads out the distribution of values, making them more uniform and easier to quantize without large errors. Used in TurboQuant.
Progressive Compression
Keep recent tokens at full precision, compress older tokens aggressively. Inspired by how human memory works: recent events are vivid, old memories are fuzzy but sufficient.
Heavy Hitter
A token that consistently receives high attention weight from many queries. These tokens are informationally critical and should never be evicted.
Attention Entropy
Measures how spread out the attention distribution is. Low entropy = sharp focus on few tokens. High entropy = diffuse attention across many tokens. Measured in bits.
GGUF
The standard file format for quantized LLM model weights, created by the llama.cpp project. quant.cpp loads GGUF models directly.

Try It Yourself

Python one-liner or C single-header. No GPU, no API key, no setup.

Python
pip install quantcpp

from quantcpp import Model
m = Model.from_pretrained("Llama-3.2-1B")
print(m.ask("What is gravity?"))
C (single header)
#include "quant.h"

// print_token: a user-supplied callback, invoked once per generated token
int main() {
    quant_model* m = quant_load("model.gguf");
    quant_generate(quant_new(m, NULL),
        "Hello!", print_token, NULL);
    return 0;
}
// cc app.c -lm -lpthread

GitHub PyPI WASM Demo