TurboQuant on llama.cpp uses a two-stage pipeline to compress the KV cache by ~5.3x. Stage 1 (Rotation): a randomized Fast Walsh-Hadamard Transform (FWHT) rotates the KV vectors to normalize their coordinate distribution, spreading outlier energy roughly evenly across dimensions before quantization.
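A minimal sketch of the rotation stage, assuming the standard construction of a randomized Hadamard rotation (a random sign flip followed by an orthonormal FWHT); the function names here are illustrative, not llama.cpp's actual API:

```python
import numpy as np

def fwht(x):
    """Orthonormal Fast Walsh-Hadamard Transform.

    len(x) must be a power of two. Runs in O(d log d) via the
    iterative butterfly; the 1/sqrt(d) scaling makes it orthonormal,
    so applying it twice recovers the input.
    """
    x = x.astype(np.float64).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)

def randomized_rotation(v, signs):
    """Random sign flip then FWHT: an orthogonal rotation that tends to
    flatten per-coordinate outliers, easing low-bit quantization."""
    return fwht(v * signs)

# Usage: rotate a head-dim-sized vector; norms are preserved and the
# rotation is exactly invertible (FWHT then undo the signs).
rng = np.random.default_rng(0)
d = 8                                   # head dimension (power of two)
v = rng.normal(size=d)
signs = rng.choice([-1.0, 1.0], size=d) # fixed per-model random signs
r = randomized_rotation(v, signs)
recovered = fwht(r) * signs             # inverse rotation
```

Because the rotation is orthogonal, it is lossless by itself; all compression error comes from the quantizer applied to the rotated vectors.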