The technique reduces the memory that large language models require as their context windows grow; that growing memory footprint has become a key constraint on AI deployment.
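To illustrate why memory scales with context length, here is a minimal sketch estimating the size of the attention key-value (KV) cache, which typically dominates per-token memory during inference. The model dimensions (a 7B-class transformer with 32 layers, 32 KV heads, head dimension 128, and 16-bit values) are assumptions for illustration, not figures from this article.

```python
def kv_cache_bytes(context_len: int,
                   num_layers: int = 32,
                   num_kv_heads: int = 32,
                   head_dim: int = 128,
                   bytes_per_value: int = 2) -> int:
    """Estimate KV-cache size for one sequence: two tensors (K and V)
    per layer, each of shape [context_len, num_kv_heads * head_dim].
    All dimensions here are illustrative, not from the article."""
    return (2 * num_layers * context_len
            * num_kv_heads * head_dim * bytes_per_value)

# Cache size grows linearly with context length.
for ctx in (4_096, 32_768, 131_072):
    gib = kv_cache_bytes(ctx) / 2**30
    print(f"{ctx:>7} tokens -> ~{gib:.0f} GiB per sequence")
# 4k tokens is roughly 2 GiB; 128k tokens is roughly 64 GiB,
# which is why long contexts strain deployment hardware.
```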