
Initiated and Officially Supported by Tensormesh

About us

LMCache: Caching Knowledge for Your LLM, Accelerating the Future of AI, One Cache at a Time

LMCache was started by a team of researchers from UChicago focused on understanding and improving the KV cache in transformer models. It is the first open-source library that accelerates open-weights LLM applications by up to 8x, at up to 8x lower cost.

Contact us at [email protected]