Tag: llm
All the articles with the tag "llm".
Techniques for KV Cache Optimization in Large Language Models
Posted on: February 25, 2024 at 08:00 AM (12 min read)
This post explores techniques for optimizing the Key-Value (KV) cache in large language models, from grouped-query attention to PagedAttention and distributed cache management.
Understanding how LLM inference works with llama.cpp
Posted on: November 11, 2023 at 04:00 PM (34 min read)
In this post we will understand how large language models (LLMs) answer user prompts by exploring the source code of llama.cpp, a C/C++ implementation of LLaMA, covering subjects such as tokenization, embedding, self-attention, and sampling.