Doc and Tell

Context Window

The maximum number of tokens a language model can process in a single request, including both input and output.

Context windows range from 4K tokens in older models to over 1M tokens in newer ones. The window must accommodate the system prompt, retrieved passages, conversation history, and the generated response. Exceeding the limit causes truncation or errors.
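The budget arithmetic above can be sketched in a few lines. This is a minimal illustration, not any particular vendor's API: the window size, output reservation, and the whitespace-based `count_tokens` are all placeholder assumptions; a real system would use the model's own tokenizer.

```python
# Minimal sketch of a context-window budget check.
# CONTEXT_WINDOW and MAX_OUTPUT_TOKENS are hypothetical values;
# count_tokens is a crude whitespace approximation standing in for
# a real tokenizer.

CONTEXT_WINDOW = 8192      # assumed model limit, in tokens
MAX_OUTPUT_TOKENS = 1024   # tokens reserved for the generated response

def count_tokens(text: str) -> int:
    """Rough stand-in for a real tokenizer."""
    return len(text.split())

def fits_in_window(system_prompt: str, history: list[str], user_msg: str) -> bool:
    """True if the input plus the reserved output stays within the window."""
    input_tokens = (
        count_tokens(system_prompt)
        + sum(count_tokens(m) for m in history)
        + count_tokens(user_msg)
    )
    return input_tokens + MAX_OUTPUT_TOKENS <= CONTEXT_WINDOW
```

A request that fails this check must be trimmed (e.g. by dropping old history turns) before it is sent, otherwise the model truncates the input or rejects the call.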

For document intelligence, context window size determines how many retrieved chunks can be included in a single query. Larger windows allow more evidence to be presented to the model, but cost scales linearly with token usage. Effective chunking and retrieval reduce the need for massive context windows.
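One common way to fit retrieval to the window is to rank chunks and keep the best ones until an evidence budget is spent. A minimal greedy sketch, again with a whitespace `count_tokens` as a stand-in for a real tokenizer:

```python
# Greedy selection of retrieved chunks under a token budget.
# Assumes chunks arrive already ranked best-first by the retriever;
# count_tokens is a whitespace approximation of a real tokenizer.

def count_tokens(text: str) -> int:
    """Rough stand-in for a real tokenizer."""
    return len(text.split())

def select_chunks(chunks: list[str], budget: int) -> list[str]:
    """Keep ranked chunks until the evidence token budget is exhausted."""
    selected: list[str] = []
    used = 0
    for chunk in chunks:
        cost = count_tokens(chunk)
        if used + cost > budget:
            break  # next chunk would overflow the budget
        selected.append(chunk)
        used += cost
    return selected
```

With a larger window the budget simply grows and more chunks survive the cut, which is the cost/evidence trade-off described above.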
