Large Language Model (LLM)
A neural network trained on massive text corpora that can understand and generate human language.
LLMs like GPT, Claude, and Gemini are transformer-based models with billions of parameters trained on internet-scale text data. They excel at summarization, question answering, translation, and reasoning tasks when given appropriate context.
For document intelligence, LLMs serve as the generation component in RAG pipelines. They synthesize retrieved passages into coherent, human-readable answers. Model selection involves balancing cost, latency, context window size, and accuracy for the specific domain.
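As a sketch of how retrieved passages feed an LLM, the toy pipeline below retrieves the most relevant passages and builds a grounded prompt. The `retrieve` and `build_prompt` functions are hypothetical stand-ins: a production system would use a vector store or BM25 for retrieval and an LLM API for generation.

```python
# Minimal RAG sketch: retrieve top passages, then build a prompt that asks
# the LLM to answer using only those passages. Keyword overlap stands in
# for a real retriever here.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank passages by word overlap with the query.
    query_words = set(query.lower().split())
    def score(passage: str) -> int:
        return len(query_words & set(passage.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    # Number each passage so the LLM can cite sources as [1], [2], ...
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Answer using only the sources below.\n{context}\n\nQuestion: {query}"

corpus = [
    "The invoice total was $4,200, due on March 1.",
    "The contract renews automatically each year.",
    "Payment terms are net 30 from the invoice date.",
]
query = "What was the invoice total?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

The prompt would then be sent to the chosen LLM, which synthesizes the numbered passages into a cited answer.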
More AI/ML Terms
Retrieval-Augmented Generation (RAG)
An AI architecture that combines information retrieval with text generation to produce answers grounded in source documents.
Vector Embedding
A numerical representation of text as a high-dimensional vector, enabling semantic similarity comparisons between passages.
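Semantic similarity between embeddings is typically measured with cosine similarity. The sketch below uses tiny hand-made 3-dimensional vectors for illustration; real embeddings come from a model and have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine of the angle between two vectors: 1.0 means identical direction,
    # 0.0 means unrelated (orthogonal).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy vectors standing in for model-produced embeddings.
invoice = [0.9, 0.1, 0.0]
bill    = [0.8, 0.2, 0.1]
weather = [0.0, 0.1, 0.9]

print(cosine_similarity(invoice, bill))     # semantically close -> near 1
print(cosine_similarity(invoice, weather))  # unrelated -> near 0
```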
BM25
A probabilistic keyword-ranking algorithm that scores documents by term frequency and inverse document frequency.
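A minimal implementation of the standard BM25 scoring formula is sketched below (with the common defaults k1 = 1.5, b = 0.75); production systems would use an optimized library with a proper inverted index rather than this direct translation.

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    # BM25: for each query term, multiply its inverse document frequency
    # by a saturated term frequency, normalized by document length
    # relative to the corpus average.
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for term in query_terms:
        n = sum(1 for d in corpus if term in d)          # docs containing term
        idf = math.log((N - n + 0.5) / (n + 0.5) + 1)
        tf = doc.count(term)                             # term frequency in doc
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

corpus = [
    "the invoice total was overdue".split(),
    "the contract renews each year".split(),
    "payment due on the invoice date".split(),
]
for doc in corpus:
    print(" ".join(doc), "->", round(bm25_score(["invoice", "due"], doc, corpus), 3))
```

Note how the document containing both query terms scores highest, while the document with neither scores zero.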
Chunking
The process of splitting large documents into smaller, overlapping segments optimized for retrieval and embedding.
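A simple way to produce overlapping chunks is a sliding character window, sketched below; real chunkers usually split on sentence or token boundaries instead, and the sizes here are illustrative.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    # Sliding-window chunking: each chunk shares `overlap` characters with
    # the previous one, so content that straddles a boundary still appears
    # intact in at least one chunk.
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

# Stand-in for a long document (digits make the overlap easy to inspect).
document = "".join(str(i % 10) for i in range(500))
chunks = chunk_text(document)
print(len(chunks))  # → 3
```

Each resulting chunk is then embedded and indexed separately for retrieval.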
Hallucination
When an AI model generates plausible-sounding but factually incorrect or fabricated information.
Fine-Tuning
The process of further training a pre-trained model on domain-specific data to improve its performance on targeted tasks.