What is Temperature (LLM)?
Temperature (LLM) is a sampling parameter that controls how random or deterministic a model's outputs are. Higher temperature generally increases variability, while lower temperature generally increases consistency.
Quick definition
Temperature (LLM) controls how much an AI model varies its answers to the same prompt.
How Temperature (LLM) works
- Temperature adjusts the probability distribution used during token selection: the model's logits are divided by the temperature before the softmax, so low values sharpen the distribution and high values flatten it (see the sketch after this list).
- For the same prompt, temperature influences how diverse the outputs are across repeated runs.
- Temperature can change citation behavior when retrieval and selection are involved, since different sampled completions may surface different sources.
- Temperature is commonly controlled (held at a fixed, known value) in AI prompt testing.
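A minimal sketch of that scaling step, assuming the standard softmax-with-temperature formulation; the token names and logit values below are illustrative, not taken from a real model.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then normalize into probabilities.

    Lower temperature sharpens the distribution toward the top token;
    higher temperature flattens it, giving low-probability tokens a
    better chance of being sampled."""
    scaled = [logit / temperature for logit in logits]
    max_scaled = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - max_scaled) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng=random):
    """Pick one token according to the temperature-adjusted distribution."""
    probs = softmax_with_temperature(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Toy vocabulary and logits (illustrative values, not from a real model).
tokens = ["Paris", "Lyon", "Marseille", "banana"]
logits = [5.0, 2.0, 1.5, -1.0]

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: " + ", ".join(f"{tok}={p:.3f}" for tok, p in zip(tokens, probs)))

print("sampled at T=1.0:", sample_token(tokens, logits, 1.0))
```

At T=0.2 nearly all probability concentrates on the top token, while at T=2.0 even the weakest candidate keeps a noticeable share. Temperature 0 is usually treated as a special case (greedy selection of the top token) rather than passed through this formula.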
Why Temperature (LLM) matters
Temperature matters because monitoring results depend on output stability: if the temperature changes between runs, differences in the outputs may reflect the sampling setting rather than the model or the prompt.
Higher temperature also raises the risk of hallucinations (AI), because the model is more likely to explore less likely tokens.
Example use cases
- Running a prompt at low temperature to measure a stable baseline.
- Running the same prompt at a higher temperature to explore possible answer variations (see the sketch after this list).
- Standardizing temperature across a prompt set so results stay comparable.
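A minimal sketch of the first two use cases, assuming a hypothetical generate(prompt, temperature) stand-in for a real model call; the toy answers, weights, and the stability metric (share of runs matching the most common answer) are illustrative.

```python
import random
from collections import Counter

def generate(prompt: str, temperature: float, rng=random) -> str:
    """Hypothetical stand-in for a real LLM call: it simulates a model
    whose answers spread out as temperature rises. Replace with your
    provider's client when measuring a real system."""
    answers = ["Paris", "Paris, France", "Lyon"]   # toy answer pool
    weights = [1.0, temperature, temperature / 2]  # more spread at higher T
    return rng.choices(answers, weights=weights, k=1)[0]

def stability(prompt: str, temperature: float, runs: int = 20) -> float:
    """Share of runs that return the most common answer (1.0 = fully stable)."""
    outputs = [generate(prompt, temperature) for _ in range(runs)]
    top_count = Counter(outputs).most_common(1)[0][1]
    return top_count / runs

prompt = "What is the capital of France?"
for t in (0.0, 0.7, 1.5):  # low-temperature baseline vs. exploratory settings
    print(f"T={t}: stability={stability(prompt, t):.2f}")
```

Holding the temperature fixed across a prompt set, as in the third use case, is what makes stability numbers like these comparable between prompts and across monitoring runs.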