Temperature

Temperature is a sampling parameter that shapes the probability distribution the model's language generation function produces over candidate next tokens.

Temperature rescales the probabilities of the generated tokens: the model's raw scores (logits) are divided by the temperature before being normalized into probabilities. A lower temperature sharpens the distribution, so the model makes conservative predictions and the generated sequence sticks to the "safe" or most common tokens. A higher temperature flattens the distribution, making the model more creative: it generates less common tokens and more unusual combinations of words.
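This rescaling can be sketched in a few lines of Python. The logits below are toy values, not from a real model; the point is only to show how dividing by the temperature before the softmax sharpens or flattens the resulting distribution:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then normalize with a softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for four candidate tokens (hypothetical values).
logits = [4.0, 2.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, 0.1)  # sharp: mass concentrates on the top token
hot = softmax_with_temperature(logits, 1.5)   # flatter: rarer tokens gain probability

print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

At temperature 0.1 nearly all the probability mass lands on the top-scoring token, while at 1.5 the lower-scoring tokens become plausible choices, which is exactly the conservative-versus-creative trade-off described above.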

In the OpenAI API, you can adjust the temperature parameter to control the degree of "creativity" of the generated output. A low value (e.g., 0.1) yields very conservative, near-deterministic predictions, while a higher value (e.g., 1.0) produces more varied and creative output.
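In a request, temperature is just a top-level parameter alongside the model and messages. A minimal sketch (the model name and prompt are placeholders; the actual call is commented out because it needs an API key):

```python
# Request payload for a chat completion; temperature sits at the top level.
payload = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Suggest a name for a coffee shop."}
    ],
    "temperature": 0.1,  # conservative: output varies little between runs
}

# With the official Python SDK this would be sent as (requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**payload)
# print(response.choices[0].message.content)

print(payload["temperature"])
```

Re-sending the same payload with `"temperature": 1.0` would make the suggested names noticeably more varied from run to run.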

A temperature of around 0.7 is a good starting point: it keeps the output coherent while still allowing some variety.
