The Frequency Penalty is a parameter used in LLMs to control the likelihood of word (token) repetition across the generated text. It discourages the model from using the same words or phrases repeatedly, promoting more diverse and varied output.
The value adjusts the model's behavior, reducing or increasing the repetition of words (or tokens) based on how frequently they have already appeared in the generated text.
This parameter ranges from -2.0 to 2.0, with positive values decreasing the likelihood of repeating the same words and negative values increasing it.
Low Penalty:
"The big dog saw the big cat and made a big noise."
High Penalty:
"The large canine spotted the hefty feline and created a thunderous racket."
With a Frequency Penalty applied, the model generates more varied and engaging text, enhancing the overall quality and readability of the content.
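In practice, most chat-completion APIs expose this setting directly. The sketch below assumes the OpenAI Python SDK; the model name and prompt are placeholders, and other providers offer a similarly named parameter.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask for a short paragraph while discouraging repeated tokens.
# frequency_penalty accepts values from -2.0 to 2.0; positive values
# penalize tokens in proportion to how often they have already appeared.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use any chat model you have access to
    messages=[{"role": "user", "content": "Describe a dog meeting a cat."}],
    frequency_penalty=1.0,  # positive value -> less word repetition
)

print(response.choices[0].message.content)
```

Setting the value to 0 leaves the model's default behavior unchanged; pushing it toward 2.0 increasingly forces synonyms and rephrasing, which can hurt coherence if overdone.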
Presence Penalty vs. Frequency Penalty:
These penalties are crucial for enhancing the quality of generated text and preventing redundancy or incoherence. By limiting excessive repetition of tokens, they encourage greater lexical variety and a more natural, diverse text structure.
The presence penalty applies a one-time penalty to any token that has already appeared in the generated text, regardless of how often, while the frequency penalty grows with the number of times a token has appeared. Both measures work together to improve the quality and consistency of the generated text.
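One way to see the difference is how each penalty adjusts a token's logit before the next token is sampled. The sketch below is illustrative, assuming the common scheme in which the frequency penalty scales with a token's count so far and the presence penalty is a flat deduction once the token has appeared at all; the function and variable names are hypothetical.

```python
def adjusted_logit(logit: float, count: int,
                   frequency_penalty: float = 0.0,
                   presence_penalty: float = 0.0) -> float:
    """Illustrative logit adjustment before sampling the next token.

    count is how many times this token already appears in the output so far.
    The frequency penalty grows with that count; the presence penalty is a
    one-time deduction applied as soon as the token has appeared at all.
    """
    return (logit
            - count * frequency_penalty
            - (1.0 if count > 0 else 0.0) * presence_penalty)

# A token that has already appeared 3 times is penalized three times as
# hard by the frequency penalty, but only once by the presence penalty.
print(adjusted_logit(10.0, count=3, frequency_penalty=0.5))  # 8.5
print(adjusted_logit(10.0, count=3, presence_penalty=0.5))   # 9.5
```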