Inspiration: I wanted to learn more about LLM outputs, and the 1:1 mapping between a model's temperature and a warming globe felt like a salient analogy for things melting down.
Description: I repeatedly asked an LLM to complete the sentence “Climate change is _____ at +[0 - 2.5]°C”, and each time I increased the LLM's response temperature (which increases randomness) from 0 to 2.5.
This maps to global warming above pre-industrial levels, from 0°C to 2.5°C, a temperature we're projected to hit by the end of the century. As of 2024, we were around 1.55°C above those levels.
Method: I wanted to play with LLM outputs, so I ended up using ollama with llama3 locally to have unfettered access to parameters like temperature, top_k, and top_p. Temperature alone didn't drastically vary outputs for such a simple prompt, so I also increased top_k from 40 to 10,000 to capture unusual words, and increased top_p from 0.9 to 0.99 to make sure those unusual responses had a chance of being returned.
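For anyone who wants to try something similar, here's a minimal sketch against a local Ollama server. It assumes llama3 is already pulled, the `requests` package is installed, and that the three parameters are stepped together linearly across the 0 to 2.5 range; the exact prompt wording and stepping schedule here are illustrative, not necessarily what I used.

```python
import requests

PROMPT = 'Complete the sentence with a single word: "Climate change is _____ at +{temp}°C"'

def complete(temp_c: float) -> str:
    """Ask llama3 for a completion, scaling sampling params with the 'warming' temperature."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",
            "prompt": PROMPT.format(temp=temp_c),
            "stream": False,
            "options": {
                "temperature": temp_c,                               # 0.0 -> 2.5
                "top_k": 40 + int(temp_c / 2.5 * (10_000 - 40)),     # 40 -> 10,000
                "top_p": 0.9 + temp_c / 2.5 * 0.09,                  # 0.9 -> 0.99
            },
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

if __name__ == "__main__":
    # Step through the same 0–2.5°C range as the piece.
    for step in range(0, 26, 5):
        t = step / 10
        print(f"+{t}°C → {complete(t)}")
```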
I found the simple one-word fill-in-the-blank outputs easy to compare as the model responses melt down under increasing temperatures. I'm interested in the odd long-tail outputs of these models; the fabricated portmanteau “grimmanent” (grim + immanent) was fascinating, for instance.
It's also a reminder that we, both end users and model builders, barely scratch the surface of possible outputs. There is an art to what LLMs can present to us, and no guarantee that a model is responding optimally for a given prompt.
Sources