Code and samples from the paper ["Language Models are Unsupervised Multitask Learners"](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf).
For now, we have only released a smaller (117M parameter) version of GPT-2.
See more details in our [blog post](https://blog.openai.com/better-language-models/).
While we have not yet released the full GPT-2 model, you can see some unconditional samples from it (generated with the default settings of temperature 1 and no truncation) in `gpt2-samples.txt`.
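For readers unfamiliar with the sampling settings mentioned above, the following is a minimal, hypothetical sketch of temperature sampling over a model's output logits. It is not code from this repository: the function name and the pure-Python softmax are illustrative only. At temperature 1 the distribution is left unchanged; lower temperatures sharpen it toward the most likely token.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample a token index from raw logits after temperature scaling.

    temperature=1.0 leaves the model's distribution unchanged (the
    default used for the released samples); values below 1 make the
    distribution more peaked, values above 1 flatten it.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exp for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the categorical distribution via inverse-CDF sampling.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point rounding
```

"No truncation" means every token in the vocabulary keeps its probability mass, rather than restricting sampling to the top-k most likely tokens.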
## Future work
We may release code for evaluating the models on various benchmarks.
We are still considering releasing the larger models.