GPT-3: Language Models are Few-Shot Learners (GitHub)
GPT-3 achieves strong performance on many NLP datasets, including translation, question answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic.
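The few-shot setup described above works by placing a handful of in-context demonstrations before the query in a single prompt. The sketch below shows one plausible prompt layout for the 3-digit arithmetic task; the `Q:`/`A:` format and the `build_few_shot_prompt` helper are illustrative assumptions, not the paper's verbatim template.

```python
# Sketch of a few-shot prompt for 3-digit arithmetic.
# The Q:/A: format is an assumed, illustrative layout, not the
# exact template used in the GPT-3 paper.

def build_few_shot_prompt(examples, query):
    """Join (question, answer) demonstrations, then append the query."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in examples]
    blocks.append(f"Q: {query}\nA:")
    return "\n\n".join(blocks)

demos = [
    ("What is 123 + 456?", "579"),
    ("What is 700 - 250?", "450"),
]
prompt = build_few_shot_prompt(demos, "What is 312 + 488?")
print(prompt)
```

The model is then asked to continue the prompt after the final `A:`, so its completion serves as the answer with no gradient updates involved.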
GitHub - openai/gpt-2: Code for the paper "Language Models are Unsupervised Multitask Learners"
Code and models from the paper "Language Models are Unsupervised Multitask Learners". You can read about GPT-2 and its staged release in our original blog post, 6-month follow-up post, and final post. We have also released a dataset for researchers to study their behaviors.