2020: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks Introduction Large pre-trained language models like BERT and GPT had shown that they could store factual knowledge in their parameters and achieve state-of-the-art results on many NLP tasks. However, their...
2022: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models Introduction In January 2022, researchers at Google Brain led by Jason Wei published a paper titled "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" in the proceedings of NeurIP...
2021: Zero-Shot Text-to-Image Generation Introduction In February 2021, OpenAI published "Zero-Shot Text-to-Image Generation," introducing DALL-E, a groundbreaking neural network that creates images directly from textual descriptions. Unlike...
2020: Language Models are Few-Shot Learners Introduction In 2020, researchers at OpenAI published "Language Models are Few-Shot Learners," introducing GPT-3, a massive 175-billion-parameter language model that fundamentally changed how we think...