### Large-Scale Pre-Training for Goal-Directed Dialog (GODEL)
GODEL is a large-scale pre-trained model for goal-directed dialogs. It is parameterized with a Transformer-based encoder-decoder model and trained for response generation grounded in external text, which allows more effective fine-tuning on dialog tasks that require conditioning the response on information that is external to the current conversation (e.g., a retrieved document). The pre-trained model can be efficiently fine-tuned and adapted to a new dialog task with a handful of task-specific dialogs. The v1.1 model is trained on 551M multi-turn dialogs from Reddit discussion threads and 5M instruction- and knowledge-grounded dialogs.
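The released checkpoints can be used directly with the Hugging Face `transformers` library. The sketch below is a minimal example, assuming the v1.1 checkpoint is published on the Hugging Face Hub as `microsoft/GODEL-v1_1-large-seq2seq` and that queries use `[CONTEXT]`/`[KNOWLEDGE]` markers with dialog turns joined by ` EOS `; see the project webpage for the authoritative prompt format.

```python
# Minimal inference sketch (assumed checkpoint name and prompt format).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/GODEL-v1_1-large-seq2seq")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/GODEL-v1_1-large-seq2seq")

def generate(instruction, knowledge, dialog):
    """Generate a response from an instruction, optional grounding text,
    and a list of dialog turns (oldest turn first)."""
    if knowledge:
        knowledge = "[KNOWLEDGE] " + knowledge
    context = " EOS ".join(dialog)
    query = f"{instruction} [CONTEXT] {context} {knowledge}"
    input_ids = tokenizer(query, return_tensors="pt").input_ids
    outputs = model.generate(
        input_ids, max_length=128, min_length=8, top_p=0.9, do_sample=True
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```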
##### Multi-turn generation examples from an interactive environment:
Chitchat example:
> Instruction: given a dialog context, you need to respond empathically. <br>
> User: Does money buy happiness? <br>
> Agent: It is a question. Money buys you a lot of things, but not enough to buy happiness. <br>
> User: What is the best way to buy happiness? <br>
> Agent: Happiness is bought through your experience and not money. <br>
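With the sketch above, this chitchat exchange corresponds to a call with an empty knowledge string; since decoding is sampled, the output will vary:

```python
# Chitchat: no external knowledge is supplied.
instruction = "Instruction: given a dialog context, you need to respond empathically."
dialog = [
    "Does money buy happiness?",
    "It is a question. Money buys you a lot of things, but not enough to buy happiness.",
    "What is the best way to buy happiness?",
]
print(generate(instruction, "", dialog))
```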
Grounded response generation example:
> Instruction: given a dialog context and related knowledge, you need to respond safely based on the knowledge. <br>
> Knowledge: The best Stardew Valley mods PCGamesN_0 / About SMAPI <br>
> User: My favorite game is stardew valley. stardew valley is very fun. <br>
> Agent: I love Stardew Valley mods, like PCGamesN_0 / About SMAPI. <br>
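The grounded example corresponds to a call with a non-empty knowledge string, which the sketch above prefixes with the `[KNOWLEDGE]` marker:

```python
# Grounded generation: the retrieved snippet is passed as knowledge.
instruction = ("Instruction: given a dialog context and related knowledge, "
               "you need to respond safely based on the knowledge.")
knowledge = "The best Stardew Valley mods PCGamesN_0 / About SMAPI"
dialog = ["My favorite game is stardew valley. stardew valley is very fun."]
print(generate(instruction, knowledge, dialog))
```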
Please see the [project webpage](https://aka.ms/GODEL) for information about preprocessing, training, and full details of GODEL.
If you use this code and data in your research, please cite our arXiv paper:
```
@misc{peng2022godel,
  author = {Peng, Baolin and Galley, Michel and He, Pengcheng and Brockett, Chris and Liden, Lars and Nouri, Elnaz and Yu, Zhou and Dolan, Bill and Gao, Jianfeng},
  title = {GODEL: Large-Scale Pre-training for Goal-Directed Dialog},
  year = {2022},
  eprint = {2206.11309},
  archivePrefix = {arXiv},
}
```