## Summary

## Notes

## Highlights

- 13 billion parameters, about 4% the size of GPT-4
- Made by Microsoft
- As good as or better than the larger models at certain tasks
- Trained by asking ChatGPT/GPT-4 to explain how it thinks (see the data-collection sketch below)
	- "Explain your thought process"
	- "Explain like I'm five"
	- "Justify your reasoning"
- About 95% of ChatGPT quality, 85% of GPT-4 quality
- Training smaller "student" models on explanations from bigger "teacher" models (GPT-4) improves them drastically

### Human-centric tests

#### Zero-shot prompting

![](https://i.imgur.com/A45XvES.png)

- GPT-4 is very close to human competence
- Orca is still a bit behind

### AI tests (BIG-Bench Hard)

#### Chain-of-thought prompting

![](https://i.imgur.com/qhMUjiZ.png)

- The 23 hardest BIG-Bench tasks for models to do (see the prompting sketch below)
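
A minimal sketch of what the explanation-based data collection described in the highlights might look like in Python. The `query_teacher` helper and the sample queries are illustrative assumptions, not details from the Orca paper; only the system-instruction wording comes from the notes above.

```python
"""Sketch: collecting explanation-style training data from a teacher model."""

# System instructions that push the teacher to expose its reasoning,
# mirroring the prompts quoted in the notes above.
SYSTEM_INSTRUCTIONS = [
    "Explain your thought process.",
    "Explain like I'm five.",
    "Justify your reasoning.",
]

# Illustrative user queries (assumption, not from the paper).
SAMPLE_QUERIES = [
    "If a train leaves at 3 pm travelling 60 km/h, how far has it gone by 5 pm?",
    "Why does ice float on water?",
]


def query_teacher(system_instruction: str, user_query: str) -> str:
    """Hypothetical stand-in for a call to the teacher model (e.g. GPT-4)."""
    raise NotImplementedError("Replace with a real call to the teacher model.")


def build_training_examples() -> list[dict]:
    """Pair each system instruction with each query and record the teacher's
    explanation as the target the smaller student model will imitate."""
    examples = []
    for system_instruction in SYSTEM_INSTRUCTIONS:
        for user_query in SAMPLE_QUERIES:
            explanation = query_teacher(system_instruction, user_query)
            examples.append(
                {
                    "system": system_instruction,
                    "query": user_query,
                    "target": explanation,  # the student learns to reproduce this
                }
            )
    return examples
```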
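
The two benchmark settings above differ mainly in how the prompt is phrased. Below is a minimal Python sketch contrasting a zero-shot prompt with a chain-of-thought prompt; the example question and the "think step by step" trigger wording are assumptions for illustration, not taken from the benchmarks.

```python
# Contrast the two prompting styles used in the tests above.
QUESTION = "A farmer has 17 sheep. All but 9 run away. How many are left?"


def zero_shot_prompt(question: str) -> str:
    """Zero-shot: the model gets only the task, with no examples or
    reasoning instructions, and must answer directly."""
    return f"Q: {question}\nA:"


def chain_of_thought_prompt(question: str) -> str:
    """Chain of thought: the prompt explicitly asks the model to lay out
    intermediate reasoning steps before the final answer."""
    return (
        f"Q: {question}\n"
        "A: Let's think step by step, then state the final answer."
    )


if __name__ == "__main__":
    print(zero_shot_prompt(QUESTION))
    print()
    print(chain_of_thought_prompt(QUESTION))
```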