## Llama
- Llama 3 seems to be on par with GPT-4 in some respects
- 8B model recommended for a 16GB M2
- Use a quantized GGUF version via llama.cpp, or Ollama
- https://www.reddit.com/r/LocalLLaMA/comments/1cal17l/llm_comparisontest_llama_3_instruct_70b_8b/
- 70B is still the best, but 8B is not bad
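As a CLI alternative to the LM Studio route below, Ollama can pull and run a quantized Llama 3 build directly. A minimal sketch, assuming Ollama is installed and the `llama3:8b` tag is available (tag names may vary by Ollama version):

```shell
# Pull a quantized Llama 3 8B build (Ollama manages the GGUF file for you)
ollama pull llama3:8b

# Run it interactively, or pass a one-shot prompt
ollama run llama3:8b "Explain GGUF quantization in one sentence."
```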
### Setup
- Download [[LM Studio]]
- Search for `llama 3` in the model search
- Download the one that has the green badge for GPU offloading
- Wait for the download to finish, then load the model in the chat tab
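Once a model is loaded, LM Studio can also expose an OpenAI-compatible local server (default port 1234), so the model is scriptable instead of GUI-only. A hedged sketch; the port and the `model` identifier depend on your local setup:

```shell
# Query LM Studio's local server (start it from the Server tab first).
# "llama-3-8b-instruct" is a placeholder; use the model id LM Studio shows.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3-8b-instruct",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```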
## References
- [Run Llama 3 locally](https://www.youtube.com/watch?v=KGzF60KERZ4)