A general-purpose model available in sizes from 3 billion to 70 billion parameters, suitable for entry-level hardware.
250.9K Pulls Updated 14 months ago
f184c0860491 · 39GB
Orca Mini is a family of Llama and Llama 2 models trained on Orca-style datasets created using the approaches defined in the paper Orca: Progressive Learning from Complex Explanation Traces of GPT-4. Two variations are available: the original Orca Mini, based on Llama, in 3, 7, and 13 billion parameter sizes, and v3, based on Llama 2, in 7, 13, and 70 billion parameter sizes.
Usage
CLI
Open the terminal and run `ollama run orca-mini`
API
Example:
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "orca-mini",
  "prompt": "Why is the sky blue?"
}'
Memory requirements
- 7b models generally require at least 8GB of RAM
- 13b models generally require at least 16GB of RAM
- 70b models generally require at least 64GB of RAM
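These figures follow a rough rule of thumb (our assumption, not an official Ollama formula): a 4-bit-quantized model's weights occupy about half a byte per parameter, and comfortable operation wants roughly twice that in RAM for the KV cache, activations, and the OS. A small sketch:

```python
def estimated_ram_gb(params_billion: float,
                     bits_per_weight: float = 4.0,
                     headroom: float = 2.0) -> float:
    """Back-of-the-envelope RAM estimate for a quantized model.

    Weights take params * bits / 8 bytes; the `headroom` factor
    (assumed here) covers KV cache, activations, and the OS.
    """
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb * headroom

# 7b -> ~7 GB, 13b -> ~13 GB, 70b -> ~70 GB, which lines up with
# the "at least 8 / 16 / 64 GB" guidance above.
```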
Reference
Orca Mini (original):
- 3b parameters original source: Pankaj Mathur
- 7b parameters original source: Pankaj Mathur
- 13b parameters original source: Pankaj Mathur

Orca Mini v3 source on Ollama:
- 13b parameters original source: Pankaj Mathur
- 70b parameters source: Pankaj Mathur
Orca: Progressive Learning from Complex Explanation Traces of GPT-4