Code generation model based on Code Llama.
34b · 76K Pulls · Updated 10 months ago
model: bcfdda4b8adc · 24GB · arch llama · parameters 33.7B · quantization Q5_K_M
license: LLAMA 2 COMMUNITY LICENSE AGREEMENT (Llama 2 Version Release Date: July 18, 2023) · 7.0kB
Readme
Phind CodeLlama is a code generation model based on CodeLlama 34B, fine-tuned for instruct use cases. There are two versions of the model: v1 and v2. v1 is based on CodeLlama 34B and CodeLlama-Python 34B. v2 is an iteration on v1, trained on an additional 1.5B tokens of high-quality programming-related data.
Usage
CLI
Open the terminal and run:

ollama run phind-codellama
API
Example
curl -X POST http://localhost:11434/api/generate -d '{
"model": "phind-codellama",
"prompt":"Implement a linked list in C++"
}'
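By default, the generate endpoint streams its reply as newline-delimited JSON objects, each carrying a fragment of the output in a "response" field, with the final object marked "done": true. The sketch below is one way to collect those fragments from a script; it assumes the Python requests library and a local Ollama server on the default port 11434 (set "stream": false in the request body if you prefer a single JSON reply).

import json
import requests

def generate(prompt: str, model: str = "phind-codellama") -> str:
    # Minimal sketch: POST to a local Ollama server and read the streamed reply.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt},
        stream=True,
    )
    resp.raise_for_status()

    pieces = []
    # Each non-empty line is a JSON object with a "response" fragment;
    # the last object has "done": true.
    for line in resp.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        pieces.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(pieces)

if __name__ == "__main__":
    print(generate("Implement a linked list in C++"))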
Memory requirements
- 34b models generally require at least 32GB of RAM
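The 32GB figure lines up with a rough back-of-envelope estimate: a Q5_K_M quantization stores weights at roughly 5-6 bits each, which for 33.7B parameters is about 24GB of weights (matching the file size above), plus several GB of headroom for the KV cache and runtime overhead. The sketch below shows that arithmetic; the bits-per-weight value is an approximation, not an exact specification.

# Rough memory estimate; ~5.7 bits/weight for Q5_K_M is an approximation.
params = 33.7e9          # parameter count reported for this model
bits_per_weight = 5.7    # approximate average for Q5_K_M quantization

weights_gb = params * bits_per_weight / 8 / 1e9
print(f"quantized weights: ~{weights_gb:.0f} GB")  # ~24 GB, matching the file size

# Several additional GB go to the KV cache and runtime overhead,
# which is why at least 32GB of RAM is recommended for 34b models.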