Code generation model based on Code Llama.
34b · 76.1K Pulls · Updated 11 months ago
eb9f2c7a1c66 · 24GB

model (24GB)
arch llama · parameters 33.7B · quantization Q5_K_M

system (46B)
You are an intelligent programming assistant.

params (69B)
{"stop":["### System Prompt:","### User Message:","### Assistant:"]}

template (108B)
{{- if .System }}
### System Prompt
{{ .System }}
{{- end }}
### User Message
{{ .Prompt }}
### As

license (7.0kB)
LLAMA 2 COMMUNITY LICENSE AGREEMENT
Llama 2 Version Release Date: July 18, 2023
"Agreement" means
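The Go template above wraps the conversation in `### System Prompt` / `### User Message` headers before handing it to the model. A minimal Python sketch of that rendering, for illustration only (the template text is truncated here after `### As`; the closing `### Assistant` header is inferred from the `"### Assistant:"` stop token in params, and exact whitespace may differ from the real template engine):

```python
def render_prompt(user_prompt, system=None):
    """Approximate the Go prompt template: optional system block, then user message."""
    parts = []
    if system:
        parts.append("### System Prompt\n" + system + "\n")
    parts.append("### User Message\n" + user_prompt + "\n")
    parts.append("### Assistant")  # inferred from the "### Assistant:" stop token
    return "\n".join(parts)

print(render_prompt("Implement a linked list in C++",
                    system="You are an intelligent programming assistant."))
```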
Readme
Phind CodeLlama is a code generation model based on CodeLlama 34B fine-tuned for instruct use cases. There are two versions of the model: v1 and v2. v1 is based on CodeLlama 34B and CodeLlama-Python 34B. v2 is an iteration on v1, trained on an additional 1.5B tokens of high-quality programming-related data.
Usage
CLI
Open the terminal and run `ollama run phind-codellama`.
API
Example
curl -X POST http://localhost:11434/api/generate -d '{
"model": "phind-codellama",
"prompt":"Implement a linked list in C++"
}'
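By default, the generate endpoint streams its reply as newline-delimited JSON objects, each carrying a `"response"` fragment. A small sketch of stitching those fragments back together; the sample payload below is illustrative, not real model output:

```python
import json

def join_stream(ndjson_body):
    """Concatenate the "response" fragments from a streamed generate reply."""
    text = []
    for line in ndjson_body.strip().splitlines():
        chunk = json.loads(line)
        text.append(chunk.get("response", ""))
        if chunk.get("done"):  # final object in the stream is flagged done
            break
    return "".join(text)

# Illustrative stream, shaped like the API's newline-delimited replies
sample = (
    '{"model":"phind-codellama","response":"struct Node","done":false}\n'
    '{"model":"phind-codellama","response":" { int data; };","done":true}\n'
)
print(join_stream(sample))  # struct Node { int data; };
```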
Memory requirements
- 34b models generally require at least 32GB of RAM
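The 24GB file size is consistent with the quantization listed above: dividing file size by parameter count gives the effective bits per weight. A back-of-envelope check (treating GB as 10^9 bytes; a rough estimate, not an official figure):

```python
file_bytes = 24e9   # quantized model file size (24GB, decimal) from the metadata
params = 33.7e9     # parameter count from the model metadata
bits_per_weight = file_bytes * 8 / params
print(round(bits_per_weight, 1))  # ~5.7 bits/weight, in line with Q5_K_M
```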