Code generation model based on Code Llama.

Phind CodeLlama is a code generation model based on CodeLlama 34B, fine-tuned for instruct use cases. There are two versions of the model: v1 and v2. v1 is based on CodeLlama 34B and CodeLlama-Python 34B. v2 is an iteration on v1, trained on an additional 1.5B tokens of high-quality programming-related data.

Usage

CLI

Open the terminal and run:

ollama run phind-codellama
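This starts an interactive chat session. If you just want a one-off completion, the ollama CLI also accepts a prompt as an argument, for example:

ollama run phind-codellama "Implement a linked list in C++"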

API

Example

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "phind-codellama",
  "prompt": "Implement a linked list in C++"
}'
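By default, the /api/generate endpoint streams the completion back as a series of newline-delimited JSON objects. If you would rather receive the whole completion in a single JSON response, a minimal variation of the request above sets "stream" to false:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "phind-codellama",
  "prompt": "Implement a linked list in C++",
  "stream": false
}'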

Memory requirements

  • 34b models generally require at least 32GB of RAM

References

Beating GPT-4 on HumanEval with a Fine-Tuned CodeLlama-34B

HuggingFace