Great code generation model based on Llama2.
35K Pulls Updated 14 months ago
e9a4c68b3484 · 6.9GB
Readme
CodeUp was released by DeepSE. It is based on Meta's Llama 2 and fine-tuned for code generation, allowing it to write better code in a number of languages.
Get started with CodeUp
The examples below use the 13b-parameter CodeUp model, a code generation model.
API
- Start the Ollama server: `ollama serve`
- Run the model:

```shell
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "codeup",
  "prompt": "Write C++ code to find the longest common substring of two strings."
}'
```
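The generate endpoint streams its reply as newline-delimited JSON objects, each carrying a `response` text fragment and a `done` flag. A minimal Python sketch of reassembling those fragments into one completion; the sample lines below are illustrative, not real model output:

```python
import json

def collect_stream(lines):
    """Concatenate the "response" fragments from Ollama's streaming
    newline-delimited JSON output into a single completion string."""
    out = []
    for line in lines:
        chunk = json.loads(line)
        out.append(chunk.get("response", ""))
        if chunk.get("done"):  # final object signals end of stream
            break
    return "".join(out)

# Illustrative fragments shaped like the generate endpoint's stream:
sample = [
    '{"model":"codeup","response":"#include","done":false}',
    '{"model":"codeup","response":" <string>","done":false}',
    '{"model":"codeup","response":"","done":true}',
]
print(collect_stream(sample))  # → #include <string>
```

In a real client you would iterate over the HTTP response body line by line instead of a prepared list.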
CLI
- Install Ollama
- Open the terminal and run:

```shell
ollama run codeup
```
Note: The `ollama run` command performs an `ollama pull` if the model is not already downloaded. To download the model without running it, use `ollama pull codeup`.
Memory requirements
- 13b models generally require at least 16GB of RAM
If you run into issues with higher quantization levels, try the q4 model or shut down other programs that are using a lot of memory.
Model variants
By default, Ollama uses 4-bit quantization. To try other quantization levels, use the other tags. The number after the q is the number of bits used for quantization (e.g. q4 means 4-bit quantization). The higher the number, the more accurate the model is, but the slower it runs and the more memory it requires.
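As a rough rule of thumb, the weight file size scales with the bit width: parameters × bits ÷ 8 bytes. A small sketch of that estimate; it ignores the KV cache and runtime overhead, so actual RAM usage is higher, which is why a ~6.5GB q4 model still calls for 16GB of RAM:

```python
def approx_weight_size_gb(params_billion, bits):
    """Rough weight footprint in GB: parameters x bits / 8.
    Excludes KV cache and runtime overhead, so real usage is higher."""
    return params_billion * 1e9 * bits / 8 / 1e9

print(approx_weight_size_gb(13, 4))  # → 6.5  (close to the 6.9GB q4 download)
print(approx_weight_size_gb(13, 8))  # → 13.0 (a q8 variant roughly doubles it)
```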
| Aliases |
|---|
| latest, 13b, 13b-llama2, 13b-llama2-chat, 13b-llama2-chat-q4_0 |
Model source
CodeUp source on Ollama
13b parameters source: DeepSE