A strong, economical, and efficient Mixture-of-Experts language model.
Tags: 16b, 236b
73.2K Pulls · Updated 6 months ago
d36b0e4f46c1 · 7.5GB
model (7.5GB)
arch deepseek2 · parameters 15.7B · quantization Q3_K_S
params (32B)
{
  "stop": [
    "User:",
    "Assistant:"
  ]
}
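These stop sequences can also be supplied per request rather than relying on the baked-in params. A minimal sketch in Python, assuming Ollama is serving on its default local port (11434) and the requests package is installed; the prompt text is only an example:

# Pass the same stop sequences explicitly through the local Ollama HTTP API.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-v2:16b",
        "prompt": "Explain Mixture-of-Experts routing in one sentence.",
        "stream": False,
        "options": {"stop": ["User:", "Assistant:"]},  # mirrors the params layer above
    },
)
print(response.json()["response"])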
template (111B)
{{ if .System }}{{ .System }}
{{ end }}{{ if .Prompt }}User: {{ .Prompt }}
{{ end }}Assistant:{{ .Response }}
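The template determines the exact text the model sees on each turn. A rough illustration in Python, with plain string building standing in for Ollama's Go templating and single newlines as shown above:

# Illustrative only: mimic the prompt layout produced by the template above.
def render_prompt(prompt: str, system: str = "") -> str:
    text = ""
    if system:
        text += f"{system}\n"        # {{ if .System }}{{ .System }}
    if prompt:
        text += f"User: {prompt}\n"  # {{ if .Prompt }}User: {{ .Prompt }}
    text += "Assistant:"             # the model completes from here
    return text

print(render_prompt("What is DeepSeek-V2?", system="Answer concisely."))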
license (14kB)
DEEPSEEK LICENSE AGREEMENT
Version 1.0, 23 October 2023
Copyright (c) 2023 DeepSeek
Section I: PREAMBLE …
Readme
Note: this model requires Ollama 0.1.40.
DeepSeek-V2 is a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference.
Note: this model is bilingual in English and Chinese.
The model comes in two sizes:
- 16B Lite: ollama run deepseek-v2:16b
- 236B: ollama run deepseek-v2:236b
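Either tag can also be called programmatically once it has been pulled. A minimal chat sketch against the local Ollama HTTP API, assuming the 16B Lite tag is present and Ollama is serving on its default port; the question is only an example:

# Send a single chat turn to the locally served model and print the reply.
import requests

reply = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-v2:16b",
        "stream": False,
        "messages": [
            {"role": "user", "content": "用一句话介绍一下你自己。"}  # Chinese input works as well
        ],
    },
).json()
print(reply["message"]["content"])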