OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.
Tags: 7b · 13b
4,316 Pulls Updated 6 days ago
c5cd17f69ca0 · 27GB
model
  arch olmo2 · parameters 13.7B · quantization F16 · 27GB
system (88B)
  You are OLMo 2, a helpful and harmless AI Assistant built by the Allen Institute for AI.
template (218B, truncated)
  {{- range $i, $_ := .Messages }}
  {{- $last := eq (len (slice $.Messages $i)) 1 -}}
  <|{{ .Role }}|>
  {
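The template fragment above is Go template syntax: it iterates over the chat messages, flags the last one, and wraps each role in `<|role|>` markers. As a rough illustration only, here is a minimal Python sketch of how such a template renders a conversation into a prompt string. The exact formatting beyond the visible fragment (newline placement, the trailing `<|assistant|>` generation marker) is an assumption, since the template shown here is truncated.

```python
def render(messages):
    # Mimics the visible part of the Go template: each message becomes
    # "<|role|>\n<content>". The trailing "<|assistant|>" generation
    # prompt is an assumption not shown in the truncated template.
    parts = [f"<|{m['role']}|>\n{m['content']}" for m in messages]
    parts.append("<|assistant|>")
    return "\n".join(parts)

prompt = render([
    {"role": "system", "content": "You are OLMo 2, a helpful and harmless "
                                  "AI Assistant built by the Allen Institute for AI."},
    {"role": "user", "content": "Summarize the OLMo 2 release in one sentence."},
])
print(prompt)
```

Ollama applies the real template server-side, so you normally never build this string yourself; it is shown only to make the fragment above concrete.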
license (11kB)
  Apache License, Version 2.0, January 2004
Readme
Note: this model requires Ollama 0.5.5
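To customize the published model locally, for example to swap in your own system prompt, you can layer a Modelfile on top of it. This is a minimal sketch assuming the `olmo2:13b` tag from the listing above; the temperature value is an arbitrary example.

```
# Hypothetical Modelfile building on the published 13B F16 model.
FROM olmo2:13b
SYSTEM """You are OLMo 2, a helpful and harmless AI Assistant built by the Allen Institute for AI."""
PARAMETER temperature 0.7
```

Build and run it with `ollama create my-olmo2 -f Modelfile` followed by `ollama run my-olmo2`.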