OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.
Tags: 7b, 13b
4,297 Pulls Updated 6 days ago
4208d3b406db · 4.5GB

model (4.5GB): arch olmo2 · parameters 7.3B · quantization Q4_K_M
system (88B):
You are OLMo 2, a helpful and harmless AI Assistant built by the Allen Institute for AI.
template (218B):
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
<|{{ .Role }}|>
{
license (11kB): Apache License, Version 2.0, January 2004
Readme
Note: this model requires Ollama 0.5.5
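Once pulled, the model can be used from the command line (`ollama run olmo2`, or `ollama run olmo2:13b` for the larger variant) or queried over Ollama's local REST API. Below is a minimal sketch, assuming a default Ollama install listening on port 11434; the helper name `build_chat_request` is our own, not part of Ollama:

```python
import json

def build_chat_request(model, messages):
    """Build the JSON payload for Ollama's /api/chat endpoint."""
    return {
        "model": model,          # e.g. "olmo2" or "olmo2:13b"
        "messages": messages,    # list of {"role": ..., "content": ...}
        "stream": False,         # single JSON response instead of a stream
    }

payload = build_chat_request(
    "olmo2",
    [{"role": "user", "content": "Briefly, what is OLMo 2?"}],
)
print(json.dumps(payload, indent=2))

# To send it against a running Ollama (>= 0.5.5):
#   curl http://localhost:11434/api/chat -d "$(python this_script.py)"
```

The system prompt and chat template shown above are applied server-side by Ollama, so the request only needs the raw conversation messages.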