OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.
Available sizes: 7b, 13b
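To try either size locally with the Ollama CLI (assuming this model is published under the olmo2 name, as the page suggests):

ollama run olmo2:7b
ollama run olmo2:13b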
Template (803b5adc3448 · 218B) — the Go template used to render a conversation into OLMo 2's prompt format:
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
<|{{ .Role }}|>
{{ .Content }}{{ if not $last }}
{{ end }}
{{- if and (ne .Role "assistant") $last }}<|assistant|>
{{ end }}
{{- end }}
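To see what this template produces, here is a minimal sketch that executes it with Go's text/template package. The Message struct and the sample conversation are assumptions for illustration, not Ollama's actual types; only the Role and Content fields the template reads are modeled.

package main

import (
	"os"
	"text/template"
)

// Message carries the two fields the template reads. This is an
// assumption for illustration; Ollama's internal message type is richer.
type Message struct {
	Role    string
	Content string
}

// olmoTemplate is the template shown above, verbatim.
const olmoTemplate = `{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
<|{{ .Role }}|>
{{ .Content }}{{ if not $last }}
{{ end }}
{{- if and (ne .Role "assistant") $last }}<|assistant|>
{{ end }}
{{- end }}`

func main() {
	t := template.Must(template.New("olmo2").Parse(olmoTemplate))

	// A hypothetical three-turn conversation.
	data := struct{ Messages []Message }{
		Messages: []Message{
			{Role: "user", Content: "Hello"},
			{Role: "assistant", Content: "Hi! How can I help?"},
			{Role: "user", Content: "What is OLMo 2?"},
		},
	}

	// Render the prompt to stdout.
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}

Each turn becomes a <|role|> header followed by its content. Because the final message is from the user, the template appends a bare <|assistant|> header directly after it (with no intervening newline, since the `{{ if not $last }}` newline is skipped on the last message), cueing the model to generate its reply:

<|user|>
Hello
<|assistant|>
Hi! How can I help?
<|user|>
What is OLMo 2?<|assistant|>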