OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.
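A minimal usage sketch, assuming a local Ollama install and that this model is published under the `olmo2` name (the exact tag name is an assumption, not stated on this page):

```shell
# Pull and chat with the default build (7B); downloads the model on first run.
ollama run olmo2

# Select the 13B variant explicitly by tag.
ollama run olmo2:13b
```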
Parameter sizes: 7b • 13b

4,297 Pulls • 9 Tags • Updated 6 days ago
4208d3b406db • 4.5GB • 6 days ago
6c279ebc980f • 8.4GB • 6 days ago
c5cd17f69ca0 • 27GB • 6 days ago
54d0ec72e884 • 15GB • 6 days ago
fa483f2d5cc7 • 15GB • 6 days ago
e75d0b293717 • 7.8GB • 6 days ago