OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.

Sizes: 7b • 13b

375.3K Pulls • Updated 2 months ago

9 Tags
4208d3b406db • 4.5GB • 2 months ago
4208d3b406db • 4.5GB • 2 months ago
6c279ebc980f • 8.4GB • 2 months ago
c5cd17f69ca0 • 27GB • 2 months ago
6c279ebc980f • 8.4GB • 2 months ago
54d0ec72e884 • 15GB • 2 months ago
fa483f2d5cc7 • 15GB • 2 months ago
4208d3b406db • 4.5GB • 2 months ago
e75d0b293717 • 7.8GB • 2 months ago
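The size tags above are pullable directly. A minimal sketch of fetching and running the model with the Ollama CLI, assuming it is installed locally (the `olmo2` model name matches this page; the `7b` and `13b` tags come from the sizes listed above):

```shell
# Pull the 7B variant by its size tag (downloads the quantized weights).
ollama pull olmo2:7b

# Start an interactive chat session with the pulled model.
ollama run olmo2:7b

# The larger 13B variant works the same way, just with a bigger download.
ollama pull olmo2:13b
```

Omitting the tag (e.g. `ollama run olmo2`) resolves to the default tag for the model, which typically points at the same digest as one of the size tags listed above.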