A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.


Readme

Note: this model requires Ollama 0.5.5 or later.

DeepSeek-V3 achieves a significant breakthrough in inference speed over previous models. It tops the leaderboard among open-source models and rivals the most advanced closed-source models globally.
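Once the model has been pulled with Ollama 0.5.5 or later, it can be queried through Ollama's local REST API. Below is a minimal Python sketch; the `deepseek-v3:671b` tag and the example prompt are illustrative assumptions, so adjust them to the tag you actually pulled.

```python
# Minimal sketch: query a locally running Ollama server (default port 11434).
# The model tag "deepseek-v3:671b" and the prompt are assumptions for illustration.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-v3:671b",   # assumed tag; change to the tag you pulled
        "prompt": "Explain Mixture-of-Experts routing in one paragraph.",
        "stream": False,               # return one JSON object instead of a stream
    },
    timeout=600,
)
response.raise_for_status()
print(response.json()["response"])
```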

References

GitHub

Paper