EXAONE Deep, developed and released by LG AI Research, exhibits superior capabilities in various reasoning tasks, including math and coding benchmarks, and is available in sizes ranging from 2.4B to 32B parameters.
2.4b
7.8b
32b
5,495 Pulls · Updated yesterday
37d6b495c16c · 1.6GB

model · arch exaone · parameters 2.67B · quantization Q4_K_M · 1.6GB

params · 77B
{
  "repeat_penalty": 1,
  "stop": [
    "[|endofturn|]"
  ],
  "temperature": 0.6,
  …

template · 375B
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{ if eq .Role "s…

license · 14kB
EXAONE AI Model License Agreement 1.1 - NC
This License Agreement (“Agreement”) is entered into …
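
The params blob above carries the model's default generation settings (temperature 0.6, repeat_penalty 1, and the [|endofturn|] stop token used by the chat template). The following is a minimal sketch of reusing those defaults per request through the official ollama Python client; the exaone-deep:7.8b tag name and the prompt string are assumptions for illustration.

# Minimal sketch: chatting with EXAONE Deep via the Ollama Python client.
# Assumes `pip install ollama`, a running Ollama server, and that the
# assumed tag `exaone-deep:7.8b` has already been pulled.
import ollama

response = ollama.chat(
    model="exaone-deep:7.8b",
    messages=[{"role": "user", "content": "How many prime numbers are below 100?"}],
    options={
        "temperature": 0.6,          # default from the params blob above
        "repeat_penalty": 1,         # default from the params blob above
        "stop": ["[|endofturn|]"],   # end-of-turn token from the chat template
    },
)
print(response["message"]["content"])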
Readme
EXAONE Deep, developed and released by LG AI Research, exhibits superior capabilities in various reasoning tasks, including math and coding benchmarks, and is available in sizes ranging from 2.4B to 32B parameters.
Evaluation results show that:
- EXAONE Deep 2.4B outperforms other models of comparable size
- EXAONE Deep 7.8B outperforms not only open-weight models of comparable scale but also the proprietary reasoning model OpenAI o1-mini
- EXAONE Deep 32B demonstrates competitive performance against leading open-weight models.