A LLaVA model fine-tuned from Llama 3 Instruct, achieving better scores on several benchmarks.
vision · 8b
220.3K Pulls · Updated 6 months ago
44c161b1f465 · 5.5GB
model · arch llama · parameters 8.03B · quantization Q4_K_M · 4.9GB
projector · arch clip · parameters 312M · quantization F16 · 624MB
params · 124B
{"num_ctx":4096,"num_keep":4,"stop":["<|start_header_id|>","<|end_header_id|>", …
template · 254B
{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .P …
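The stop markers and context settings above follow the Llama 3 Instruct chat format. As a minimal sketch (not part of this page), the same defaults could be restated in a custom Ollama Modelfile when deriving a new model; only the stop sequences visible in the truncated params listing are repeated here:

```
# Hypothetical Modelfile deriving from the published llava-llama3 model.
FROM llava-llama3

# Context window and tokens kept on context overflow, as listed above.
PARAMETER num_ctx 4096
PARAMETER num_keep 4

# Stop sequences visible in the (truncated) params listing; the full list is longer.
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
```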
Readme
llava-llama3 is a LLaVA model fine-tuned from Llama 3 Instruct and CLIP-ViT-Large-patch14-336 with ShareGPT4V-PT and InternVL-SFT by XTuner.
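Since this is a vision model, prompts can include images. A minimal usage sketch, assuming the model has been pulled (`ollama pull llava-llama3`) and the `ollama` Python client is installed; the image path below is a placeholder:

```python
import ollama

# Send a text prompt plus a local image to llava-llama3.
# The "images" field accepts file paths or raw image bytes.
response = ollama.chat(
    model="llava-llama3",
    messages=[
        {
            "role": "user",
            "content": "Describe this image in one sentence.",
            "images": ["./example.png"],  # placeholder path
        }
    ],
)

print(response["message"]["content"])
```

The same request can be made against the local REST API (`POST /api/chat`), where images are passed as base64-encoded strings.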