Phi-4-mini brings significant enhancements in multilingual support, reasoning, and mathematics, and now also supports function calling.
Note: this model requires Ollama 0.5.13 or later.
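Because Phi-4-mini supports function calling, it can be used with Ollama's tool-calling chat API. Below is a minimal sketch, assuming a recent version of the `ollama` Python package and a hypothetical `get_current_weather` tool definition (not part of the model itself):

```python
import ollama

# Hypothetical tool definition for illustration; replace with your own function schema.
weather_tool = {
    'type': 'function',
    'function': {
        'name': 'get_current_weather',
        'description': 'Get the current weather for a given city',
        'parameters': {
            'type': 'object',
            'properties': {
                'city': {'type': 'string', 'description': 'Name of the city'},
            },
            'required': ['city'],
        },
    },
}

# Ask a question that should prompt the model to call the tool.
response = ollama.chat(
    model='phi4-mini',
    messages=[{'role': 'user', 'content': 'What is the weather in Toronto right now?'}],
    tools=[weather_tool],
)

# If the model chooses to call the tool, the structured call (name and arguments) appears here.
print(response.message.tool_calls)
```

If a tool call is returned, the application is expected to run the function itself and send the result back to the model in a follow-up message with `role: 'tool'` to get the final answer.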
Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites, with a focus on high-quality, reasoning-dense data. The model belongs to the Phi-4 model family and supports a 128K token context length. The model underwent an enhancement process incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures.
Primary use cases
The model is intended for broad multilingual commercial and research use. It is suited for general-purpose AI systems and applications that require:
- Memory/compute constrained environments
- Latency bound scenarios
- Strong reasoning (especially math and logic)

The model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.