Ollama is now available as an official Docker image

October 5, 2023

We are excited to share that Ollama is now available as an official Docker-sponsored open-source image, making it simpler to get up and running with large language models using Docker containers.

With Ollama, all your interactions with large language models happen locally without sending private data to third-party services.

On the Mac

Ollama handles running the model with GPU acceleration. It provides both a simple CLI and a REST API for interacting with models from your applications.

To get started, simply download and install Ollama.

On the Mac, please run Ollama as a standalone application outside of Docker containers, since Docker Desktop does not support GPU access on macOS.
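
Once Ollama is installed and running, you can chat with a model straight from the terminal (the model is downloaded on first use):

ollama run llama2

The same local server exposes the REST API on port 11434. A minimal sketch using the generate endpoint, with an illustrative prompt:

curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "Why is the sky blue?"}'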

On Linux

On Linux, Ollama can run inside Docker containers with GPU acceleration on Nvidia GPUs.

To get started using the Docker image, please use the commands below.

CPU only

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
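
Here the -v flag stores downloaded models in a named volume called ollama so they survive container restarts, and -p publishes the API on port 11434. As a quick sanity check that the server is up, you can list the models you have pulled:

curl http://localhost:11434/api/tags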

Nvidia GPU

  1. Install the Nvidia container toolkit (see the install sketch after these steps).
  2. Run Ollama inside a Docker container:
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
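
For step 1, the commands below are a minimal sketch of the toolkit installation on a Debian or Ubuntu system, following Nvidia's documented repository setup; consult the Nvidia container toolkit documentation for other distributions:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker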

Run a model

Now you can run a model like Llama 2 inside the container.

docker exec -it ollama ollama run llama2
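
You can also download models ahead of time without starting an interactive session; for example, assuming mistral is available in the library:

docker exec -it ollama ollama pull mistral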

More models can be found in the Ollama library.

Join Ollama’s Discord to chat with other community members, maintainers, and contributors.

Follow Ollama on Twitter for updates.