Running DeepSeek-R1 Locally Using Ollama and Open WebUI in Docker


Artificial Intelligence (AI) is revolutionizing how we interact with technology, and conversational AI models such as ChatGPT and DeepSeek have led that change.

DeepSeek Artificial Intelligence Co., Ltd. (also called “DeepSeek”) is a Chinese company founded in 2023. Its mission is to make Artificial General Intelligence (AGI) a reality. AGI refers to highly autonomous systems that can outperform humans at most economically valuable work. DeepSeek focuses on advancing AI technologies, including natural language processing, machine learning, and other AI-driven solutions.

DeepSeek is an advanced open-source AI model designed for understanding and generating language. With DeepSeek-R1 (1.5B model), organizations can use local AI to build secure, efficient, and customized solutions without relying on external cloud APIs.

This is particularly valuable for enterprises looking to:

  • Keep data on-premises for compliance and security.
  • Fine-tune AI models using proprietary data.
  • Reduce operational costs by using self-hosted infrastructure.

DeepSeek can run on your own computer using Ollama and be managed through Open WebUI, which provides an accessible web-based interface.

In this article, we will create an isolated Docker container to run DeepSeek-R1 locally using Ollama and Open WebUI. Let’s get started.


Prerequisites

  • Docker: Ensure Docker is installed on your system.
  • Ollama: We’ll use Ollama to manage and run the DeepSeek-R1 model.
  • Open WebUI: A web-based interface for interacting with the models Ollama serves.

You can install Docker Desktop from https://www.docker.com/products/docker-desktop/.

Ensure Docker is installed.

docker --version


Check that Docker Desktop is running.
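You can also confirm from the command line that the Docker daemon is up; if Docker Desktop is not running, the command below reports a connection error.

docker info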

Deploying DeepSeek-R1 with Ollama & Open WebUI on Docker

The steps to run DeepSeek-R1 inside a Docker container with Ollama and Open WebUI are given below.

Step 1. Pull the Required Docker Images.

First, we need to download the Docker images for Ollama (for running AI models) and Open WebUI (for easy access).

Run the following commands.

docker pull ollama/ollama:latest
docker pull ghcr.io/open-webui/open-webui:main
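
Optionally, confirm that both images were downloaded.

docker images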


Step 2. Start the Ollama Container.

Ollama is the core system that will manage and execute the DeepSeek model. Let’s start it in detached mode.

docker run -d --name ollama --restart unless-stopped -p 11434:11434 ollama/ollama:latest
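
Note that this command stores downloaded models inside the container’s filesystem. If you want models to survive container re-creation, or you have an NVIDIA GPU with the NVIDIA Container Toolkit installed, a variant along these lines persists models in a named volume (here called ollama, an illustrative name) and enables GPU acceleration.

docker run -d --gpus=all --name ollama --restart unless-stopped -v ollama:/root/.ollama -p 11434:11434 ollama/ollama:latest

Omit --gpus=all to run CPU-only while still keeping the volume.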

Check if it’s running.

docker ps

You should see the ollama container listed in the output.

Step 3. Access the Ollama Container and Download the DeepSeek Model.

Before downloading the DeepSeek-R1 model, we need to access the Ollama container to ensure we are inside the environment where the model will be installed.

Step 3.1. Access the Ollama Container.

Run the following command to enter the Ollama container.

docker exec -it ollama bash

Once inside the container, we should see a shell prompt like this.

root@<container-id>:/#


Step 3.2. Download the DeepSeek-R1 Model.

Now, within the container, download the DeepSeek-R1 model.

To learn more about the DeepSeek-R1 models, see https://www.ollama.com/library/deepseek-r1.


We will use the deepseek-r1:1.5b model for this article; it is the smallest and most lightweight variant and can run on a basic CPU with minimal RAM.

ollama pull deepseek-r1:1.5b
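
As an aside, the same pull can be run from the host in a single step, without keeping an interactive shell open inside the container.

docker exec ollama ollama pull deepseek-r1:1.5b

Once the pull completes, start an interactive chat session with the model.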


ollama run deepseek-r1:1.5b


Now, we can write some prompts to check the responses.


This confirms that DeepSeek-R1 is installed and responding to prompts. Type /bye to exit the chat session.

Additionally, you can verify the installed models with the command below.

ollama list
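
After exiting the container shell (type exit), you can also query the model directly through Ollama’s HTTP API, since port 11434 is published to the host. A minimal sketch against the /api/generate endpoint (the prompt text is only an example):

curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:1.5b", "prompt": "Why is the sky blue?", "stream": false}'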


Step 4. Deploy Open WebUI for Easy Access.

Now, let’s set up Open WebUI, which provides a web-based interface for interacting with our DeepSeek AI.

Run this command.

docker run -d --name open-webui -p 3100:8080 -e OLLAMA_API_BASE_URL=http://host.docker.internal:11434 --restart always ghcr.io/open-webui/open-webui:main

Now open http://localhost:3100 in your browser to access DeepSeek-R1 locally (host port 3100 maps to Open WebUI’s internal port 8080).

On the first visit, Open WebUI asks you to create an admin account.

Finally, our private DeepSeek is ready to use. You can write prompts and get answers.


Congratulations! We now have our own private AI running locally!
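
As an alternative to the individual docker run commands above, both services can be managed from a single Docker Compose file. Here is a minimal sketch mirroring the settings used in this article (the service names and the ollama volume are illustrative; note that under Compose, Open WebUI can reach Ollama by its service name instead of host.docker.internal):

services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    restart: unless-stopped
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3100:8080"
    environment:
      - OLLAMA_API_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    restart: always
volumes:
  ollama:

Start both containers with docker compose up -d, and stop and remove them with docker compose down when you are done.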

Why Use DeepSeek-R1 for Enterprise AI?

DeepSeek-R1 is highly beneficial for enterprises for several reasons.

  • Self-hosted & secure: No reliance on external APIs, keeping sensitive data in-house.
  • Customizable & trainable: Can be fine-tuned with local business data.
  • Cost-effective: Eliminates cloud AI API costs.
  • Versatile use cases: Chatbots, document summarization, coding assistance, and more.

Conclusion

In summary, combining Ollama with Open WebUI offers a strong and adaptable solution for businesses that want to use AI-based tools in-house while keeping control over their data and operations. By setting up a local environment with Docker containers, we ensure a smooth deployment process and can fine-tune models using company-specific data. Open WebUI’s easy-to-use interface makes it simple to interact with AI models, providing useful insights and increasing productivity. Whether for internal projects or customer-facing applications, this setup is scalable, secure, and customizable to meet various business needs. Because the environment runs locally, enterprises can adhere to data privacy policies and tailor the AI system to their unique requirements. Furthermore, with Ollama’s robust API, businesses can easily integrate and extend their AI capabilities.

By following the setup process described in this article, organizations can unlock the full potential of AI, enhance operational efficiency, and stay competitive in today’s fast-paced technological landscape.
