Step-by-Step Guide: Installing Ollama and Open WebUI on VMware Fusion (Debian 12, M1 Mac)

💡 Setup Guide

🖥️ System Requirements

Resource      Recommended
RAM           16GB (Minimum: 8GB)
vCPUs         4 (Minimum: 2)
Disk Space    50GB+
GPU           Not required (runs on CPU)
OS            Debian 12

1️⃣ Create a Debian 12 VM on VMware Fusion

  1. Download the Debian 12 ISO from the official Debian website.
  2. Create a new VM in VMware Fusion:
    • Choose “Install from disk or image” → Select the Debian 12 ISO.
    • Set CPU: 4 cores (Recommended: 4, Minimum: 2).
    • Set RAM: 16GB (Minimum: 8GB).
    • Set Disk size: 50GB+.
    • Set Network to “Bridged” (so the VM gets its own IP like 192.168.2.43).
    • Click Finish to create the VM.
  3. Install Debian 12:
    • Select “Standard system utilities”.
    • Choose SSH server (if you want remote access).
    • Install sudo (important for permissions).
  4. Update system after installation:
    sudo apt update && sudo apt upgrade -y
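  5. Find the IP your VM received on the bridged network (this guide uses 192.168.2.43 as an example; substitute your VM's address wherever it appears):
    ip -4 addr show
    # or, more compactly:
    hostname -I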

2️⃣ Install Docker & Portainer

  1. Install required packages:
    sudo apt install -y curl ca-certificates gnupg
  2. Install Docker:
    curl -fsSL https://get.docker.com | sudo bash
  3. Enable Docker to start on boot:
    sudo systemctl enable --now docker
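    Optionally, add your user to the docker group so you can run Docker without sudo (log out and back in for this to take effect), then verify the installation:
    sudo usermod -aG docker $USER
    docker run --rm hello-world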
  4. Install Portainer:
    docker volume create portainer_data
    docker run -d \
      --name portainer \
      --restart=always \
      -p 8000:8000 -p 9000:9000 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest
  5. Access Portainer Web UI:
    Open http://192.168.2.43:9000 in a browser and set up an admin password.
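    If the page does not load, check that the Portainer container is actually running:
    docker ps --filter "name=portainer"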

3️⃣ Deploy Ollama + Open WebUI in Portainer

Create a Stack in Portainer

  1. Go to Portainer UI (http://192.168.2.43:9000).
  2. Navigate to “Stacks” → “Add stack”.
  3. Enter a name (e.g. deepseek-coder).
  4. Paste the following docker-compose.yml (see the note on the external network after this list):

    version: "3.8"

    networks:
      deepseek_ollama-net:
        external: true   # this network must already exist on the host (see note below)

    services:
      ollama:
        image: ollama/ollama:latest
        container_name: ollama
        restart: unless-stopped
        ports:
          - "11434:11434"
        networks:
          - deepseek_ollama-net
        volumes:
          - ollama_data:/root/.ollama
        environment:
          - OLLAMA_HOST=0.0.0.0 # allow connections from outside the container
        logging:
          driver: "json-file"
          options:
            max-size: "10m"
            max-file: "3"

      webui:
        image: ghcr.io/open-webui/open-webui:main
        container_name: ollama-webui
        restart: unless-stopped
        ports:
          - "3000:8080"
        networks:
          - deepseek_ollama-net
        volumes:
          - webui_data:/app/backend/data
        environment:
          - OLLAMA_BASE_URL=http://ollama:11434 # newer Open WebUI builds read OLLAMA_BASE_URL; older ones used OLLAMA_API_BASE_URL
        depends_on:
          - ollama
        logging:
          driver: "json-file"
          options:
            max-size: "10m"
            max-file: "3"

    volumes:
      ollama_data:
      webui_data:

  5. Click “Deploy the stack” and wait for it to start.
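Note: the compose file declares deepseek_ollama-net as external, which means Docker expects the network to exist already. If the deployment fails because the network cannot be found, create it once on the Docker host and redeploy:

docker network create deepseek_ollama-net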

4️⃣ Verify & Configure Open WebUI

Check if the containers are running

Run:

docker ps

Expected output:

CONTAINER ID   IMAGE                                STATUS         PORTS
xxxxxxxxxx     ollama/ollama:latest                 Up (running)   0.0.0.0:11434->11434/tcp
xxxxxxxxxx     ghcr.io/open-webui/open-webui:main   Up (running)   0.0.0.0:3000->8080/tcp
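You can also query the Ollama API root from the host; it replies with a short status message:

curl http://192.168.2.43:11434

Expected output:

Ollama is running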

Load DeepSeek-Coder-v2 into Ollama

docker exec -it ollama ollama pull deepseek-coder-v2

Wait for the download to complete. Then check if it’s available:

docker exec -it ollama ollama list

Expected output (abridged):

deepseek-coder-v2:latest
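Note: deepseek-coder-v2 is a multi-gigabyte download and runs on the CPU in this setup, so pulling and answering can take a while. To verify the pipeline with less waiting first, any smaller model from the Ollama library works the same way, for example:

docker exec -it ollama ollama pull deepseek-coder:1.3b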

Restart Open WebUI

docker restart ollama-webui

Open WebUI in browser

Visit:
👉 http://192.168.2.43:3000

On first visit, the account you sign up with becomes the admin account.


5️⃣ Test the AI Model

Option 1: API Test via cURL

curl http://192.168.2.43:11434/api/generate -d '{
  "model": "deepseek-coder-v2",
  "prompt": "Write a Python function to check for prime numbers.",
  "stream": false
}'
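With "stream": false, Ollama returns a single JSON object; the generated code is in the response field.

Expected output (abridged; the real reply also contains timing and token-count fields):

{
  "model": "deepseek-coder-v2",
  "response": "def is_prime(n): ...",
  "done": true
}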

Option 2: Use Open WebUI

  1. Go to Open WebUI (http://192.168.2.43:3000).
  2. Go to “Settings” → “Connections”.
  3. Set API URL to:
    http://192.168.2.43:11434
  4. Save & refresh the page.
  5. Try asking a question!

6️⃣ Troubleshooting

Check if containers are running

docker ps

Check if Ollama has models

docker exec -it ollama ollama list

If empty, run:

docker exec -it ollama ollama pull deepseek-coder-v2

Check if Ollama API is working

curl http://192.168.2.43:11434/api/tags

Expected output (abridged; the full response also includes size and timestamp fields):

{"models":[{"name":"deepseek-coder-v2:latest", …}]}

Check Open WebUI logs

docker logs ollama-webui
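To follow the logs live, or to inspect the Ollama container as well:

docker logs -f --tail 100 ollama-webui
docker logs ollama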

🎯 Summary

✅ Debian 12 installed on VMware Fusion
✅ Docker & Portainer installed
✅ Ollama & Open WebUI deployed via Portainer
✅ DeepSeek-Coder-v2 installed & tested
✅ Web interface working on http://192.168.2.43:3000

Now you have a fully working LLM environment on Debian 12! 🎉🚀