Ollama is a tool that allows users to run large language models (LLMs) directly on their own computers, making powerful AI technology accessible without relying on cloud services. It provides a user-friendly way to manage, deploy, and integrate LLMs, offering greater control, privacy, and customization compared to traditional cloud-based solutions.

[https://www.ycombinator.com/companies/ollama Ollama] was funded by [https://www.ycombinator.com/people/jared-friedman Jared Friedman] out of Y Combinator (YC). Founders Jeffrey Morgan and [https://2024.allthingsopen.org/speakers/michael-chiang Michael Chiang] wanted an easier way to run LLMs than having to do it in the cloud. They were previously the creators of a startup project named Kitematic, an early UI for Docker; after Docker acquired it, it became the precursor to [[Docker Desktop]].

Ollama will enable you to get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1, and other large language models.

== Installing it ==
Visit https://ollama.com/download and use the installer shell script.

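For Linux, that page boils down to piping the script through a shell. Here is a minimal sketch of that step, plus a quick sanity check afterwards (the usual caveat about auditing scripts before piping them to <code>sh</code> applies):

<syntaxhighlight lang="shell">
# Download and run the official installer (the same script shown below)
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the CLI landed on the PATH and the systemd service is up
ollama --version
systemctl status ollama --no-pager
</syntaxhighlight>
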
{{Collapsible
|visible_text=Here is the install shell script at the time of writing (2025-06-18)
|collapsed_content=
<syntaxhighlight lang="shell" line="1">
#!/bin/sh
# This script installs Ollama on Linux.
# It detects the current operating system architecture and installs the appropriate version of Ollama.

set -eu

red="$( (/usr/bin/tput bold || :; /usr/bin/tput setaf 1 || :) 2>&-)"
plain="$( (/usr/bin/tput sgr0 || :) 2>&-)"

status() { echo ">>> $*" >&2; }
error() { echo "${red}ERROR:${plain} $*"; exit 1; }
warning() { echo "${red}WARNING:${plain} $*"; }

TEMP_DIR=$(mktemp -d)
cleanup() { rm -rf $TEMP_DIR; }
trap cleanup EXIT

available() { command -v $1 >/dev/null; }
require() {
    local MISSING=''
    for TOOL in $*; do
        if ! available $TOOL; then
            MISSING="$MISSING $TOOL"
        fi
    done

    echo $MISSING
}

[ "$(uname -s)" = "Linux" ] || error 'This script is intended to run on Linux only.'

ARCH=$(uname -m)
case "$ARCH" in
    x86_64) ARCH="amd64" ;;
    aarch64|arm64) ARCH="arm64" ;;
    *) error "Unsupported architecture: $ARCH" ;;
esac

IS_WSL2=false

KERN=$(uname -r)
case "$KERN" in
    *icrosoft*WSL2 | *icrosoft*wsl2) IS_WSL2=true;;
    *icrosoft) error "Microsoft WSL1 is not currently supported. Please use WSL2 with 'wsl --set-version <distro> 2'" ;;
    *) ;;
esac

VER_PARAM="${OLLAMA_VERSION:+?version=$OLLAMA_VERSION}"

SUDO=
if [ "$(id -u)" -ne 0 ]; then
    # Running as root, no need for sudo
    if ! available sudo; then
        error "This script requires superuser permissions. Please re-run as root."
    fi

    SUDO="sudo"
fi

NEEDS=$(require curl awk grep sed tee xargs)
if [ -n "$NEEDS" ]; then
    status "ERROR: The following tools are required but missing:"
    for NEED in $NEEDS; do
        echo " - $NEED"
    done
    exit 1
fi

for BINDIR in /usr/local/bin /usr/bin /bin; do
    echo $PATH | grep -q $BINDIR && break || continue
done
OLLAMA_INSTALL_DIR=$(dirname ${BINDIR})

if [ -d "$OLLAMA_INSTALL_DIR/lib/ollama" ] ; then
    status "Cleaning up old version at $OLLAMA_INSTALL_DIR/lib/ollama"
    $SUDO rm -rf "$OLLAMA_INSTALL_DIR/lib/ollama"
fi
status "Installing ollama to $OLLAMA_INSTALL_DIR"
$SUDO install -o0 -g0 -m755 -d $BINDIR
$SUDO install -o0 -g0 -m755 -d "$OLLAMA_INSTALL_DIR/lib/ollama"
status "Downloading Linux ${ARCH} bundle"
curl --fail --show-error --location --progress-bar \
    "https://ollama.com/download/ollama-linux-${ARCH}.tgz${VER_PARAM}" | \
    $SUDO tar -xzf - -C "$OLLAMA_INSTALL_DIR"

if [ "$OLLAMA_INSTALL_DIR/bin/ollama" != "$BINDIR/ollama" ] ; then
    status "Making ollama accessible in the PATH in $BINDIR"
    $SUDO ln -sf "$OLLAMA_INSTALL_DIR/ollama" "$BINDIR/ollama"
fi

# Check for NVIDIA JetPack systems with additional downloads
if [ -f /etc/nv_tegra_release ] ; then
    if grep R36 /etc/nv_tegra_release > /dev/null ; then
        status "Downloading JetPack 6 components"
        curl --fail --show-error --location --progress-bar \
            "https://ollama.com/download/ollama-linux-${ARCH}-jetpack6.tgz${VER_PARAM}" | \
            $SUDO tar -xzf - -C "$OLLAMA_INSTALL_DIR"
    elif grep R35 /etc/nv_tegra_release > /dev/null ; then
        status "Downloading JetPack 5 components"
        curl --fail --show-error --location --progress-bar \
            "https://ollama.com/download/ollama-linux-${ARCH}-jetpack5.tgz${VER_PARAM}" | \
            $SUDO tar -xzf - -C "$OLLAMA_INSTALL_DIR"
    else
        warning "Unsupported JetPack version detected. GPU may not be supported"
    fi
fi

install_success() {
    status 'The Ollama API is now available at 127.0.0.1:11434.'
    status 'Install complete. Run "ollama" from the command line.'
}
trap install_success EXIT

# Everything from this point onwards is optional.

configure_systemd() {
    if ! id ollama >/dev/null 2>&1; then
        status "Creating ollama user..."
        $SUDO useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
    fi
    if getent group render >/dev/null 2>&1; then
        status "Adding ollama user to render group..."
        $SUDO usermod -a -G render ollama
    fi
    if getent group video >/dev/null 2>&1; then
        status "Adding ollama user to video group..."
        $SUDO usermod -a -G video ollama
    fi

    status "Adding current user to ollama group..."
    $SUDO usermod -a -G ollama $(whoami)

    status "Creating ollama systemd service..."
    cat <<EOF | $SUDO tee /etc/systemd/system/ollama.service >/dev/null
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=$BINDIR/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"

[Install]
WantedBy=default.target
EOF
    SYSTEMCTL_RUNNING="$(systemctl is-system-running || true)"
    case $SYSTEMCTL_RUNNING in
        running|degraded)
            status "Enabling and starting ollama service..."
            $SUDO systemctl daemon-reload
            $SUDO systemctl enable ollama

            start_service() { $SUDO systemctl restart ollama; }
            trap start_service EXIT
            ;;
        *)
            warning "systemd is not running"
            if [ "$IS_WSL2" = true ]; then
                warning "see https://learn.microsoft.com/en-us/windows/wsl/systemd#how-to-enable-systemd to enable it"
            fi
            ;;
    esac
}

if available systemctl; then
    configure_systemd
fi

# WSL2 only supports GPUs via nvidia passthrough
# so check for nvidia-smi to determine if GPU is available
if [ "$IS_WSL2" = true ]; then
    if available nvidia-smi && [ -n "$(nvidia-smi | grep -o "CUDA Version: [0-9]*\.[0-9]*")" ]; then
        status "Nvidia GPU detected."
    fi
    install_success
    exit 0
fi

# Don't attempt to install drivers on Jetson systems
if [ -f /etc/nv_tegra_release ] ; then
    status "NVIDIA JetPack ready."
    install_success
    exit 0
fi

# Install GPU dependencies on Linux
if ! available lspci && ! available lshw; then
    warning "Unable to detect NVIDIA/AMD GPU. Install lspci or lshw to automatically detect and install GPU dependencies."
    exit 0
fi

check_gpu() {
    # Look for devices based on vendor ID for NVIDIA and AMD
    case $1 in
        lspci)
            case $2 in
                nvidia) available lspci && lspci -d '10de:' | grep -q 'NVIDIA' || return 1 ;;
                amdgpu) available lspci && lspci -d '1002:' | grep -q 'AMD' || return 1 ;;
            esac ;;
        lshw)
            case $2 in
                nvidia) available lshw && $SUDO lshw -c display -numeric -disable network | grep -q 'vendor: .* \[10DE\]' || return 1 ;;
                amdgpu) available lshw && $SUDO lshw -c display -numeric -disable network | grep -q 'vendor: .* \[1002\]' || return 1 ;;
            esac ;;
        nvidia-smi) available nvidia-smi || return 1 ;;
    esac
}

if check_gpu nvidia-smi; then
    status "NVIDIA GPU installed."
    exit 0
fi

if ! check_gpu lspci nvidia && ! check_gpu lshw nvidia && ! check_gpu lspci amdgpu && ! check_gpu lshw amdgpu; then
    install_success
    warning "No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode."
    exit 0
fi

if check_gpu lspci amdgpu || check_gpu lshw amdgpu; then
    status "Downloading Linux ROCm ${ARCH} bundle"
    curl --fail --show-error --location --progress-bar \
        "https://ollama.com/download/ollama-linux-${ARCH}-rocm.tgz${VER_PARAM}" | \
        $SUDO tar -xzf - -C "$OLLAMA_INSTALL_DIR"

    install_success
    status "AMD GPU ready."
    exit 0
fi

CUDA_REPO_ERR_MSG="NVIDIA GPU detected, but your OS and Architecture are not supported by NVIDIA. Please install the CUDA driver manually https://docs.nvidia.com/cuda/cuda-installation-guide-linux/"
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#rhel-7-centos-7
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#rhel-8-rocky-8
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#rhel-9-rocky-9
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#fedora
install_cuda_driver_yum() {
    status 'Installing NVIDIA repository...'

    case $PACKAGE_MANAGER in
        yum)
            $SUDO $PACKAGE_MANAGER -y install yum-utils
            if curl -I --silent --fail --location "https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-$1$2.repo" >/dev/null ; then
                $SUDO $PACKAGE_MANAGER-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-$1$2.repo
            else
                error $CUDA_REPO_ERR_MSG
            fi
            ;;
        dnf)
            if curl -I --silent --fail --location "https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-$1$2.repo" >/dev/null ; then
                $SUDO $PACKAGE_MANAGER config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-$1$2.repo
            else
                error $CUDA_REPO_ERR_MSG
            fi
            ;;
    esac

    case $1 in
        rhel)
            status 'Installing EPEL repository...'
            # EPEL is required for third-party dependencies such as dkms and libvdpau
            $SUDO $PACKAGE_MANAGER -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-$2.noarch.rpm || true
            ;;
    esac

    status 'Installing CUDA driver...'

    if [ "$1" = 'centos' ] || [ "$1$2" = 'rhel7' ]; then
        $SUDO $PACKAGE_MANAGER -y install nvidia-driver-latest-dkms
    fi

    $SUDO $PACKAGE_MANAGER -y install cuda-drivers
}

# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#ubuntu
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#debian
install_cuda_driver_apt() {
    status 'Installing NVIDIA repository...'
    if curl -I --silent --fail --location "https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-keyring_1.1-1_all.deb" >/dev/null ; then
        curl -fsSL -o $TEMP_DIR/cuda-keyring.deb https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-keyring_1.1-1_all.deb
    else
        error $CUDA_REPO_ERR_MSG
    fi

    case $1 in
        debian)
            status 'Enabling contrib sources...'
            $SUDO sed 's/main/contrib/' < /etc/apt/sources.list | $SUDO tee /etc/apt/sources.list.d/contrib.list > /dev/null
            if [ -f "/etc/apt/sources.list.d/debian.sources" ]; then
                $SUDO sed 's/main/contrib/' < /etc/apt/sources.list.d/debian.sources | $SUDO tee /etc/apt/sources.list.d/contrib.sources > /dev/null
            fi
            ;;
    esac

    status 'Installing CUDA driver...'
    $SUDO dpkg -i $TEMP_DIR/cuda-keyring.deb
    $SUDO apt-get update

    [ -n "$SUDO" ] && SUDO_E="$SUDO -E" || SUDO_E=
    DEBIAN_FRONTEND=noninteractive $SUDO_E apt-get -y install cuda-drivers -q
}

if [ ! -f "/etc/os-release" ]; then
    error "Unknown distribution. Skipping CUDA installation."
fi

. /etc/os-release

OS_NAME=$ID
OS_VERSION=$VERSION_ID

PACKAGE_MANAGER=
for PACKAGE_MANAGER in dnf yum apt-get; do
    if available $PACKAGE_MANAGER; then
        break
    fi
done

if [ -z "$PACKAGE_MANAGER" ]; then
    error "Unknown package manager. Skipping CUDA installation."
fi

if ! check_gpu nvidia-smi || [ -z "$(nvidia-smi | grep -o "CUDA Version: [0-9]*\.[0-9]*")" ]; then
    case $OS_NAME in
        centos|rhel) install_cuda_driver_yum 'rhel' $(echo $OS_VERSION | cut -d '.' -f 1) ;;
        rocky) install_cuda_driver_yum 'rhel' $(echo $OS_VERSION | cut -c1) ;;
        fedora) [ $OS_VERSION -lt '39' ] && install_cuda_driver_yum $OS_NAME $OS_VERSION || install_cuda_driver_yum $OS_NAME '39';;
        amzn) install_cuda_driver_yum 'fedora' '37' ;;
        debian) install_cuda_driver_apt $OS_NAME $OS_VERSION ;;
        ubuntu) install_cuda_driver_apt $OS_NAME $(echo $OS_VERSION | sed 's/\.//') ;;
        *) exit ;;
    esac
fi

if ! lsmod | grep -q nvidia || ! lsmod | grep -q nvidia_uvm; then
    KERNEL_RELEASE="$(uname -r)"
    case $OS_NAME in
        rocky) $SUDO $PACKAGE_MANAGER -y install kernel-devel kernel-headers ;;
        centos|rhel|amzn) $SUDO $PACKAGE_MANAGER -y install kernel-devel-$KERNEL_RELEASE kernel-headers-$KERNEL_RELEASE ;;
        fedora) $SUDO $PACKAGE_MANAGER -y install kernel-devel-$KERNEL_RELEASE ;;
        debian|ubuntu) $SUDO apt-get -y install linux-headers-$KERNEL_RELEASE ;;
        *) exit ;;
    esac

    NVIDIA_CUDA_VERSION=$($SUDO dkms status | awk -F: '/added/ { print $1 }')
    if [ -n "$NVIDIA_CUDA_VERSION" ]; then
        $SUDO dkms install $NVIDIA_CUDA_VERSION
    fi

    if lsmod | grep -q nouveau; then
        status 'Reboot to complete NVIDIA CUDA driver install.'
        exit 0
    fi

    $SUDO modprobe nvidia
    $SUDO modprobe nvidia_uvm
fi

# make sure the NVIDIA modules are loaded on boot with nvidia-persistenced
if available nvidia-persistenced; then
    $SUDO touch /etc/modules-load.d/nvidia.conf
    MODULES="nvidia nvidia-uvm"
    for MODULE in $MODULES; do
        if ! grep -qxF "$MODULE" /etc/modules-load.d/nvidia.conf; then
            echo "$MODULE" | $SUDO tee -a /etc/modules-load.d/nvidia.conf > /dev/null
        fi
    done
fi

status "NVIDIA GPU ready."
install_success
</syntaxhighlight>
}}

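Note the <code>VER_PARAM</code> handling near the top of the script: it means a release can be pinned by setting <code>OLLAMA_VERSION</code> when running the installer. A sketch (the version number is only a placeholder, substitute a real release):

<syntaxhighlight lang="shell">
# Install (or roll back to) a specific Ollama release instead of the latest
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.9.0 sh
</syntaxhighlight>
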
=== Simply running it as a Docker Image ===

Although you can download or install it from the repo on GitHub https://github.com/ollama/ollama, you can also run it as a Docker image, <tt>ollama/ollama</tt>.<ref>https://hub.docker.com/r/ollama/ollama</ref>

However, I ran into multiple issues and decided to go the straight-install route instead.

==== First Issue: accommodate NVidia GPU ====

Because I have a [[PC_Build_2024#Video_Card_(GPU)|GeForce RTX 4060 NVidia GPU]], I had to install the NVidia Container Toolkit and configure Docker to use the NVidia driver:

<pre>
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
  | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
</pre>

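The block above only registers NVIDIA's apt repository. To finish what this paragraph describes, NVIDIA's Container Toolkit guide follows up with roughly these steps (a sketch of the documented sequence, not a transcript of my session):

<syntaxhighlight lang="shell">
# Install the toolkit itself
sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
</syntaxhighlight>
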
==== Super User Problems ====
The docs advise

<pre>(sudo) docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama</pre>

which clearly runs as root. However, my Docker (Desktop) runs as a non-root user, so although I had previously fetched the image through Docker Desktop, the CLI command couldn't find it, downloaded another copy, and spat out additional errors:

<pre>
Unable to find image 'ollama/ollama:latest' locally
latest: Pulling from ollama/ollama
13b7e930469f: Pull complete
97ca0261c313: Pull complete
2ace2f9dde9e: Pull complete
41ea4d361810: Pull complete
Digest: sha256:50ab2378567a62b811a2967759dd91f254864c3495cbe50576bd8a85bc6edd56
Status: Downloaded newer image for ollama/ollama:latest
40be284dab1709b74fa68d513f75c10239d7234a21d65aac1e80cbd743515498
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running prestart hook #0: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: nvml error: driver/library version mismatch: unknown
</pre>

The important part seems to be '<tt>Auto-detected mode as legacy</tt>'.

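The other line worth a look is the nvml '<tt>driver/library version mismatch</tt>', which in general means the loaded NVIDIA kernel module no longer matches the user-space driver library (typically after a driver upgrade). A sketch of how to compare them; if they disagree, a reboot is usually the simplest fix:

<syntaxhighlight lang="shell">
# Version of the kernel module currently loaded
cat /proc/driver/nvidia/version

# Version of the user-space driver/library stack
nvidia-smi
</syntaxhighlight>
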
Running the image from Docker Desktop, setting options for ports and volumes, and copying the 'run' command spits out:

<code>docker run --hostname=3f50cd4183bd --mac-address=02:42:ac:11:00:02 --env=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin --env=LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 --env=NVIDIA_DRIVER_CAPABILITIES=compute,utility --env=NVIDIA_VISIBLE_DEVICES=all --env=OLLAMA_HOST=0.0.0.0:11434 --volume=ollama:/root/.ollama --network=bridge -p 11434:11434 --restart=no --label='org.opencontainers.image.ref.name=ubuntu' --label='org.opencontainers.image.version=20.04' --runtime=runc -d ollama/ollama:latest</code>

http://localhost:11434/ just reveals 'Ollama is running' in 'hello world' style.

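That banner comes from the same port that serves the REST API, so a slightly more useful smoke test is to hit the API directly. A sketch, assuming a model such as <code>gemma3</code> has already been pulled:

<syntaxhighlight lang="shell">
# List whatever models the server currently has
curl http://localhost:11434/api/tags

# Request a one-off, non-streaming completion
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
</syntaxhighlight>
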
In the docker-compose file there is supposed to be another container image, accessible at port 3000, providing a web UI. This didn't happen.

Clearly, the full Ollama setup is supposed to be run as 'root'. It is not designed to be run as a regular user who has ''docker'' or ''sudo'' / ''adm'' group membership.

I had some problems getting off the ground with Ollama. Some details are in [[Ollama/install]]. The problems were described better in [[Nvidia on Ubuntu]], where we cover one of the greatest challenges of Linux computing: graphics drivers.

== Docs ==
Start with the [https://github.com/ollama/ollama/blob/main/README.md README] for an intro. The [https://github.com/ollama/ollama/blob/main/docs/linux.md Linux docs] tell you how you can customize, update, or uninstall the environment.

Looking at the logs with <code>journalctl -e -u ollama</code> told me what my newly generated public key is, but also that it could not load a compatible GPU, so I spent time fixing that.

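That <code>ollama</code> systemd unit is also where most of the customization the docs describe ends up: the upstream guidance is to override the service environment with a drop-in. A sketch (the variable values here are only illustrative):

<syntaxhighlight lang="shell">
# Open a drop-in override for the ollama unit
sudo systemctl edit ollama.service

# In the editor, add lines such as:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
#   Environment="OLLAMA_MODELS=/data/ollama/models"

# Apply the change
sudo systemctl daemon-reload
sudo systemctl restart ollama
</syntaxhighlight>
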
== Interface ==
Although you can instantly begin using a model from the command line with something like

<code>ollama run gemma3</code><ref>This will download and run a 4B parameter model.</ref>

there are many user interfaces or front-ends that can be coupled with Ollama, such as [https://github.com/open-webui/open-webui Open WebUI].

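A quick way to try such a front-end against a local Ollama is Open WebUI's own container; a sketch based on its README quick-start (the port mapping and volume name are its suggested defaults, adjust as needed):

<syntaxhighlight lang="shell">
# Run Open WebUI and let it reach the Ollama API on the host
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

# Then browse to http://localhost:3000
</syntaxhighlight>
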
{{References}}

[[Category:Artificial Intelligence]]