Ollama
Ollama is a tool that allows users to run large language models (LLMs) directly on their own computers, making powerful AI technology accessible without relying on cloud services. It provides a user-friendly way to manage, deploy, and integrate LLMs, offering greater control, privacy, and customization compared to traditional cloud-based solutions.
Ollama was funded by Jared Friedman out of Y Combinator (YC). Founders Jeffrey Morgan and Michael Chiang wanted an easier way to run LLMs than having to do it in the cloud. They were previously founders of Kitematic, an early UI for Docker; after Docker acquired it, Kitematic became the precursor to Docker Desktop.
Ollama lets you get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1, and other large language models.
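For example, fetching and chatting with one of these models is a one-liner each; the model tags below match the naming in the Ollama model library at the time of writing, so check ollama.com/library for current tags:

ollama pull llama3.3
ollama run deepseek-r1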
Installing it
I had some problems getting off the ground with Ollama. Some details are in Ollama/install. The problems are described in more detail in Nvidia on Ubuntu, where we cover one of the greatest challenges of Linux computing: graphics drivers.
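For reference, the documented Linux install is a single script from the project; on Ubuntu it also sets up a systemd service you can check afterwards (inspect the script before piping it to a shell if you prefer):

curl -fsSL https://ollama.com/install.sh | sh
systemctl status ollama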
Docs
The docs explain how to customize, update, or uninstall the environment.
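As a sketch of the kind of customization the docs describe, on a systemd-based Linux you can override the service's environment; OLLAMA_HOST and OLLAMA_MODELS are documented variables, and the path below is purely illustrative:

sudo systemctl edit ollama.service
# then add under [Service]:
#   Environment="OLLAMA_HOST=0.0.0.0"
#   Environment="OLLAMA_MODELS=/data/ollama/models"   # illustrative path
sudo systemctl restart ollama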
Looking at the logs with journalctl -e -u ollama told me what my newly generated public key was, but also that it could not load a compatible GPU, so I spent time fixing that.
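A rough checklist for that kind of GPU problem, assuming an Nvidia card on Ubuntu as in Nvidia on Ubuntu: confirm the driver actually sees the card, then restart the service and re-read its logs:

nvidia-smi                                  # driver loaded and GPU visible?
sudo systemctl restart ollama
journalctl -e -u ollama | grep -i -e gpu -e cuda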
Start with the README for an intro.
Interface
Although you can instantly begin using a model from the command line with something like ollama run gemma3,[1] there are many user interfaces or front-ends that can be coupled with Ollama, such as Open-Webui.
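Front-ends like Open-Webui talk to Ollama over its local HTTP API, which listens on port 11434 by default and which you can also call directly; a minimal request, assuming the gemma3 model is already pulled:

curl http://localhost:11434/api/generate -d '{
  "model": "gemma3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'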
References

1. This will download and run a 4B-parameter model.