Ollama

From Freephile Wiki

Ollama is a tool that allows users to run large language models (LLMs) directly on their own computers, making powerful AI technology accessible without relying on cloud services. It provides a user-friendly way to manage, deploy, and integrate LLMs, offering greater control, privacy, and customization compared to traditional cloud-based solutions.

Ollama was funded by Jared Friedman out of Y Combinator (YC). Founders Jeffrey Morgan and Michael Chiang wanted an easier way to run LLMs than having to do it in the cloud; they had previously founded Kitematic, the early UI for Docker, which Docker acquired and turned into the precursor to Docker Desktop.

== Installing it ==
I had some problems getting off the ground with Ollama. Some details are in [[Ollama/install]].
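
If you only need the happy path, the Linux install is a one-line script from ollama.com. The snippet below is a minimal sketch of that flow; it assumes a systemd-based distribution with curl available, and the model name is only an example.

<syntaxhighlight lang="bash">
# Download and run the official install script (the same script also handles upgrades)
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the systemd service came up, then pull and chat with a small model
systemctl status ollama
ollama run llama3.2
</syntaxhighlight>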
== Docs ==
The [https://github.com/ollama/ollama/blob/main/docs/linux.md Linux docs] explain how to customize, update, or uninstall the installation.
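
As a rough sketch of what those docs cover (treat the docs as authoritative, since paths and steps may change): customization is done through a systemd override, updating simply re-runs the install script, and uninstalling removes the service, binary, model store, and service account.

<syntaxhighlight lang="bash">
# Customize: set environment variables (e.g. OLLAMA_HOST) in a systemd override
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload && sudo systemctl restart ollama

# Update: re-run the install script
curl -fsSL https://ollama.com/install.sh | sh

# Uninstall: stop the service, then remove the unit, binary, model store, and service account
sudo systemctl stop ollama && sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service
sudo rm "$(which ollama)"
sudo rm -r /usr/share/ollama
sudo userdel ollama && sudo groupdel ollama
</syntaxhighlight>
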
Looking at the logs with <code>journalctl -e -u ollama</code> told me what my newly generated public key was, but also that Ollama could not load a compatible GPU, so I spent time fixing that.
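
For example, something like the following is enough to see both messages and to check whether the GPU is visible to the system at all (NVIDIA shown here; substitute your vendor's tooling, e.g. rocminfo for AMD ROCm):

<syntaxhighlight lang="bash">
# Jump to the end of the Ollama service log; the public key and GPU detection messages are logged at startup
journalctl -e -u ollama

# Or filter the log for the lines of interest
journalctl -u ollama --no-pager | grep -iE 'public key|gpu|cuda'

# Check that the driver can see the card at all
nvidia-smi

# After fixing the driver, restart the service and watch detection happen live
sudo systemctl restart ollama
journalctl -f -u ollama
</syntaxhighlight>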

{{References}}
[[Category:Artificial Intelligence]]
