# Ollama

How to run LLMs locally with Ollama.
## Installation
```bash
brew install ollama
```

## Uninstallation
Uninstalling ollama does not remove downloaded models!
```bash
brew uninstall ollama
rm -rf ~/.ollama
```

## Running the Ollama Server
Once the server is up and running, we can download and run models. Start it with:
```bash
ollama serve
```

Stop the server by hitting CTRL + C or by closing the Terminal session.
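If you'd rather not keep a Terminal session open, the Homebrew formula can also run Ollama as a background service (this assumes the brew install from above):

```bash
# Start Ollama as a background service managed by Homebrew
brew services start ollama
```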
## Discovering and Downloading Models
Available models can be browsed in the Ollama library: https://ollama.com/library
Models are downloaded here:

```
~/.ollama/models/manifests/registry.ollama.ai/library/
```
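To download a model without immediately running it, use `ollama pull`; the model name below is just an illustrative example:

```bash
# Download a model from the Ollama library (llama3.2 is an example)
ollama pull llama3.2
```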
## Command Cheat Sheet
List downloaded models
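The subcommand for this is `ollama list`:

```bash
# Show all models stored locally
ollama list
```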
View running models
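`ollama ps` shows the models currently loaded in memory:

```bash
# Show currently running models
ollama ps
```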
Run a model
If the model isn't installed already, it will be downloaded first.
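A minimal example, assuming llama3.2 as the model name (any model from the library works):

```bash
# Starts an interactive session; pulls the model first if it isn't installed
ollama run llama3.2
```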
## Integrating with VS Code
Ollama must be running for this to work: `ollama serve`
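VS Code extensions that support local models generally talk to Ollama's REST API, which listens on port 11434 by default. A quick way to confirm the server is reachable from your editor's machine (the model name is an illustrative example):

```bash
# Ask the local Ollama server for a single non-streamed completion
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Say hello in one word.",
  "stream": false
}'
```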



