# Ollama
How to run LLMs locally with Ollama
## Installation
```bash
brew install ollama
```
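To confirm the install worked, ask the CLI for its version (the exact output format may vary by release):

```bash
# Check that the ollama binary is on PATH and report its version
ollama --version
```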
## Uninstallation

Uninstalling Ollama does not remove downloaded models!
```bash
brew uninstall ollama
rm -rf ~/.ollama
```

## Running the Ollama Server
Start the server in a Terminal session. Once it's up and running, we can download and run models.
```bash
ollama serve
```

Stop the server by pressing CTRL + C or by closing the Terminal session.
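A quick way to confirm the server is listening (by default it binds to localhost on port 11434):

```bash
# The root endpoint replies with "Ollama is running" when the server is up
curl http://localhost:11434
```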
## Discovering and Downloading Models
Available models can be browsed in the Ollama library at https://ollama.com/library.
Downloaded models are stored here:

```
~/.ollama/models/manifests/registry.ollama.ai/library/
```
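To download a model without immediately starting an interactive session, pull it explicitly (the model tag here is just an example):

```bash
# Fetch the model layers into ~/.ollama without opening a chat prompt
ollama pull gemma3:27b
```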
## Command Cheat Sheet
List downloaded models:
```bash
ollama list
```

View running models:
```bash
ollama ps
```

Run a model:
If the model isn't installed already, it will be downloaded first:
```bash
ollama run gemma3:27b
```
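A couple of other day-to-day commands worth knowing (the model name is illustrative):

```bash
# Print a model's details (parameters, template, license)
ollama show gemma3:27b

# Delete a downloaded model and free its disk space
ollama rm gemma3:27b
```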
## Integrating with VS Code

Ollama must be running for this to work: `ollama serve`.
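Editor integrations generally talk to the local server over its HTTP API on port 11434. As a sanity check that completions are being served, you can hit the generate endpoint directly (model and prompt are placeholders):

```bash
# Request a single non-streaming completion from the local server
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3:27b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```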