Ollama

How to run LLMs locally with Ollama

Installation

brew install ollama
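
A quick sanity check that the install worked is to print the version:

ollama --version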

Uninstallation

brew uninstall ollama
rm -rf ~/.ollama

Running Ollama Server

ollama serve

  • Once the server is up and running, we can download and run models.
  • Stop the server by hitting CTRL + C or by closing the Terminal session.
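
The server also listens for HTTP requests on http://localhost:11434 by default. A minimal sketch of a generation request with curl, assuming a model such as gemma3:27b has already been pulled:

curl http://localhost:11434/api/generate -d '{"model": "gemma3:27b", "prompt": "Why is the sky blue?", "stream": false}'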


Discovering and Downloading Models

  • Models are browsable in the Ollama library at https://ollama.com/library

  • Downloaded model manifests are stored under ~/.ollama/models/manifests/registry.ollama.ai/library/; the model weights themselves live in ~/.ollama/models/blobs/
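
To download a model ahead of time without starting a chat session (gemma3:27b is just an example tag):

ollama pull gemma3:27b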


Command Cheat Sheet

List downloaded models

ollama list

View running models

ollama ps
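
To unload a model shown by ollama ps without shutting down the server, recent Ollama releases provide a stop command (the model name here is an example):

ollama stop gemma3:27b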

Running models

If the model isn't already installed, it will be downloaded first

ollama run gemma3:27b
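
Running a model without arguments opens an interactive session (type /bye to exit). You can also pass a prompt directly for a one-shot answer (the prompt here is just an example):

ollama run gemma3:27b "Why is the sky blue?"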

Integrating with VS Code
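
VS Code AI extensions such as Continue can be pointed at the local Ollama server (http://localhost:11434 by default). Under the hood they talk to the chat endpoint; a minimal sketch with curl, assuming the server is running and the model is pulled:

curl http://localhost:11434/api/chat -d '{"model": "gemma3:27b", "messages": [{"role": "user", "content": "Hello"}], "stream": false}'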
