Ollama

How to run LLMs locally with Ollama

Installation

brew install ollama
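
To verify the install succeeded, check that the CLI is on your PATH:

ollama --version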

Uninstallation

Warning: removing ~/.ollama also deletes all downloaded models.
brew uninstall ollama
rm -rf ~/.ollama

Running the Ollama Server

  • Start the server with the command below. Once it is up and running, we can download and run models.

ollama serve
  • Stop the server by pressing CTRL + C or by closing the Terminal session.
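
By default the server listens on http://localhost:11434. A quick way to confirm it is up (assuming the default port):

curl http://localhost:11434

If the server is running, it replies with "Ollama is running".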


Discovering and Downloading Models
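
Models can be browsed in the Ollama library at https://ollama.com/library. Use ollama pull to download a model without running it; the model name below is just an example:

ollama pull llama3.2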


Command Cheat Sheet

List downloaded models

ollama list

View running models

ollama ps

Run a model

ollama run <model>

If the model isn't installed already, it will be downloaded first.
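
ollama run can also take a one-shot prompt as an argument and exit after printing the response; the model name below is just an example:

ollama run llama3.2 "Why is the sky blue?"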


Integrating with VS Code
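
Editor integrations typically talk to the local Ollama server over its REST API rather than the CLI. As a rough sketch of the kind of request such an extension sends (this uses Ollama's /api/generate endpoint; the model name is an example and must already be downloaded):

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Write a hello world program in Python",
  "stream": false
}'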

