LM Studio

How to run LLMs locally with LM Studio

Overview

  • LM Studio provides a way to run LLMs locally on your machine

  • It's similar to Ollama but provides a GUI, making it more user-friendly and similar in style to ChatGPT


Installation

brew install --cask lm-studio
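
LM Studio also ships a companion CLI, lms, which the later sections use for the server and for model management. On recent versions the bootstrap script lives under ~/.lmstudio/ (the path may differ on older installs):

# Add the lms CLI to your PATH (path may vary by LM Studio version)
~/.lmstudio/bin/lms bootstrap

# Confirm it works
lms status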

Running LM Studio

  • Open the app and hit CMD + L to select a model to use

  • Then wait for the model to load, i.e., reach the READY state

  • Click the Eject button to unload the model from memory; the same can be done from the terminal, as sketched below
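
If you prefer the terminal, the lms CLI can do the same; the commands below match recent LM Studio releases and may differ on older versions:

# List downloaded models, then load one by its identifier
lms ls
lms load <model-identifier>

# See what's loaded; unload everything (the CLI equivalent of Eject)
lms ps
lms unload --all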


Discovering and Downloading Models

  • Models are downloaded from Hugging Face

  • Downloaded models are stored in ~/.lmstudio/models/

  • Open the app and hit CMD + SHIFT + M to search for models

  • An option in the search view lets us show only models that will fit on our device (based on available memory); models can also be downloaded from the terminal, as sketched below
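
As an alternative to the GUI search, recent versions of the lms CLI can download models directly from Hugging Face into the same directory; the model name here is purely illustrative:

# Search for and download a model from the terminal (illustrative name)
lms get llama-3.2-1b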


Integrating with VS Code

Start LM Studio Server
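
The server can be started from the app (Developer tab) or from the terminal, and it exposes an OpenAI-compatible API on port 1234 by default. A minimal sketch, assuming the lms CLI is on your PATH and a model is already loaded:

# Start the local server (defaults to http://localhost:1234)
lms server start

# Sanity check: list the models the server exposes
curl http://localhost:1234/v1/models

# OpenAI-compatible chat completion; use an identifier from `lms ls` for "model"
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "<model-identifier>", "messages": [{"role": "user", "content": "Hello!"}]}'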


Install the Continue VS Code Extension

  • The most straightforward way I've found is installing the VS Code extension Continue

  • While Continue is a third-party coding assistant, the extension is open source, and we can use our local LLMs without signing up for the service (a sample config pointing Continue at LM Studio follows this list)

  • Open up a file, e.g., main.tf, and hit CTRL + I

  • This opens the chat window in the sidebar, where you can type what you want the LLM to write for you
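
Before the shortcuts above will use your local model, Continue needs to be pointed at the local server. Newer Continue releases read ~/.continue/config.yaml, while older ones used config.json; the sketch below uses the older JSON schema with Continue's built-in lmstudio provider (which defaults to http://localhost:1234), and the model name is illustrative:

# Write a minimal Continue config (note: this overwrites any existing config.json)
cat > ~/.continue/config.json <<'EOF'
{
  "models": [
    {
      "title": "LM Studio",
      "provider": "lmstudio",
      "model": "llama-3.2-1b"
    }
  ]
}
EOF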
