LM Studio
How to run LLMs locally with LM Studio
Overview
LM Studio provides a way to run LLMs locally on your machine.
It's similar to Ollama, but provides a GUI, making it more user-friendly and similar in style to ChatGPT.
Installation
brew install lm-studio
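LM Studio is a GUI app, so Homebrew installs it as a cask; the explicit cask form below is equivalent, and the app can be launched from the terminal afterwards:

```bash
# Equivalent explicit cask form of the install above.
brew install --cask lm-studio
# Launch the app from the terminal (macOS).
open -a "LM Studio"
```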
Running LM Studio
Models are loaded into your system's memory (RAM) while in the READY state, so be cognizant of usage.
Open the app and hit CMD + L to select a model to use. Then wait for the model to load, i.e., reach the READY state.

Click the Eject button (seen above) to unload the model from memory.
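LM Studio also ships a companion CLI, lms, which can manage loaded models from the terminal. A sketch, assuming lms is on your PATH; subcommand and flag names may vary by version, so verify with lms --help:

```bash
# Assumes LM Studio's lms CLI is installed; verify flags via `lms --help`.
lms ps               # show models currently loaded in memory
lms load             # interactively pick a model to load
lms unload --all     # unload all models, freeing RAM
```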
Discovering and Downloading Models
Models are downloaded from Hugging Face.
They are stored here: ~/.lmstudio/models/
Open the app and hit CMD + SHIFT + M to browse and download models.
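To confirm what's on disk, you can inspect that directory from the terminal; a quick sketch, assuming the default storage path above:

```bash
# List downloaded models (default LM Studio storage path).
ls -lh ~/.lmstudio/models/
# If the lms CLI is installed, it can list downloads as well:
lms ls
```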

By selecting the option below, we can ensure models will fit on our device (based on available memory).

Integrating with VS Code
Start LM Studio Server
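Once running, the server exposes an OpenAI-compatible HTTP API, on port 1234 by default. A minimal smoke test from the terminal; the model identifier below is a placeholder for whatever model you have loaded:

```bash
# Assumes the default port (1234) and a model in the READY state.
curl http://localhost:1234/v1/models

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "your-loaded-model",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```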

Install VS Code Extension, Continue
The most straightforward way I've found is to install the VS Code extension Continue.
While this is a third-party coding assistant, the extension is open source, and we can leverage our local LLMs entirely without signing up for the service.
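Continue reads its model list from a config file. A sketch, assuming Continue's JSON config at ~/.continue/config.json and its lmstudio provider; newer versions may use a different file or schema, so treat the keys as illustrative:

```bash
# Illustrative only: config path and schema depend on your Continue version.
cat > ~/.continue/config.json <<'EOF'
{
  "models": [
    {
      "title": "LM Studio (local)",
      "provider": "lmstudio",
      "model": "AUTODETECT"
    }
  ]
}
EOF
```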


Open up a file, e.g., main.tf, and hit CTRL + I.
This opens the chat window on the side, where you can type in what you want the LLM to write for you (e.g., "write a Terraform resource for an S3 bucket").
