# Claude Code with Local LLM

## Overview

* A local computer isn't going to compete with a datacenter anytime soon, but maybe you're without Internet, want additional data privacy, or have hit your Claude Code subscription limit for the month. In such cases, we can set up Claude Code to work with local LLMs.

{% hint style="info" %}
These same steps can be used for non-local models, e.g., if you have an AI server rig, you can point Claude Code (and most other apps) to the model hosted there.
{% endhint %}

***

## Setup

### LM Studio

* Download a model; for help, check out [#discovering-and-downloading-models](https://www.techwithtyler.dev/ai/lm-studio#discovering-and-downloading-models "mention")
* Start a server via the GUI (see [#start-lm-studio-server](https://www.techwithtyler.dev/ai/lm-studio#start-lm-studio-server "mention")) or run this:

{% code overflow="wrap" %}

```bash
# A context length of ~25k tokens is recommended for Claude Code usage
lms load openai/gpt-oss-20b --context-length 25000

lms server start --port 1234
```

{% endcode %}
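Before pointing Claude Code at the server, it's worth a quick sanity check. LM Studio serves an OpenAI-compatible API, so a running server should list the loaded model at the `/v1/models` endpoint (port `1234` assumed from the command above):

```bash
# Should return a JSON list that includes openai/gpt-oss-20b
curl http://localhost:1234/v1/models
```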

### Claude Code Setup — Terminal

* In the terminal, run the commands below, making sure to swap out the port based on your needs

{% code overflow="wrap" %}

```bash
export ANTHROPIC_BASE_URL=http://localhost:1234
export ANTHROPIC_AUTH_TOKEN=lmstudio
```

{% endcode %}
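Note that these exports only last for the current shell session. To make them persist, you can append them to your shell profile (`~/.zshrc` assumed here; use `~/.bashrc` for bash):

```bash
# Persist the Claude Code overrides across shell sessions
echo 'export ANTHROPIC_BASE_URL=http://localhost:1234' >> ~/.zshrc
echo 'export ANTHROPIC_AUTH_TOKEN=lmstudio' >> ~/.zshrc
```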

* Launch Claude Code, making sure to pass the model you've downloaded and have running locally

{% code overflow="wrap" %}

```bash
claude --model openai/gpt-oss-20b
```

{% endcode %}
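As a quick smoke test, Claude Code's print mode (`-p`) sends a single prompt and exits, which is a fast way to confirm the local model responds (assumes the env vars above are set and the LM Studio server is running):

```bash
# One-shot prompt against the local model; prints the response and exits
claude --model openai/gpt-oss-20b -p "Say hello in one sentence"
```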

* In Claude Code, we can see the local model is being used

<figure><img src="https://2721275171-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F8yu8YbDfwd1VqEdUxGyA%2Fuploads%2FrKc9JW3yVQ8DsZn4iRCn%2FCleanShot%202026-02-22%20at%2015.20.27%402x.png?alt=media&#x26;token=8530e2bc-02ef-4d2f-8310-b7fd0da90320" alt=""><figcaption></figcaption></figure>

### Claude Code Setup — VS Code

* In VS Code, hit `CMD + SHIFT + P`, open `Preferences: Open User Settings (JSON)`, and paste this snippet within the main `{ }`
* Make sure to swap out the URL as needed

{% code overflow="wrap" %}

```json
"claudeCode.environmentVariables": [
        {
            "name": "ANTHROPIC_BASE_URL",
            "value": "http://localhost:1234"
        },
        {
            "name": "ANTHROPIC_AUTH_TOKEN",
            "value": "lmstudio"
        }
    ],
```

{% endcode %}
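If you'd rather scope this to a single project than your user settings, Claude Code can also read environment variables from a settings file in the project (the `.claude/settings.json` path and `env` key here are assumptions based on Claude Code's settings format; verify against the current docs):

```json
{
    "env": {
        "ANTHROPIC_BASE_URL": "http://localhost:1234",
        "ANTHROPIC_AUTH_TOKEN": "lmstudio"
    }
}
```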

### Validating Setup — Developer Logs

* Within LM Studio, we can view the developer logs to validate that our interaction with Claude Code is leveraging the local LLM

<figure><img src="https://2721275171-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F8yu8YbDfwd1VqEdUxGyA%2Fuploads%2FGzHp38Oo9VIda735xaK1%2FCleanShot%202026-02-22%20at%2015.25.46%402x.png?alt=media&#x26;token=221f5be7-25f9-45b5-b38b-f303d6f80fed" alt=""><figcaption></figcaption></figure>
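If you prefer the terminal over the GUI, the LM Studio CLI can stream the same logs, so you can watch prompts arrive from Claude Code as they happen:

```bash
# Stream LM Studio's logs in the terminal (Ctrl+C to stop)
lms log stream
```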

***

## Resources

* <https://lmstudio.ai/blog/claudecode>
