Tech with Tyler

Running LLMs Locally

How to run LLMs locally

  • Choosing Models Based on Hardware
  • LM Studio
  • Ollama
  • Claude Code with Local LLM