# Local LLM crib sheet
My reminder of my local LLM setup as of May 2025.
## Ollama
Models are stored in `~/.ollama/models`, but it's easiest to manage them with `ollama ls` and then `ollama rm`.

Example: `ollama run gemma3` or `ollama run olmo2`.
Integration with other projects is the main reason I have Ollama installed.
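For integration, Ollama exposes an HTTP API on port 11434 (the same `/api/chat` endpoint smartcat is pointed at below). A minimal sketch using only the Python standard library; it assumes Ollama is running on its default port and the model has already been pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint


def build_payload(model, prompt):
    """Build the JSON body for a single, non-streaming chat request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one complete JSON response instead of a stream
    }


def chat(model, prompt, url=OLLAMA_URL):
    """Send one prompt to a local Ollama server and return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Usage: `chat("gemma3", "Say hi in five words")`.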
## Smart cat
`sc` is an LLM shell command that can use both local and remote LLMs.
Two key configuration files:
`~/.config/smartcat/.api_configs.toml` to set model specifics:

```toml
[ollama]
url = "http://localhost:11434/api/chat"
default_model = "gemma3"
timeout_seconds = 180
```
And `~/.config/smartcat/prompts.toml` to set the default API and the system prompt to use. The system prompt in there is what makes the output from `sc` extremely terse.
```toml
[default]
api = "ollama"
char_limit = 128000
```
Example usage:

- `sc "yes or no: do you know the Rust programming language?"`
- `sc "Write a short friendly README.md for this project" -c src/* Cargo.toml`
The configuration for the editor I use means I can select text, press `|`, and then type `sc "make this idiomatic"` to pipe the selection through smartcat.
## llm-mlx
I don’t need to document this because Simon does it really well already.