Your Macs can do more than you think.
Run open-source AI models on Apple Silicon. Got more than one Mac? mlxstudio combines them — no setup, no cloud.
Free. 10 MB — not 200. Apple Silicon only.
mlxstudio
Search
Today
that segfault in audio
help me grok this codebase
why is my loss plateauing
Yesterday
rewrite the ingestion stuff
compare 8B vs 70B for cod...
debug the tokenizer output
Last week
explain attention heads
batch inference benchmark
Llama-3-70B-4bit
how does it split the model across both my macs?
Each Mac gets a slice of the layers. Your Mac Mini handles layers 0–39, your MacBook Pro takes 40–79. They talk over your local network — each one processes its chunk and passes the result to the next node.
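Here's a rough sketch of the idea in plain Python. It simulates both nodes in one process; the `Node` class, the stand-in layers, and the split point are illustrative, not mlxstudio's actual internals:

```python
# Minimal single-process sketch of pipeline-parallel layer splitting.
# Everything here is illustrative: class names, layer stand-ins, and the
# wire format are made up to show the idea, not mlxstudio's real code.
from dataclasses import dataclass
from typing import Callable

Layer = Callable[[list[float]], list[float]]

def make_layer(i: int) -> Layer:
    # Stand-in for a transformer block: a trivial elementwise transform.
    return lambda h: [x + 0.001 * i for x in h]

@dataclass
class Node:
    name: str
    layers: list[Layer]  # this machine's contiguous slice of the model

    def forward(self, hidden: list[float]) -> list[float]:
        for layer in self.layers:
            hidden = layer(hidden)
        return hidden  # in the real system, this is what crosses the LAN

# 80 layers total, split 0-39 / 40-79 as described above.
all_layers = [make_layer(i) for i in range(80)]
mac_mini = Node("Mac Mini", all_layers[:40])
macbook_pro = Node("MacBook Pro", all_layers[40:])

hidden = [1.0, 2.0, 3.0]              # a token's hidden state entering the model
hidden = mac_mini.forward(hidden)     # layers 0-39 run on the first machine
# -- in the real system the activations cross the local network here --
hidden = macbook_pro.forward(hidden)  # layers 40-79 run on the second machine
print(hidden)
```

The key point: only the activations for the current token cross the network, never the weights — each machine keeps its slice of the model resident, which is why an ordinary home LAN is enough.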
nice, what's the latency like?
On your WiFi you'll see about 3–5ms per hop between machines. With two nodes that's a single hop per token, so the network adds only a few milliseconds on top of compute time.
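As a back-of-the-envelope check — the compute-time figure below is an assumed placeholder, not a measurement; real throughput depends on the model and your Macs:

```python
# Rough per-token cost for a two-node pipeline.
# All numbers are illustrative assumptions, not benchmarks.
hop_ms = 4.0        # midpoint of the 3-5 ms per-hop figure above
hops_per_token = 1  # two nodes: activations cross the network once per token
compute_ms = 50.0   # assumed compute time per token; varies by model and Mac

total_ms = compute_ms + hop_ms * hops_per_token
print(f"~{total_ms:.0f} ms/token, about {1000 / total_ms:.1f} tokens/sec")
```

The hop count grows with the number of nodes, so the network overhead stays a small fraction of per-token time as long as each hop is only a few milliseconds.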
Ask anything...