Your Macs can do more than you think.

Run open-source AI models on Apple Silicon. Got more than one Mac? mlxstudio combines them — no setup, no cloud.

Free. 10 MB — not 200. Apple Silicon only.
[Screenshot: the mlxstudio chat window. A sidebar lists recent chats ("that segfault in audio", "help me grok this codebase", "why is my loss plateauing", "rewrite the ingestion stuff", "compare 8B vs 70B for cod..."). In the active chat, Llama-3-70B-4bit answers "how does it split the model across both my macs?": each Mac gets a slice of the layers, the Mac Mini handles layers 0–39, the MacBook Pro takes 40–79, and each node processes its chunk and passes the result over the local network. Status bar: 42.8 tok/s, 2 nodes.]
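The answer in the screenshot describes contiguous layer slicing across nodes. As a rough illustration only (mlxstudio's actual partitioning logic isn't public; the function and node names here are hypothetical), splitting an 80-layer model evenly across two machines might look like:

```python
# Hypothetical sketch of contiguous layer partitioning across nodes.
# Not mlxstudio's real implementation; names are illustrative.
def partition_layers(num_layers, nodes):
    """Assign each node a contiguous slice of layer indices."""
    base, extra = divmod(num_layers, len(nodes))
    assignments = {}
    start = 0
    for i, node in enumerate(nodes):
        count = base + (1 if i < extra else 0)  # spread any remainder
        assignments[node] = range(start, start + count)
        start += count
    return assignments

# An 80-layer model over two Macs: mac-mini gets 0-39, macbook-pro gets 40-79,
# matching the split described in the screenshot.
print(partition_layers(80, ["mac-mini", "macbook-pro"]))
```

At inference time, each node would run only its slice and stream activations to the next node over the LAN, which is why only the final node's tokens leave the pipeline.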

Two Macs, one model, zero config.

Same Wi-Fi network. That's the setup. mlxstudio splits the model, streams the tokens, handles the rest.

We'll only write when we ship something.

or just download it now