MLX Local Inference Stack
by bendusy
v2.2.0
Full local AI inference stack on Apple Silicon Macs via MLX. Includes: LLM chat (Qwen3-14B, Gemma3-12B), speech-to-text ASR (Qwen3-ASR, Whisper), text embeddings...
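For reference, the LLM chat part of such a stack typically runs on top of the mlx-lm package. Below is a minimal sketch of loading and querying one of the listed models; the exact checkpoint name (`mlx-community/Qwen3-14B-4bit`) is an assumption, since the listing does not specify which quantized repo the skill installs.

```python
# Minimal sketch: chat with a Qwen3-14B checkpoint via mlx-lm on Apple Silicon.
# The repo name below is an assumed mlx-community quantization, not confirmed by this listing.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-14B-4bit")

messages = [{"role": "user", "content": "Summarize what MLX is in one sentence."}]
# Render the chat template to a plain prompt string before generation.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```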
Downloads: 180 · Stars: 1 · Versions: 6
Latest Changes