shimmy

Local inference server with OpenAI-compatible GGUF endpoints

Available via Homebrew (macOS, Linux), or install directly from source.

About

Small local inference server with OpenAI-compatible GGUF endpoints

Commands

shimmy

Examples

Start shimmy server with default settings:
$ shimmy

Start shimmy on a specific port:
$ shimmy --port 8080

Load a specific GGUF model file:
$ shimmy --model /path/to/model.gguf
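Once the server is running, its OpenAI-compatible endpoints can be queried with any standard HTTP client. A minimal sketch using curl, assuming the server was started with `shimmy --port 8080` and exposes the conventional /v1/chat/completions route; the model name in the request body is a placeholder, not a name the source specifies:

```shell
# JSON body for a chat completion request; "model.gguf" is a placeholder
# for whatever model the server has loaded.
BODY='{"model":"model.gguf","messages":[{"role":"user","content":"Hello"}]}'

# Send it to the local shimmy server (started with: shimmy --port 8080).
# `|| true` keeps the script from failing when no server is running.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$BODY" || true
```

Because the endpoints follow the OpenAI wire format, existing OpenAI client libraries should also work by pointing their base URL at the local server.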