rtk
CLI proxy that reduces LLM token consumption through smart caching
$ rtk proxy --model gpt-4 --cache   # start the proxy with caching enabled
$ rtk stats                         # show usage statistics
$ rtk cache clear                   # empty the response cache
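
The caching idea behind the proxy can be sketched as follows. This is a hypothetical illustration, not rtk's actual implementation: repeated requests with the same model and prompt are served from a local cache keyed by a content hash, so the upstream LLM (and its token cost) is only paid once per distinct request. The class and function names here are assumptions for the sketch.

```python
import hashlib
import json

class ResponseCache:
    """Minimal sketch of a response cache: identical (model, prompt)
    pairs return the cached result instead of calling the LLM again."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model, prompt):
        # Stable hash of the request so equivalent requests collide.
        raw = json.dumps({"model": model, "prompt": prompt}, sort_keys=True)
        return hashlib.sha256(raw.encode("utf-8")).hexdigest()

    def get_or_call(self, model, prompt, call_llm):
        key = self._key(model, prompt)
        if key in self._store:
            self.hits += 1          # cache hit: no tokens spent upstream
            return self._store[key]
        self.misses += 1            # cache miss: forward to the LLM once
        result = call_llm(model, prompt)
        self._store[key] = result
        return result

# Usage with a stand-in for the real LLM call:
cache = ResponseCache()
fake_llm = lambda model, prompt: f"response to: {prompt}"

first = cache.get_or_call("gpt-4", "hello", fake_llm)
second = cache.get_or_call("gpt-4", "hello", fake_llm)  # served from cache
```

A real proxy would also need an eviction policy and persistence across runs (which is presumably what `rtk cache clear` resets), but the hit/miss bookkeeping above is the kind of data a `stats` command could report.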