modeltainer
ModelTainer — one-command deployment for any LLM, anywhere. Run GPU (vLLM) or CPU/ARM (llama.cpp) models side-by-side behind an OpenAI-compatible API. Hot-swap models through configuration changes alone, scale from a laptop to HPC, and compare model outputs instantly.
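Because the API is OpenAI-compatible, any OpenAI-style request body works unchanged regardless of whether vLLM or llama.cpp serves the model. A minimal sketch of such a request follows; the host, port, and model name are illustrative assumptions, not values shipped by ModelTainer:

```python
import json

# ModelTainer exposes an OpenAI-compatible endpoint, so standard
# chat-completion payloads work as-is. BASE_URL and the model id
# below are hypothetical examples for illustration only.
BASE_URL = "http://localhost:8000/v1"

payload = {
    "model": "llama-3-8b",  # hypothetical model id; set via config
    "messages": [
        {"role": "user", "content": "Summarize vLLM in one sentence."}
    ],
}

# The same payload could be sent with the standard OpenAI Python client:
#   client = OpenAI(base_url=BASE_URL, api_key="not-needed")
#   client.chat.completions.create(**payload)
print(json.dumps(payload, indent=2))
```

Swapping the served model then requires only editing the deployment config; clients keep the same payload shape and endpoint.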