Local-first inference, real memory, no cloud in the primary path. Built on a three-node cluster in Fort Worth: MNMS field data trains it, MNMS technicians trust it.
Invite only. No waitlist spam. We'll respond in 1–2 business days.
| component | detail |
| --- | --- |
| inference | hermes3-mythos:70b (local Ollama, Node 2) |
| fallback | hermes3:8b → groq llama-3.3-70b → qwen2.5:7b |
| memory | ChromaDB (8,882 docs, cosine retrieval) |
| db | Galera 2-node (Node 2 + garbd arbitrator on Pi5), MariaDB 11.8.6 |
| task queue | `~/.aria/tasks/queue.jsonl`, daemon polls every 60 s |
| tenants | MNMS (id=1) + UKCUF (id=2), strict isolation |
| deploy | GoDaddy cPanel, PHP 8.3; Node 2 nginx/php-fpm 8.4 |
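The fallback row implies a try-in-order chain: if the primary 70B model on Node 2 is unreachable, the request falls through to the next model. A minimal sketch of that pattern, assuming the model names from the table; the `call_model` callable and its failure behavior are hypothetical stand-ins for the real Ollama/groq clients:

```python
# Hypothetical sketch of the model fallback chain described in the table.
# Model names come from the table; the groq hop would need its own HTTP
# client, so it is noted but stubbed out here.

FALLBACK_CHAIN = [
    "hermes3-mythos:70b",  # primary: local Ollama on Node 2
    "hermes3:8b",          # first local fallback
    # "llama-3.3-70b" via groq would slot in here with its own client
    "qwen2.5:7b",          # last-resort local fallback
]

def generate(prompt: str, call_model) -> tuple[str, str]:
    """Try each model in order; return (model, reply) from the first success."""
    last_error = None
    for model in FALLBACK_CHAIN:
        try:
            return model, call_model(model, prompt)
        except Exception as exc:  # timeout, connection refused, OOM, ...
            last_error = exc
    raise RuntimeError("all models in the fallback chain failed") from last_error
```

In use, `call_model` would wrap the actual inference request; keeping it injectable lets the chain be exercised without any model running.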
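The task-queue row describes a daemon polling a JSONL file on a 60-second cadence. A sketch of that loop, assuming one JSON object per line; the field names and the truncate-after-drain behavior are assumptions, not the daemon's actual protocol:

```python
import json
import time
from pathlib import Path

# Hypothetical sketch of the JSONL task queue loop. The path and the 60 s
# cadence come from the table; everything else is assumed for illustration.

QUEUE = Path("~/.aria/tasks/queue.jsonl").expanduser()

def drain(queue: Path, handle) -> int:
    """Process every task currently in the queue file, then truncate it."""
    if not queue.exists():
        return 0
    done = 0
    for line in queue.read_text().splitlines():
        if not line.strip():
            continue
        task = json.loads(line)  # one JSON object per line
        handle(task)
        done += 1
    queue.write_text("")  # naive truncation; a real daemon would lock the file
    return done

def daemon(queue: Path, handle, interval: int = 60) -> None:
    """Poll the queue on a fixed cadence, as the table describes."""
    while True:
        drain(queue, handle)
        time.sleep(interval)
```

A production loop would also need file locking and per-task error handling so one bad line does not stall the queue.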
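Strict tenant isolation usually means every query is scoped by tenant id inside the data-access layer, never left to the caller. A sketch of that pattern using the tenant ids from the table; `sqlite3` stands in for MariaDB here, and the `jobs` table and its columns are hypothetical:

```python
import sqlite3

# Hypothetical sketch of strict tenant isolation: tenant_id is resolved
# server-side and bound as a parameter, so no query can cross tenants.
# Tenant ids come from the table; the schema is assumed for illustration.

TENANTS = {"MNMS": 1, "UKCUF": 2}

def fetch_jobs(conn: sqlite3.Connection, tenant: str):
    """Return only this tenant's rows; unknown tenants fail closed (KeyError)."""
    tid = TENANTS[tenant]
    cur = conn.execute(
        "SELECT id, status FROM jobs WHERE tenant_id = ?", (tid,)
    )
    return cur.fetchall()
```

The design point is that the tenant filter lives in exactly one place, so adding a third tenant is a one-line change and forgetting the filter is impossible at call sites.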
MNMS AI is currently deployed for internal use and select managed print partners. If you run an MPS operation and want to evaluate the platform, reach out. We don't run trials — we run pilots.