Private Beta

AI-powered
managed print
intelligence.

Local inference, real memory, zero cloud dependency. Built on a three-node cluster in Fort Worth — MNMS field data trains it, MNMS technicians trust it.

Invite only. No waitlist spam. We'll respond in 1–2 business days.

aria-daemon · inference log
06:14:22 INFER hermes3-mythos:70b meter_reading anomaly check — client_id=177, device=C5740i 412ms
06:14:23 RESULT hermes3-mythos:70b offline_since=2026-03-23; last_read=9821 pages; delta_expected=3240; flag=[ok_lost]
06:14:51 INFER hermes3-mythos:70b QBO invoice reconciliation — tenant_id=1, range=Apr 2026 388ms
06:14:52 RESULT hermes3-mythos:70b 42 invoices scanned; 1 delta flagged (#4722, $0 vs $846.52); manual_review=REQUIRED
06:15:30 INFER hermes3-mythos:70b drum life projection — HL-L2350DW fleet, 18 devices, avg coverage 4.2% 297ms
06:15:31 RESULT hermes3-mythos:70b replacement_window=[2026-05-12, 2026-06-03]; confidence=0.87; cost_est=$1,440
06:16:04 EMBED chromadb/local indexing 14 new service docs → print-knowledge collection 1840ms
06:16:05 OK chromadb/local collection.count=8,882 docs; new_vectors=14; cosine_threshold=0.72
06:16:05 WAIT aria-daemon next cycle in 60s
Capabilities
AMR → inference
Meter Intelligence
Reads every meter pull from ameterreading.com, flags offline devices, projects consumable depletion windows, and surfaces anomalies before clients call.
QBO diff engine
Invoice Reconciliation
Compares billed amounts against metered usage across both tenants. Surfaces zero-dollar invoices, missed line items, and billing-to-QBO deltas before month close.
ChromaDB / local
Knowledge Retrieval
Service manuals, error code history, and technician notes indexed in ChromaDB. ARIA PRIME answers service questions against the full 8,882-document corpus — no cloud.
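The `cosine_threshold=0.72` shown in the daemon log is a retrieval gate: indexed chunks whose cosine similarity to the query falls below it never reach the model. A self-contained sketch of that filter (toy vectors for illustration; real embeddings come from the indexing pipeline):

```python
import math

COSINE_THRESHOLD = 0.72  # matches the retrieval gate shown in the daemon log

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def filter_hits(query_vec, doc_vecs, threshold=COSINE_THRESHOLD):
    """Keep only documents whose similarity clears the threshold, best match first."""
    scored = [(cosine_similarity(query_vec, vec), doc_id)
              for doc_id, vec in doc_vecs.items()]
    return sorted((s, d) for s, d in scored if s >= threshold)[::-1]
```

The threshold is the tunable floor: raising it trades recall for precision, so only strongly relevant service-doc chunks are handed to the model.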
hermes3-mythos:70b
ARIA Daemon
Persistent swarm intelligence running on Node 2 (B5000). Supervised task queue, ChromaDB memory, 60-second cycle. Model: hermes3-mythos:70b on local Ollama. No API key burned.
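A minimal sketch of the daemon's supervised 60-second cycle, assuming one JSON task object per line in the queue file (the `task` and `payload` field names here are illustrative, not the actual schema):

```python
import json
import time
from pathlib import Path

QUEUE = Path.home() / ".aria" / "tasks" / "queue.jsonl"
CYCLE_SECONDS = 60

def read_tasks(queue_path):
    """Parse one JSON task per line; skip blank or malformed lines."""
    tasks = []
    if not queue_path.exists():
        return tasks
    for line in queue_path.read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            tasks.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # leave malformed entries in place for manual review
    return tasks

def run_cycle():
    for task in read_tasks(QUEUE):
        # hand each task to the local model (hypothetical dispatch)
        print(f"INFER {task.get('task')} payload={task.get('payload')}")

def main_loop():
    """Daemon entry point: one cycle per minute; a supervisor restarts on crash."""
    while True:
        run_cycle()
        time.sleep(CYCLE_SECONDS)
```

Malformed lines are skipped rather than deleted, so they stay in the queue file for a technician to inspect.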
Technical Stack
inference hermes3-mythos:70b (local Ollama, Node 2)
fallback hermes3:8b → groq llama-3.3-70b → qwen2.5:7b
memory ChromaDB (8,882 docs, cosine retrieval)
db Galera 2-node (Node 2 + Pi5 garbd), MariaDB 11.8.6
task queue ~/.aria/tasks/queue.jsonl, daemon every 60s
tenants MNMS (id=1) + UKCUF (id=2), strict isolation
deploy GoDaddy cPanel, PHP 8.3, Node 2 nginx/php-fpm 8.4
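The inference and fallback tiers above reduce to an ordered try-loop. A sketch, assuming a `call_model(model, prompt)` adapter (hypothetical, not an actual client) that raises on timeout or connection failure:

```python
FALLBACK_CHAIN = [
    "hermes3-mythos:70b",   # primary: local Ollama on Node 2
    "hermes3:8b",           # smaller local fallback
    "groq/llama-3.3-70b",   # hosted Groq fallback
    "qwen2.5:7b",           # last-resort local model
]

def infer_with_fallback(prompt, call_model):
    """Try each model in order; return (model, response) from the first success."""
    last_error = None
    for model in FALLBACK_CHAIN:
        try:
            return model, call_model(model, prompt)
        except Exception as err:
            last_error = err
    raise RuntimeError(f"all models in fallback chain failed: {last_error}")
```

The hosted Groq tier engages only after both local Hermes models fail, so routine inference stays on the box.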

Request access.

MNMS AI is currently deployed for internal use and select managed print partners. If you run an MPS operation and want to evaluate the platform, reach out. We don't run trials — we run pilots.

Access Request