Every AI feature in AdvisorLevel runs on a private Ollama server you control. Client PII never leaves your infrastructure. No OpenAI. No Anthropic for client-data tasks. Just Qwen on your own GPUs, gated by the same FINRA lexicon as your messaging.
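The "your own server" claim boils down to a plain HTTP call to a self-hosted endpoint. Here's a minimal sketch against Ollama's standard `/api/generate` endpoint; the host, port, and model tag (`qwen2.5:14b`) are placeholders for whatever your deployment actually runs:

```python
# Sketch of a request to a self-hosted Ollama server. Host/port and model
# tag are assumptions -- substitute your own deployment's values.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port
MODEL = "qwen2.5:14b"  # illustrative tag; use the model you pulled

def build_request(prompt: str) -> urllib.request.Request:
    """Build the POST request. Client data only travels to your own box."""
    payload = {"model": MODEL, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# To actually call it (requires a running Ollama instance):
# resp = urllib.request.urlopen(build_request("Summarize this thread: ..."))
# text = json.loads(resp.read())["response"]
```

Because the URL points at your infrastructure, swapping GPUs or models is a config change, not a vendor negotiation.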
Open any conversation, hit "Draft", get 3 reply candidates. Each is pre-scanned against the lexicon, so risky drafts are flagged before you send.
50-message thread? Hit "Summarize" and Qwen returns 3 bullets: what the client wants, what you agreed to, and the next step.
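A three-bullet summary like that is driven by the prompt shape. The exact prompt AdvisorLevel sends is not documented here; this is a hypothetical template showing the idea:

```python
# Hypothetical prompt template for the three-bullet thread summary.
def summarize_prompt(messages: list[str]) -> str:
    """Assemble a thread into a fixed-format summarization prompt."""
    thread = "\n".join(f"- {m}" for m in messages)
    return (
        "Summarize this client thread in exactly 3 bullets:\n"
        "1. What the client wants\n"
        "2. What was agreed to\n"
        "3. The next step\n\n"
        f"Thread:\n{thread}"
    )
```

Pinning the output format in the prompt is what keeps a 50-message thread from coming back as a 50-line summary.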
Every inbound message is sentiment-scored. Distressed or angry clients get bumped to the top of the inbox so the right person responds first.
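Once each message carries a score, prioritization is just a sort. In this sketch the distress scores are stand-ins; in production they would come from the model:

```python
# Sketch of priority routing: higher distress score sorts first.
# Scores are illustrative stand-ins for model output.
from operator import itemgetter

inbox = [
    {"from": "a@example.com", "distress": 0.1},
    {"from": "b@example.com", "distress": 0.9},  # angry client
    {"from": "c@example.com", "distress": 0.4},
]

# Most distressed message rises to the top of the queue.
prioritized = sorted(inbox, key=itemgetter("distress"), reverse=True)
# prioritized[0]["from"] == "b@example.com"
```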
Prefer Anthropic's Claude for non-PII tasks (marketing copy, public templates)? Set AI_DRIVER=claude and provide your API key; we'll route per task.
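A sketch of what per-task routing could look like, assuming AI_DRIVER selects the cloud driver and any task touching client PII is always pinned to the local Ollama server (the function name and PII flag are illustrative, not AdvisorLevel's actual API):

```python
# Sketch of per-task driver routing. AI_DRIVER picks the cloud driver;
# PII-bearing tasks are hard-pinned to the local server regardless.
import os

def pick_driver(task_has_pii: bool) -> str:
    """PII tasks never leave the box; everything else follows AI_DRIVER."""
    if task_has_pii:
        return "ollama"
    return os.environ.get("AI_DRIVER", "ollama")

os.environ["AI_DRIVER"] = "claude"
# pick_driver(task_has_pii=True)   -> "ollama" (client data stays local)
# pick_driver(task_has_pii=False)  -> "claude" (marketing copy, templates)
```

The important design choice is that the PII check runs before the env var is consulted, so a misconfigured AI_DRIVER can never leak client data to a cloud provider.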