AdvisorLevel
🧠 Powered by your private Ollama instance — Qwen 2.5

An AI co-pilot that never sees the cloud.

Every AI feature in AdvisorLevel runs on a private Ollama server you control. Client PII never leaves your infrastructure. No OpenAI. No Anthropic for client-data tasks. Just Qwen on your own GPUs, gated by the same FINRA lexicon as your messaging.

✍️

Reply drafting

Open any conversation, hit "Draft", get 3 compliant reply candidates. Each is pre-scanned by the lexicon — risky drafts are flagged before you send.
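A minimal sketch of what that pre-send lexicon scan could look like. The term lists and two-tier severity split are illustrative assumptions, not AdvisorLevel's actual FINRA lexicon:

```typescript
// Hypothetical lexicon pre-scan: flag risky drafts, hard-block the worst ones.
type ScanResult = { flagged: boolean; shouldBlock: boolean; hits: string[] };

const RISKY_TERMS = ["guaranteed return", "risk-free", "can't lose"]; // illustrative
const BLOCKING_TERMS = ["guaranteed return"]; // illustrative "never send" tier

function scanDraft(text: string): ScanResult {
  const lower = text.toLowerCase();
  const hits = RISKY_TERMS.filter((term) => lower.includes(term));
  return {
    flagged: hits.length > 0,                              // any risky term
    shouldBlock: hits.some((t) => BLOCKING_TERMS.includes(t)), // blocking tier
    hits,
  };
}
```

Scanning every model-generated candidate before it reaches the compose box means a risky draft is caught at generation time, not after an advisor hits send.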

📚

Thread summarization

50-message thread? Hit "Summarize" and Qwen returns 3 bullets: what the client wants, what you agreed to, and the next step.
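One way the three-bullet response could be turned into structured fields on the client side. The parser and field names are a hypothetical sketch; the source only specifies that the model returns three bullets:

```typescript
// Hypothetical parser for the three-bullet summary: what the client wants,
// what was agreed, and the next step.
type ThreadSummary = { clientWants: string; agreed: string; nextStep: string };

function parseSummary(raw: string): ThreadSummary {
  const bullets = raw
    .split("\n")
    .map((line) => line.replace(/^[-•*]\s*/, "").trim()) // strip bullet markers
    .filter(Boolean);                                    // drop blank lines
  const [clientWants = "", agreed = "", nextStep = ""] = bullets;
  return { clientWants, agreed, nextStep };
}
```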

📊

Sentiment scoring

Every inbound is scored. Distressed or angry messages get prioritized in the inbox so the right person responds first.
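A sketch of the prioritization step, assuming a numeric sentiment score where more negative means more distressed (the scale and field names are illustrative, not AdvisorLevel's schema):

```typescript
// Hypothetical inbox sort: most distressed (most negative score) first.
type Inbound = { id: string; sentiment: number }; // assumed scale: -1 distressed .. 1 positive

function prioritize(inbox: Inbound[]): Inbound[] {
  // Copy before sorting so the original inbox order is untouched.
  return [...inbox].sort((a, b) => a.sentiment - b.sentiment);
}
```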

```
POST /api/ai/draft-reply
{
  "conversationId": "cmo...",
  "intent": "Confirm Friday meeting at 2pm",
  "tone": "friendly"
}

→
{
  "drafts": [
    { "text": "Hi Pat, confirming Friday 2pm...", "flagged": false, "shouldBlock": false },
    ...
  ],
  "modelVersion": "qwen2.5:7b"
}
```
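On the client side, a caller would typically keep only candidates that cleared the lexicon. A minimal helper over the response shape shown above (the helper name is illustrative):

```typescript
// Response shape from the draft-reply example above.
type Draft = { text: string; flagged: boolean; shouldBlock: boolean };
type DraftReplyResponse = { drafts: Draft[]; modelVersion: string };

// Hypothetical helper: pick the first candidate that passed the lexicon scan.
function firstSafeDraft(res: DraftReplyResponse): Draft | undefined {
  return res.drafts.find((d) => !d.flagged && !d.shouldBlock);
}
```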

Want non-PII tasks (marketing copy, public templates) on Anthropic Claude? Set AI_DRIVER=claude and provide your key — we'll route each task to the right model, and client-data tasks stay on your private Ollama instance.
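The routing rule can be sketched as follows. The task names and the PII/non-PII split are illustrative assumptions; only the AI_DRIVER=claude switch comes from the source:

```typescript
// Hypothetical per-task router: client-data tasks always stay on local Ollama;
// only non-PII tasks may go to Claude, and only when AI_DRIVER=claude is set.
type Task = {
  kind: "draft-reply" | "summarize" | "sentiment" | "marketing-copy" | "public-template";
};

const NON_PII_TASKS = new Set(["marketing-copy", "public-template"]); // illustrative split

function pickDriver(task: Task, aiDriver: string | undefined): "ollama" | "claude" {
  if (aiDriver === "claude" && NON_PII_TASKS.has(task.kind)) return "claude";
  return "ollama"; // PII-touching tasks never leave your infrastructure
}
```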