Backing Ambitious AI

Multi-Expert · Debate · No Retraining

Enterprise-grade reasoning. Run locally—your data never leaves. Or scale to cloud.

See How It Works
Company

AAA AI — Tech Startup & Digital-Native Platform

We are a tech startup that builds and operates its own apps, tools, and platform: an AGI-style multi-expert reasoning system.

What we do

We build and run an enterprise multi-expert AI platform: specialized reasoning modules (math, risk, planning, coding, vision), debate & critique, persistent memory, and world state—without retraining. Available as a web app, iOS app, and desktop agent packages. Run 100% locally with Ollama or scale to cloud LLMs.

Problems we solve

  • Data privacy: enterprises and individuals want AI without sending data to ChatGPT, Claude, or other public APIs.
  • Single-LLM limits: one model cannot match multi-expert reasoning, debate, and audit-ready structured outputs.
  • Vendor lock-in and per-token cost: we offer local-first deployment and optional cloud, with clear pricing.
  • Compliance: SOC2, GDPR, HIPAA—you own your data and compliance when you run on your infrastructure.

Target audience

Enterprises and teams that need privacy-preserving, explainable AI (finance, healthcare, legal, government); developers and power users who want to run advanced reasoning locally; and individuals who want an iOS app with full offline mode and no data sharing.

Team

Key team members

Core team behind the platform and products.

Kirill Pokidov

Founder, lead developer

Architecture, backend, iOS app, and desktop agent packages. Built the AGI-style reasoning platform, multi-expert system, debate & critique, memory, and world model. Sole developer of the private codebase; responsible for product, infrastructure, and deployment.

Products

What we build and offer

Web app

Full multi-expert chat, memory, experts, and settings. Run locally or with cloud LLMs.

web.aaaai.me →
iOS app

iPhone & iPad. On-device models, full offline mode; sync and experts when online.

App Store →
Desktop & Mac apps

Chat and IDE for macOS (DMG), agent packages (Linux, Windows). Download and connect to the platform.

Download (Chat, IDE, more) →
Documentation

Setup, API, deployment, and architecture. For self-host and cloud.

Docs →
Video & demos

Video/vision, real-time analytics, and platform capabilities.

Video section →
01 — About

AAA AI — Enterprise Multi-Expert Platform

We build an AGI-style reasoning platform: multiple specialized experts, debate & critique (a critic reviews expert opinions and improves the answer), persistent memory, world state, and continuous learning—without retraining.

Input flows through perception → memory → experts → optional answer verification and unify → planner → action. Run fully local with Ollama or scale to cloud LLMs. Structured expert outputs (assumptions, evidence, uncertainty) for audit-ready decisions.
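The flow above can be sketched as a simple stage chain. This is an illustrative sketch only; every function name here is hypothetical, not the platform's actual API, and the toy expert stands in for real LLM-backed modules:

```python
# Illustrative sketch of the perception -> memory -> experts -> verify/unify
# -> planner -> action flow. All names are hypothetical.

def perceive(text):
    # Intent/entity extraction would be LLM-based in the real platform.
    return {"query": text, "intent": "question"}

def recall(percept, memory):
    # Semantic retrieval over past experiences (FAISS in the platform).
    return [m for m in memory if percept["intent"] in m.get("tags", [])]

def consult_experts(percept, context, experts):
    # Each expert returns a structured opinion with a confidence score.
    return [expert(percept, context) for expert in experts]

def verify_and_unify(opinions):
    # Optional step: here, simply keep the highest-confidence opinion.
    return max(opinions, key=lambda o: o["confidence"])

def plan_action(best):
    return {"answer": best["final_answer"], "confidence": best["confidence"]}

def run_pipeline(text, memory, experts):
    percept = perceive(text)
    context = recall(percept, memory)
    opinions = consult_experts(percept, context, experts)
    best = verify_and_unify(opinions)
    return plan_action(best)

# Toy expert for demonstration.
math_expert = lambda p, c: {"final_answer": "42", "confidence": 0.9}
result = run_pipeline("What is 6 * 7?", memory=[], experts=[math_expert])
print(result["answer"])  # 42
```

The point of the shape is that each stage is swappable: experts can be added or removed at runtime without touching the rest of the chain.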

Your Data Never Leaves. Never Shared with ChatGPT.

Cloud vs Local: Run on Your Hardware

ChatGPT, Claude, Gemini send every prompt, document, and conversation to their servers. With AAA AI + Ollama, nothing is ever sent anywhere. Run on your hardware. Zero sharing. Full control.

Public LLMs (ChatGPT, Claude, Gemini)

  • Every query sent to OpenAI, Anthropic, Google
  • Your data may be used for training unless you pay for an Enterprise plan
  • SOC2, GDPR, HIPAA—vendor-dependent, complex
  • Vendor lock-in, API limits, per-token cost

AAA AI + Ollama (100% Local)

  • Zero data sent to any third party—ever
  • Your prompts, documents, memory—stay on your machine
  • SOC2, GDPR, HIPAA—you own compliance
  • One-time GPU cost, no usage limits, no surprise bills

Deploy on your servers. Sensitive data never touches ChatGPT, Claude, or any public API. Expert multi-agent AI. Your infrastructure. Your control.

03 — Capabilities

What We Build

Core

Perception

LLM-based input understanding: intent, entities, domain, complexity.

Core

Memory

FAISS vector store for experiences. Semantic retrieval, no retraining.
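A toy version of this retrieval step, using NumPy in place of FAISS; the entries and embedding vectors are invented for illustration, and the real platform's index and schema may differ:

```python
import numpy as np

# Toy experience memory: store embedding vectors, retrieve by cosine
# similarity. The platform uses FAISS; this NumPy version only
# illustrates semantic retrieval without retraining.

class ExperienceMemory:
    def __init__(self, dim):
        self.dim = dim
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.entries = []

    def add(self, vector, entry):
        v = np.asarray(vector, dtype=np.float32).reshape(1, self.dim)
        self.vectors = np.vstack([self.vectors, v])
        self.entries.append(entry)

    def search(self, query, k=1):
        q = np.asarray(query, dtype=np.float32)
        norms = np.linalg.norm(self.vectors, axis=1) * np.linalg.norm(q)
        sims = self.vectors @ q / np.where(norms == 0, 1, norms)
        top = np.argsort(-sims)[:k]
        return [self.entries[i] for i in top]

mem = ExperienceMemory(dim=3)
mem.add([1.0, 0.0, 0.0], "fixed a deploy bug")
mem.add([0.0, 1.0, 0.0], "planned a sprint")
print(mem.search([0.9, 0.1, 0.0]))  # ['fixed a deploy bug']
```

Adding an experience is a single vector insert, which is why the memory can grow from use with no retraining step.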

Core

World Model

Structured state: entities, events, goals, relationships.

Experts

Math · Risk · Planning

Specialized reasoning modules. Add/remove at runtime.

Experts

Coding & Vision

LiveBench-optimized coding, VLM for camera and live voice.

Experts

Video Analytics

Real-time video analysis: objects, scenes, events. Runs locally—data never leaves your hardware.

Experts

Weather & Search

Open-Meteo, DuckDuckGo—live data, no API keys required.

Experts

Debate & Critique

A critic LLM reviews expert opinions: agree/disagree, errors, missing points, improved answer, confidence. Higher-quality decisions out of the box.
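In spirit, the critique pass works like the sketch below. The field names and the toy critic are placeholders (the real critic is an LLM call and its review schema is not shown here):

```python
# Hypothetical sketch of a debate-and-critique pass: a critic scores each
# expert opinion, flags issues, and the best-reviewed answer wins.

def critique(opinions, critic):
    reviews = []
    for op in opinions:
        review = critic(op)  # an LLM call in the real system
        reviews.append({"opinion": op,
                        "agrees": review["agrees"],
                        "issues": review["issues"],
                        "confidence": review["confidence"]})
    best = max(reviews, key=lambda r: r["confidence"])
    return {"answer": best["opinion"]["final_answer"],
            "confidence": best["confidence"],
            "issues_found": sum(len(r["issues"]) for r in reviews)}

# Toy critic: trusts opinions backed by evidence, flags those without.
def toy_critic(op):
    has_evidence = bool(op.get("evidence"))
    return {"agrees": has_evidence,
            "issues": [] if has_evidence else ["no supporting evidence"],
            "confidence": 0.9 if has_evidence else 0.3}

opinions = [
    {"final_answer": "Approve the loan", "evidence": ["income verified"]},
    {"final_answer": "Reject the loan", "evidence": []},
]
result = critique(opinions, toy_critic)
print(result["answer"], result["confidence"])  # Approve the loan 0.9
```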

Core

Structured Reasoning

Experts return a strict schema: final_answer, assumptions, evidence, uncertainty, failure_modes, confidence. Audit-ready, explainable outputs.
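An expert response following that schema might look like the record below. Only the field names come from the schema listed above; the values and the validation helper are invented for the example:

```python
# Illustrative structured expert output using the schema fields named above.

REQUIRED_FIELDS = {"final_answer", "assumptions", "evidence",
                   "uncertainty", "failure_modes", "confidence"}

example_output = {
    "final_answer": "Hedge 40% of EUR exposure this quarter",
    "assumptions": ["EUR/USD volatility stays near its 3-month average"],
    "evidence": ["hedging cost < 0.4% of notional"],
    "uncertainty": "Moderate: rate decisions pending",
    "failure_modes": ["sudden central-bank intervention"],
    "confidence": 0.72,
}

def is_audit_ready(output):
    # A record is audit-ready only if every schema field is present
    # and confidence is a probability in [0, 1].
    return (REQUIRED_FIELDS <= output.keys()
            and 0.0 <= output["confidence"] <= 1.0)

print(is_audit_ready(example_output))  # True
```

Because every answer carries its assumptions, evidence, and failure modes, a reviewer can audit why a decision was made rather than just what it was.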

Ops

Web Search in Chat

Live web search during conversation (DuckDuckGo, Google, Bing, Tavily). Optional LLM verification of results for accuracy.

Ops

Dynamic Scaling

Auto-scale experts by query complexity. Unlimited on-demand experts.

Ops

Local & Cloud

Ollama, DeepSeek, OpenAI, Claude, Gemini. Per-expert provider override.

Ops

Agent Packages

DMG, PKG (Mac), DEB (Linux), EXE (Windows) in Settings. Auto-connect to web.aaaai.me. System tray status; connected agents shown in UI.

Ops

Channels

Telegram, Discord, Slack, Line, WhatsApp, webhooks. Configure tokens and allowlists in Settings → Channels. Same LLM/agents as web UI.

Video & Real-time Analytics

Video/vision: camera + VLM, live voice mode. Real-time analysis of objects, scenes, and events—all running locally on your hardware. Data never leaves your infrastructure.

Beyond Single-LLM Chatbots

Compared to single-LLM or RAG-only setups, AAA AI delivers multi-expert reasoning, debate & critique for higher-quality answers, and full control over data and compliance.

Debate & critique

A critic reviews expert opinions and produces an improved answer with confidence. Fewer errors, better decisions.

Learning from use

Experience memory grows from interactions. No retraining, instant updates.

Multi-perspective

Math, risk, planning, coding—each expert contributes; planner synthesizes.

Transparent & audit-ready

See which experts were consulted. Optional structured outputs: assumptions, evidence, uncertainty.

Zero data sharing

Run with Ollama—nothing sent to ChatGPT, Claude, or any public API. Your data stays yours.

Run for $0 or scale

Fully local with Ollama. Optional cloud APIs and per-user limits when you need scale.

06 — Mission

From POC to Production AGI

The platform already includes debate & critique, answer verification, unify experts, web search in chat, and structured expert outputs. Scale by adding more expert modules (100s → 1000s), growing memory to millions of experiences, and distributing across GPUs. Same architecture—you scale the components.

Product sheet

Specifications

Platform facts at a glance: where AAA AI runs, how notes and workflows fit in, and what ships today across chat, tasks, and the macOS IDE.

Languages: UI in English & Russian; chat & experts follow your model's multilingual coverage (typically 50+ languages; cloud models often 100+).
Platforms: Web, iOS & iPadOS, Android, macOS Chat, macOS IDE, desktop agents (macOS / Linux / Windows), Telegram, Discord, Slack.
Chat: Multi-expert panel, debate & critique, streaming, web search in chat, memory-backed context, per-expert model/provider.
Notes: Voice-first capture, week/day strip, list + detail, server sync; iOS full-screen notes + macOS Chat notes sidebar.
Workflows: Visual orchestration UI (agents, HTTP, webhooks, conditions) plus 122+ integrations (Slack, GitHub, Notion, CRMs, sheets, and more).
Tasks: Kanban-style board tied to the same AI pipeline: create, stage, and track runs with history.
IDE (macOS): Native app with multi-agent coding, quick edit, Git, terminal, and MLX on Apple Silicon for local inference.
Memory & storage: Vector experience memory (FAISS); PostgreSQL for accounts and settings. Retention and caps are yours on self-host; SaaS follows per-product limits.
Privacy modes: iOS/iPadOS: on-device chat (e.g. Llama 3.2 1B) + optional vision, offline-first. Desktop: Ollama / MLX or cloud APIs; you choose per expert.
Interfaces: HTTPS REST, WebSocket streaming, MCP tools, integration OAuth/connectors, downloadable agent installers (DMG, PKG, DEB, EXE).

Latest product updates

As of March 2026

Notes & capture

Voice recording UX with animated orb; calendar week/day filtering; note detail, edit, and delete; parity between iOS and macOS Chat notes.

Chat & experts

Live voice and video-style sessions; smoother expert switching; web search in thread with optional LLM verification of results.

Workflows & integrations

Orchestration canvas expanded for triggers, approvals, and agent nodes; deeper hooks into the same backends as the web app.

Tasks

Board views aligned with agent/task execution; clearer staging and history for AI-backed work items.

IDE

MLX-backed assistance on Apple Silicon; chat panel and multi-agent coding loop tightened for day-to-day repo work.

Channels & devices

Telegram, Discord, and Slack bots on the same experts as the portal; mobile MCP-style device capabilities for richer automation.

iPhone & iPad: Full offline mode

The AAA AI app runs on your device with on-device models. No subscription required for offline—your data never leaves your pocket. Download once, use anywhere: planes, underground, low connectivity.

  • On-device inference — Llama 3.2 1B for chat; optional vision model (Qwen2-VL 2B) for photos and live camera. All runs locally.
  • Training & learning — Same architecture as the platform: experience memory, multi-expert reasoning, no retraining. When online, chats sync and benefit from server-side memory and experts.
  • Privacy & compliance — Zero data sent to ChatGPT or any third party in offline mode. Enterprise-ready: SOC2, GDPR, HIPAA-friendly when you control the backend.
  • Works everywhere — Offline mode switches automatically after connection loss; optional Face ID. One tap to download models in Settings when on Wi‑Fi.

Use the app with web.aaaai.me when online for full multi-expert and memory; use offline mode when you need privacy or have no network.

07 — Pricing

Pricing by scale

Per-user monthly license. On-prem: one-time installation fee. Cloud SaaS: simple per-user pricing. Contact us for an exact quote in your currency.

Cloud SaaS

Per user
€20 / $20 / £20 / 2 000 ₽ per user/month
No installation fee. Hosted by us.
  • e.g. 100 users = €2 000 / $2 000 / £2 000 / 200 000 ₽ per month
Subscribe

Starter

1–100 users
€30 / $30 / £30 / 3 000 ₽ per user/month
Installation: €2 500 / $2 500 / £2 500 / 250 000 ₽
  • e.g. 50 users ≈ €1 500/mo + €2 500 setup
Pay installation

Growth

101–1 000 users
€20 / $20 / £20 / 2 000 ₽ per user/month
Installation: €10 000 / $10 000 / £10 000 / 1 000 000 ₽
  • e.g. 500 users ≈ €10 000/mo + €10 000 setup
Pay installation

Enterprise

1 001–10 000 users
€15 / $15 / £15 / 1 500 ₽ per user/month
Installation: €10 000–100 000 / $10 000–100 000 / £10 000–100 000 / 1–10 mln ₽
  • e.g. 5 000 users ≈ €75 000/mo + custom setup
Pay installation
Subscription / Payment
08 — Get in touch

Back the future of reasoning

Open-source POC; the production codebase is private, developed by Kirill Pokidov. Explore the repo, run it locally with Ollama, or reach out for collaboration.

Report issue / Support Contact / Email Subscription / Payment Portal