Backing Ambitious AI

Multi-Expert · Memory · No Retraining

01 — About

AAA AI — Reasoning Platform

We are building a proof-of-concept architecture that demonstrates how AI systems can reason like AGI: multiple specialized experts, persistent memory, world state tracking, and continuous learning—without retraining.

Input flows through perception → memory retrieval → expert modules → planner → action, with a world model and learning loop closing the cycle. Run fully locally with Ollama or scale to cloud LLMs.
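The pipeline above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the `Pipeline` class, its fields, and the crude keyword-based "perception" heuristic are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Pipeline:
    experts: dict = field(default_factory=dict)   # name -> reasoning function
    memory: list = field(default_factory=list)    # experience log
    world_state: dict = field(default_factory=dict)

    def run(self, text: str) -> str:
        # 1. Perception: extract a crude domain signal from the input.
        domain = "math" if any(c.isdigit() for c in text) else "general"
        # 2. Memory retrieval: recall past experiences tagged with this domain.
        recalled = [m for m in self.memory if m.startswith(domain)]
        # 3. Expert modules: consult every expert registered for the domain.
        opinions = [fn(text) for name, fn in self.experts.items() if name == domain]
        # 4. Planner -> action: synthesize a single answer.
        answer = opinions[0] if opinions else f"no {domain} expert available"
        # 5. Learning loop: store the interaction and update world state.
        self.memory.append(f"{domain}: {text} -> {answer}")
        self.world_state["last_domain"] = domain
        return answer

p = Pipeline(experts={"math": lambda q: "42"})
print(p.run("what is 6 * 7"))  # -> 42
```

The point of the shape is that every stage is a swappable component: a real deployment would replace the heuristic perception with an LLM call and the list-based memory with a vector store, without changing the control flow.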

02 — Capabilities

What We Build

Core

Perception

LLM-based input understanding: intent, entities, domain, complexity.

Core

Memory

FAISS vector store for experiences. Semantic retrieval, no retraining.

Core

World Model

Structured state: entities, events, goals, relationships.

Experts

Math · Risk · Planning

Specialized reasoning modules. Add/remove at runtime.

Experts

Coding & Vision

LiveBench-optimized coding, VLM for camera and live voice.

Experts

Weather & Search

Open-Meteo, DuckDuckGo—live data, no API keys required.

Ops

Dynamic Scaling

Auto-scale experts by query complexity. Unlimited on-demand experts.

Ops

Local & Cloud

Ollama, DeepSeek, OpenAI, Claude, Gemini. Per-expert provider override.
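The expert cards above describe three ops behaviors: add/remove experts at runtime, scale the consulted set by query complexity, and override the provider per expert. A minimal sketch of a registry that combines them follows; the `ExpertRegistry` API and the 0.5 complexity threshold are illustrative assumptions, and only the provider names come from the text above.

```python
DEFAULT_PROVIDER = "ollama"  # fully local default, per the text above

class ExpertRegistry:
    def __init__(self):
        self._experts = {}  # expert name -> provider name

    def add(self, name, provider=None):
        # Per-expert provider override; falls back to the local default.
        self._experts[name] = provider or DEFAULT_PROVIDER

    def remove(self, name):
        # Experts can be dropped at runtime without touching the rest.
        self._experts.pop(name, None)

    def select(self, complexity):
        # Dynamic scaling: simple queries consult one expert,
        # complex queries consult all of them (threshold is illustrative).
        names = sorted(self._experts)
        return names[:1] if complexity < 0.5 else names

reg = ExpertRegistry()
reg.add("math")                       # default local provider
reg.add("vision", provider="openai")  # per-expert override
reg.add("planning")
print(reg.select(0.9))  # ['math', 'planning', 'vision']
```

Because the registry is just a mapping, "unlimited on-demand experts" reduces to calling `add` as new domains appear.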

Beyond Single-LLM Chatbots

Compared to RAG-only or fine-tuning-only setups, this architecture offers experience-based learning, multi-expert reasoning, and transparent decisions.

Learning from use

Experience memory grows from interactions. No retraining, instant updates.

Multi-perspective

Math, risk, planning, coding—each expert contributes; planner synthesizes.

Transparent

See which experts were consulted and how the final answer was formed.

Run for $0

Fully local with Ollama. Optional cloud APIs when you need scale.
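"Learning from use" with instant updates can be sketched as an append-only vector memory. The real system uses a FAISS vector store with proper embeddings; here a toy bag-of-characters `embed` and plain cosine similarity stand in so the example stays self-contained, and the class name is an assumption.

```python
import math

def embed(text):
    # Toy stand-in for a real embedding model: hash characters
    # into a fixed-size, L2-normalized bag-of-characters vector.
    v = [0.0] * 64
    for ch in text.lower():
        v[ord(ch) % 64] += 1.0
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else v

class ExperienceMemory:
    def __init__(self):
        self.vectors = []
        self.texts = []

    def add(self, text):
        # The whole "training" step: append and you're done.
        # No retraining, and the experience is retrievable immediately.
        self.vectors.append(embed(text))
        self.texts.append(text)

    def search(self, query, k=1):
        # Semantic retrieval: rank stored experiences by cosine similarity.
        q = embed(query)
        scored = sorted(
            ((sum(a * b for a, b in zip(v, q)), t)
             for v, t in zip(self.vectors, self.texts)),
            reverse=True,
        )
        return [t for _, t in scored[:k]]

mem = ExperienceMemory()
mem.add("user asked about the weather forecast")
mem.add("user asked for a fibonacci function")
print(mem.search("weather forecast"))
```

Swapping the list scan for a FAISS index changes the retrieval cost, not the learning model: updates remain a single insert either way.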

04 — Mission

From POC to Production AGI

This POC demonstrates the core pattern. The scaling path: add more expert modules (100s → 1000s), scale memory to millions of experiences, distribute across GPUs, add domain knowledge bases. The architecture stays the same—you scale the components.

05 — Get in touch

Back the future of reasoning

Open-source POC. Private code by Kirill Pokidov. Explore the repo, run locally with Ollama, or reach out for collaboration.

kirill@aaaai.me +382 68 624 850