Asycd AI Research (March 2026): Demos in Multi-Agent Orchestration and Bayesian Prompt Compression
The Asycd research lab officially launched the week of February 16th, initiating a high-intensity immersion into the current state-of-the-art in Generative AI.
We’ve titled this inaugural phase “The Collision.” The objective is straightforward: ingest the latest research, identify its breaking points, and build from the wreckage.
Week 1: Aggressive Intake & Intros
The first week served as a rapid-fire synthesis of the current landscape. The team conducted a deep-dive review of peer-reviewed papers, industry whitepapers, and the internal Asycd knowledge base.
Example (Saksham): Distributed State Consistency & Cost-Aware Orchestration for Multi-Agent Systems.
Saksham’s review established the theoretical backbone for AetherFlow, bridging neurobiology and distributed systems to solve “Agentic Entropy” (context bloat and high inference costs).
The “Methylation Gate” (Epigenetic AI): Inspired by DNA methylation, Saksham researched mathematical thresholds to silence “noisy” events, preventing agents from collapsing under context bloat.
Synaptic Pruning & The “Janitor Agent”: Using the brain’s method of removing unused connections as a blueprint for an asynchronous SLM (Small Language Model) that cleanses and compresses causal logs.
Ebbinghaus’s Memory Decay: Applied psychological “forgetting curves” to create a temporal decay parameter, ensuring agents prioritize recent episodic weight while consolidating long-term resonance.
Causal Ordering & Logical Clocks: Implementing Lamport’s Clocks to solve race conditions in multi-agent environments, ensuring memory remains chronologically consistent.
The Saga Pattern & Rollbacks: Mapping microservice transaction protocols to LLM workflows, allowing the system to “un-learn” or roll back states if an agent hallucination is detected.
Economic Cascading (FinOps): Researching dynamic routing to shift 70% of cognitive load to “Tier 0” heuristics and SLMs, reserving frontier models only for high-ambiguity decisions.
This review bridges neurobiology, psychology, and distributed systems to address “Agentic Entropy” (context bloat, race conditions, and cost inefficiency in multi-agent systems) — Saksham
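The causal-ordering idea above can be sketched with a minimal Lamport clock. The class and method names here are illustrative, not AetherFlow's actual API; the point is only the merge rule that keeps cross-agent event order chronologically consistent.

```python
class LamportClock:
    """Minimal logical clock for ordering events across agents."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event: advance the clock.
        self.time += 1
        return self.time

    def send(self):
        # Stamp an outgoing message with the current logical time.
        return self.tick()

    def receive(self, msg_time):
        # Merge rule: take the max of local and message time, then tick,
        # so the receive event lands strictly after the send event.
        self.time = max(self.time, msg_time)
        return self.tick()


# Two agents exchanging one message: the receiver's timestamp
# is always strictly greater than the sender's.
a, b = LamportClock(), LamportClock()
t_send = a.send()
t_recv = b.receive(t_send)
```

Because every receive takes the max before ticking, any causally related pair of events gets monotonically increasing timestamps, which is the property that resolves the race conditions mentioned above.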
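One way to read the methylation gate and the forgetting curve together: weight each event by an exponential decay and silence ("methylate") anything that falls below a threshold. Everything below, including the half-life, the threshold, and the field names, is an illustrative assumption rather than Asycd's actual parameters.

```python
def retention(age_s, half_life_s=3600.0):
    # Ebbinghaus-style forgetting curve: the weight of an event
    # halves every `half_life_s` seconds.
    return 0.5 ** (age_s / half_life_s)


def gate_events(events, now, threshold=0.25, half_life_s=3600.0):
    """Split events into expressed (kept in active context) and
    methylated (silenced, but retained rather than deleted)."""
    expressed, methylated = [], []
    for ev in events:
        weight = ev["salience"] * retention(now - ev["t"], half_life_s)
        (expressed if weight >= threshold else methylated).append(ev)
    return expressed, methylated


events = [
    {"t": 0, "salience": 0.8},     # two half-lives old: decays below the gate
    {"t": 7000, "salience": 0.9},  # recent: stays expressed
]
kept, silenced = gate_events(events, now=7200)
```

Note that methylated events are moved aside rather than discarded, matching the epigenetic framing: a silenced gene is still present and can be re-expressed.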
Week 2: The “Crappy First Draft”
In Week 2, we moved into the V0 Phase — internally known as the “Crappy First Draft” week. Here, the priority shifted from perfect theory to raw execution. Every researcher was tasked with producing a functional script, notebook, or prompt chain. The goal isn’t polished output; it’s a system that runs so we can find out why it fails.
Featured Demos & “V0 Bums”
Bayesian Prompt Compression (Joel): Joel demonstrated Structural Pattern Discovery, using a Gaussian process to compress prompts systematically while holding task performance steady.
The Result: Initial V0 testing achieved a 10% reduction in tokens on sample prompts.
Next Steps: Scaling tests across diverse prompt libraries with a specific focus on high-accuracy classification tasks.
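As a sketch of the Gaussian-process angle: fit a GP to observed (compression ratio, task accuracy) pairs, then take the most aggressive compression whose predicted accuracy still clears a floor. The sample data, the kernel settings, and the 0.94 floor are all invented for illustration; this is not Joel's actual pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical observations: fraction of tokens removed vs. task accuracy.
ratios = np.array([[0.0], [0.1], [0.2], [0.3], [0.5]])
accuracy = np.array([0.97, 0.96, 0.95, 0.90, 0.80])

# Fit a GP surrogate of accuracy as a function of compression ratio.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)
gp.fit(ratios, accuracy)

# Scan candidate ratios and keep the largest one whose predicted
# accuracy stays above a floor (0.94 here, an arbitrary choice).
candidates = np.linspace(0.0, 0.5, 51).reshape(-1, 1)
pred = gp.predict(candidates)
ok = candidates[pred >= 0.94]
best_ratio = float(ok.max()) if ok.size else 0.0
```

The GP's posterior uncertainty (available via `gp.predict(..., return_std=True)`) is what would let a fuller version trade off exploration against exploitation when choosing which compression levels to evaluate next.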
AetherFlow: Distributed State Consistency (Saksham Kapoor): Saksham presented a demo of cost-aware orchestration for multi-agent systems.
The Innovation: AetherFlow categorizes agent traces into three tiers: Expressed, Methylated, and Auditable memory.
The Result: The architecture is designed to limit hallucination spread by preserving a clear causal history across the agent network.
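The three memory tiers could be represented as simply as an enum plus a routing rule. The field names and the salience cutoffs below are guesses at the general shape, not AetherFlow's actual schema.

```python
from enum import Enum


class Tier(Enum):
    EXPRESSED = "expressed"    # active context, visible to the agent
    METHYLATED = "methylated"  # silenced, but recoverable if relevance returns
    AUDITABLE = "auditable"    # compressed causal log kept for rollback/audit


def assign_tier(trace):
    # Illustrative routing rule: hot traces stay expressed, stale but
    # potentially relevant ones are methylated, the rest go to audit.
    if trace["salience"] >= 0.7:
        return Tier.EXPRESSED
    if trace["salience"] >= 0.2:
        return Tier.METHYLATED
    return Tier.AUDITABLE
```

The key property is that no tier deletes anything: even auditable traces keep enough causal history that a hallucinated state can be traced back and rolled back, in the spirit of the saga pattern from Week 1.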
Identifying where these early systems fail — and explaining those failures through the lens of Week 1’s research — is exactly what sets the stage for our next phase.
What’s Next: Phase 2 — “Pivot & Patch”
As we enter Week 3, the lab transitions into the Iteration Phase. Our researchers will now use their V0 failure data to refine their architectures. Expect to see aggressive pivots in model selection, new approaches to vector retrieval, and the integration of advanced reasoning loops.