Multi-Agent System Architecture
Why a Multi-Agent Approach?
Given the complexity of financial data, no single “super model” can handle all tasks optimally. Instead, specialized Agents each focus on distinct capabilities:
Sentiment Agent: Analyzes social media, news, and transcripts for sentiment or emotion cues.
Options Agent: Interprets the Greeks and scans for unusual open interest or implied-volatility changes.
Fundamental Analysis Agent: Dives into earnings statements, balance sheets, and macroeconomic indicators.
Crypto Agent: Examines on-chain data, staking metrics, tokenomics, and developer/community signals.
Anomaly Detection Agent: Aggregates signals from other Agents to produce an overall risk or anomaly score.
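The division of labor above can be sketched as a common Agent interface. This is an illustrative sketch, not the system's actual code; the class and field names are assumptions:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class AgentResult:
    """Structured partial output an Agent returns to the Orchestrator."""
    agent: str          # which specialist produced this result
    signal: str         # e.g. "bullish", "undervalued", "anomaly"
    confidence: float   # 0.0 - 1.0
    details: dict       # agent-specific payload (metrics, excerpts, ...)


class Agent(Protocol):
    """Common interface every specialist Agent implements."""
    name: str
    def handle(self, task: dict) -> AgentResult: ...


class SentimentAgent:
    """Toy specialist: a real version would score news/social text."""
    name = "sentiment"

    def handle(self, task: dict) -> AgentResult:
        return AgentResult(self.name, "neutral", 0.5, {"ticker": task["ticker"]})
```

Because every specialist emits the same `AgentResult` shape, the Orchestrator can merge outputs without knowing each Agent's internals.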
Orchestrator Agent
At the top is an Orchestrator Agent (or “Manager Agent”) that receives queries or triggers. Examples:
User Query: “Is Tesla (TSLA) undervalued right now?”
The Orchestrator examines the query using an LLM-based intent classifier: “They want a fundamental + sentiment + anomaly check.”
Passes tasks to the Fundamental Analysis Agent (look at P/E, growth, etc.) and the Sentiment Agent (recent news sentiment).
Receives partial results, merges them, crafts a cohesive answer, then returns it to the user.
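The query flow above — classify intent, dispatch to specialists, merge partial results — can be sketched as follows. The keyword routing stands in for the LLM intent classifier, and the toy agents and their numbers are placeholders, not real outputs:

```python
def classify_intent(query: str) -> list[str]:
    """Stand-in for the LLM-based intent classifier: keyword routing."""
    q = query.lower()
    intents = []
    if any(w in q for w in ("undervalued", "p/e", "growth", "earnings")):
        intents.append("fundamental")
    if any(w in q for w in ("undervalued", "sentiment", "news")):
        intents.append("sentiment")
    return intents or ["sentiment"]


def orchestrate(query: str, agents: dict) -> dict:
    """Fan tasks out to specialist agents and collect their partial results."""
    partials = {name: agents[name](query) for name in classify_intent(query)}
    # In production, an LLM would synthesize these into a natural-language answer.
    return {"query": query, "partials": partials}


# Toy specialists standing in for the real models:
agents = {
    "fundamental": lambda q: {"signal": "fairly_valued", "pe_ratio": 65.2},
    "sentiment": lambda q: {"signal": "positive", "score": 0.62},
}

answer = orchestrate("Is Tesla (TSLA) undervalued right now?", agents)
```

Here "undervalued" triggers both the fundamental and sentiment routes, so the answer carries two partial results for the Orchestrator to merge.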
System Trigger: “We detected an unusual pattern in the Uniswap v3 FET/USDC pool.”
The Orchestrator notifies the Crypto Agent and the Anomaly Detection Agent. They share data about volume spikes, suspicious wallet movements, or social media hype.
Once complete, the Orchestrator compiles a summarized “Crypto Alert” to be posted on socials or the user terminal.
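Compiling the alert can be sketched as a simple aggregation step. The function name and the example findings below are hypothetical placeholders:

```python
def compile_crypto_alert(pool: str, findings: list[dict]) -> str:
    """Assemble a short 'Crypto Alert' summary from specialist findings."""
    lines = [f"Crypto Alert: unusual activity in {pool}"]
    for f in findings:
        lines.append(f"- {f['agent']}: {f['note']}")
    return "\n".join(lines)


alert = compile_crypto_alert(
    "Uniswap v3 FET/USDC",
    [
        {"agent": "crypto", "note": "24h volume well above trailing average"},
        {"agent": "anomaly", "note": "anomaly score elevated"},
    ],
)
```

The resulting plain-text summary is what would be posted to socials or the user terminal.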
Agent Communication and Collaboration
Agents communicate via an internal messaging layer. Each partial output is structured (e.g., JSON) and fed into a central LLM that acts as a “meta reasoner.” If Agents disagree or produce conflicting signals, the LLM can refine or reinterpret their partial results.
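One way to picture the meta-reasoning step: conflicting structured messages are reconciled before synthesis. The confidence-weighted vote below is a toy stand-in for the LLM's judgment, assuming the JSON message shape shown (which is illustrative, not the system's actual schema):

```python
import json


def meta_reason(partials: list[dict]) -> dict:
    """Toy stand-in for the LLM 'meta reasoner': reconcile conflicting
    agent signals by confidence-weighted voting."""
    votes: dict[str, float] = {}
    for p in partials:
        votes[p["signal"]] = votes.get(p["signal"], 0.0) + p["confidence"]
    winner = max(votes, key=votes.get)
    return {"signal": winner, "support": votes[winner], "raw": partials}


# Structured (JSON) partial outputs arriving over the messaging layer:
messages = [
    json.loads('{"agent": "sentiment", "signal": "bullish", "confidence": 0.7}'),
    json.loads('{"agent": "anomaly", "signal": "bearish", "confidence": 0.4}'),
    json.loads('{"agent": "fundamental", "signal": "bullish", "confidence": 0.6}'),
]
verdict = meta_reason(messages)  # → "bullish" (1.3 vs 0.4)
```

An actual LLM meta reasoner could go further than voting, e.g. asking an Agent to re-run with different parameters when its signal is the outlier.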
Stateful Conversations (Q1 2025): We maintain a short-term memory of the query context. If a user modifies their question mid-flow (e.g., “Actually, compare TSLA to Ford instead”), the Orchestrator updates the tasks accordingly.
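A minimal sketch of that short-term memory, using the TSLA-to-Ford example; the class and its fields are assumptions for illustration:

```python
class ConversationMemory:
    """Short-term memory of the active query context."""

    def __init__(self):
        self.context: dict = {}

    def update(self, **changes):
        """Apply a mid-flow modification so downstream tasks see the new context."""
        self.context.update(changes)


mem = ConversationMemory()
mem.update(tickers=["TSLA"], intent="valuation")
# User: "Actually, compare TSLA to Ford instead"
mem.update(tickers=["TSLA", "F"], intent="comparison")
```

When the context changes, the Orchestrator re-derives the task list from the updated memory rather than restarting the conversation.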
Chain-of-Thought (Internal): The multi-agent system logs how each Agent arrived at its conclusion. These logs remain internal but help with audits or error diagnoses.
LLM as the “Heart” and AI Models as the “Brain”
We can envision the large language model as the “heart” that maintains fluid communication and natural-language reasoning, while specialized ML or statistical models are the “brain” that do domain-specific heavy lifting (e.g., forecasting, anomaly detection). The LLM interprets user questions, delegates to the correct specialists, then synthesizes results.