
The Power Behind Simple Architectures

When embarking on projects involving artificial intelligence (AI), it's easy to get caught up in the complexity of cutting-edge models and prompts. However, my experience with LangGraph has taught me that some of the most reliable systems are built not by reinventing the wheel but by adhering to tried-and-true patterns.

In this post, I'll explore three reusable agent patterns that have significantly streamlined my workflow: the Analyzer Agent, the Router Agent, and the Report Compiler. None of them is groundbreaking; they're simple patterns that are remarkably effective at producing reliable outcomes.

Breaking Down the Analyzer Agent

The Analyzer Agent is a generic tool designed to take raw data, process it using predefined tools, and return a summary. This separation of concerns makes the agent highly reusable across different domains, such as financial analysis or customer support ticket triage.

Example: Financial Earnings Analysis

To illustrate how this works, let's consider an example where we need to analyze quarterly filings for public companies:

from langchain_core.language_models import BaseChatModel
from langchain_core.tools import BaseTool
from typing import List

class AnalyzerAgent:
    """
    A generic analyzer agent. Provide it with a system prompt (context)
    and tools, and it will reason over raw data to produce a summary.
    """

    def __init__(
        self,
        llm: BaseChatModel,
        tools: List[BaseTool],
        context: str,
    ):
        self._llm = llm
        self._tools = tools
        self._context = context

    async def analyze(self, data: str) -> str:
        """Run the analysis loop: LLM reasons, calls tools, summarizes."""

        # Bind tools to the LLM so it can call them during reasoning
        llm_with_tools = self._llm.bind_tools(self._tools)

        messages = [
            {"role": "system", "content": self._context},
            {"role": "user", "content": data},
        ]

        while True:
            response = await llm_with_tools.ainvoke(messages)
            messages.append(response)

            if not response.tool_calls:
                # No more tool calls, the LLM is done reasoning
                return response.content

            for tool_call in response.tool_calls:
                # Look up the tool the LLM asked for by name
                tool = next(
                    t for t in self._tools if t.name == tool_call["name"]
                )
                result = await tool.ainvoke(tool_call["args"])
                messages.append({
                    "role": "tool",
                    "content": str(result),
                    "tool_call_id": tool_call["id"],
                })

In this example, the AnalyzerAgent takes a financial analysis prompt and a set of tools to process filings and earnings call transcripts. The result is a detailed summary that can be used by stakeholders.

Example: Customer Support Ticket Triage

Here's another use case where we triage customer support tickets:

USER_ORDER_SUMMARY = """
You are an order fulfillment assistant. Examine the incoming
order and any related customer history to assess urgency and product.

## PROCESS
1. Use the fetch_customer_history tool to pull past purchase data.
2. Use the check_tier tool to determine the customer's membership level.
3. Analyze the order content for urgency indicators.

## OUTPUT
Return a summary covering:
- Product category (electronics, clothing, home goods, gifts)
- Urgency assessment (high priority, standard, low)
- Relevant customer context (tenure, tier, recent purchases)
- Recommended routing
"""

By using different prompts and tools, we can tailor the AnalyzerAgent to fit various scenarios without changing its core structure.

The Router Agent: Navigating Branches

The Router Agent takes the output from the Analyzer Agent and uses it to decide the next step in a workflow. This is particularly useful when dealing with branching logic that requires understanding unstructured text, such as determining whether a financial document needs a deep dive or can be handled with a standard summary.

Example: Financial Document Routing

Let's look at an example where we route documents based on their analysis:

from typing import Optional

class RouterAgent:
    """
    Routes a document to the next workflow branch based on its analysis.
    """

    def __init__(self, llm: BaseChatModel, route_selector: "RouteSelector", routing_prompt: str):
        self.llm = llm
        self.route_selector = route_selector
        self.routing_prompt = routing_prompt

    @property
    def route(self) -> Optional[str]:
        """The branch chosen by the LLM."""
        return self.route_selector.route

    async def route_document(self, analysis: str) -> str:
        """Run the router: the LLM reads the analysis and calls select_route."""

        llm_with_tools = self.llm.bind_tools(
            [self.route_selector.select_route]
        )

        messages = [
            {"role": "system", "content": self.routing_prompt},
            {"role": "user", "content": analysis},
        ]

        response = await llm_with_tools.ainvoke(messages)

        if response.tool_calls:
            tool_call = response.tool_calls[0]
            await self.route_selector.select_route(**tool_call["args"])

        return self.route_selector.route

The RouterAgent ensures that the workflow branches correctly based on the analysis, making the system more dynamic and adaptable.

Structuring Outputs with the Report Compiler

Finally, the Report Compiler is a crucial component for ensuring clean, structured outputs at the end of complex workflows. It extracts data from conversation history into validated Pydantic schemas, providing a reliable way to generate reports or populate databases.

Example: Financial Report Compilation

Let’s see how we can compile a financial report:

from pydantic import BaseModel, Field
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import HumanMessage, AIMessage
from typing import List

class SalesForecast(BaseModel):
    """Structured output for quarterly sales forecast."""

    company_name: str = Field(description="Company legal name")
    stock_symbol: str = Field(description="Stock ticker symbol")
    quarter: str = Field(description="Fiscal quarter, e.g. Q3 2024")
    revenue_projected: float = Field(description="Projected revenue in millions USD")

    # Other fields...

prompt = build_extraction_prompt(SalesForecast)

predictor = ForecastCompiler(
    llm=my_llm,
    schema=SalesForecast,
    prompt=prompt
)
forecast = await predictor.compile(messages_history)

Because build_extraction_prompt derives its instructions from the schema itself, the prompt and the expected output format stay in sync without manual editing.
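The build_extraction_prompt function isn't shown above; one plausible sketch (assuming Pydantic v2's `model_fields` API) walks the schema's fields and turns their descriptions into extraction instructions:

```python
from pydantic import BaseModel, Field


def build_extraction_prompt(schema: type[BaseModel]) -> str:
    """Generate an extraction prompt from a Pydantic schema's
    field names and descriptions."""
    lines = [
        "Extract the following fields from the conversation history.",
        "Return them exactly as described:",
        "",
    ]
    for name, info in schema.model_fields.items():
        description = info.description or name
        lines.append(f"- {name}: {description}")
    return "\n".join(lines)


class SalesForecast(BaseModel):
    """Abbreviated schema from the example above, for demonstration."""

    company_name: str = Field(description="Company legal name")
    stock_symbol: str = Field(description="Stock ticker symbol")


print(build_extraction_prompt(SalesForecast))
```

Adding a field to the schema automatically adds it to the prompt, which is exactly the kind of coupling you want between extraction instructions and validated output.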

Why These Patterns Work

The beauty of these patterns lies in their simplicity and reusability. By breaking down complex tasks into small, specialized agents, we achieve greater reliability, easier testing, and a more legible system. Each agent performs one job well, making the overall architecture robust and maintainable.

Well-designed agentic systems favor clear responsibilities and effective composition over intricate prompt engineering or a single highly capable but unpredictable agent.

Conclusion

These three patterns have transformed my approach to building AI-driven workflows. They offer a practical and reliable way to structure complex processes, making them easier to manage and less prone to errors. By focusing on simplicity and composable parts, we can build robust systems that perform well in production environments.

If you're looking to implement similar solutions or just want to compare notes, feel free to reach out! Let's discuss how these patterns can be adapted for your projects.