From: aidotengineer

The year 2025 is anticipated to be a significant turning point for AI, with agentic workflows transitioning from a buzzword to a practical reality in the field [00:08:00]. OpenAI has observed the emergence of AI agents and their integration into workflows [00:00:29]. This shift signifies a graduation for generative AI from being merely an assistant to becoming a co-worker [00:08:41].

Through their experience, OpenAI has identified various best practices and lessons learned for building and improving AI agents [00:08:12].

Defining an AI Agent

An AI agent is conceptualized as an AI application composed of several components of AI agents [00:09:02]:

  • Model and Instructions - A core model typically guided by prompts [00:09:06].
  • Tools - Access to tools for information retrieval and interaction with external systems [00:09:11].
  • Execution Loop - An encapsulated loop where the model controls its termination [00:09:16].

In each cycle, an agent receives natural language instructions, decides whether to issue tool calls, runs those tools, synthesizes a response from the tool’s output, and provides an answer. The agent can also determine if it has met its objective and terminate the execution [00:09:24].

Lessons Learned in Developing AI Agents

When building AI agents, particularly concerning design challenges for AI agents and evaluating and optimizing AI agents and workflows, several key insights have emerged [00:09:51]:

1. Start Simple, Optimize When Needed, Abstract Minimally

When designing an AI agent that orchestrates multiple models, retrieves data, reasons, and generates output, there are two primary approaches [00:09:58]:

  • Starting with Primitives - Making raw API calls, logging results, outputs, and failures [00:10:07].
  • Starting with a Framework - Picking an abstraction and wiring it up to handle details [00:10:14].

While frameworks are enticing for quick proof-of-concept setups, they often obscure the underlying primitives and system behavior, preventing an understanding of constraints necessary for optimization [00:10:23]. A better approach is to first build with primitives to understand task decomposition, failure points, and areas for improvement [00:10:50]. Abstraction should only be introduced when there’s a clear need to avoid reinventing the wheel (e.g., for embedding strategies or model graders) [00:11:05]. The focus should be on understanding data, failure points, and constraints, rather than just choosing a framework [00:11:23].

2. Start with a Single Agent, Then Graduate to a Network

Teams often prematurely jump into designing complex multi-agent systems with dynamic coordination and reasoning, which creates many unknowns [00:11:48]. A more effective strategy is to:

  • Start with a Single Agent - Develop a single agent purpose-built for a specific task [00:12:08].
  • Deploy and Observe - Put it into production with a limited user set and observe its performance [00:12:16].
  • Identify Bottlenecks - This process helps identify real bottlenecks such as hallucinations, low adoption due to latency, or inaccuracy from poor retrieval performance [00:12:21].
  • Incrementally Improve - Based on identified underperformance and user needs, incrementally improve the system [00:12:35].

Complexity should be increased as more intense failure cases and constraints are discovered [00:12:44]; the goal is a working system, not necessarily a complicated one [00:12:51].

3. Implement Networks of Agents with Handoffs for Complexity

For more complex tasks, a network of agents combined with handoffs can be highly effective [00:13:07].

  • Network of Agents - A collaborative system where multiple specialized agents work together to resolve complex requests or perform interrelated tasks, handling subflows within a larger agentic workflow [00:13:17].
  • Handoffs - The process where one agent transfers control of an active conversation to another [00:13:38]. Unlike human transfers, handoffs in AI can preserve the entire conversation history, allowing the new agent to seamlessly continue [00:13:53].

This approach allows for bringing the right tools and models to the right job within a flow. For example, a customer service flow might use a smaller model (like GPT-4o mini) for initial triage, a larger model (GPT-4o) for managing the user conversation, and a reasoning-focused model (O3 mini) for accuracy-sensitive tasks like checking refund eligibility [00:14:03]. Handoffs, by maintaining conversation history while swapping models, prompts, and tool definitions, offer sufficient flexibility for a wide range of scenarios [00:14:39].

4. Utilize Guardrails for Safety and Integrity

Guardrails are mechanisms that enforce safety, security, and reliability within an application, preventing misuse and ensuring system integrity [00:14:55].

  • Simple Prompts - Keeping model instructions simple and focused on the target task ensures maximum interoperability and predictable improvement in accuracy and performance [00:15:11].
  • Parallel Execution - Guardrails should generally not be part of the main prompts but run in parallel [00:15:25]. The proliferation of faster, cheaper models like GPT-4o mini makes this more accessible [00:15:31].
  • Deferred Actions - High-stakes tool calls or user responses (e.g., issuing a refund, showing personal account information) can be deferred until all guardrails have returned a clear signal [00:15:42]. This includes running input guardrails (to prevent prompt injection) and output guardrails on the agent’s response [00:15:57].

In summary, the lessons for developing and optimizing AI agents involve using abstractions minimally, starting with a single agent before graduating to a network, and keeping prompts simple while relying on guardrails for edge cases [00:16:09]. These practices address common challenges in developing AI agents and are crucial for testing and optimization of AI coding agents.