From: aidotengineer
The field of AI engineering is undergoing a significant transformation, with a notable pivot towards agent engineering. This shift marks a new phase in the discipline’s maturity, driven by evolving capabilities and increasing demand for sophisticated AI applications [02:07:07].
Current State of AI Engineering
AI engineering has seen considerable growth, marked by publications like an O’Reilly book on the topic [00:37:37]. Past summits have aimed to mark the state of the industry, covering topics such as the rise of the AI engineer, the three types of AI engineers, and the discipline’s maturation and spread [01:05:05].
Despite this progress, some observers, including Gartner, perceive the field to have peaked, suggesting a potential decline [00:48:00]. The discipline nonetheless continues to evolve, facing resistance from those who view it as either ML engineering with “a few prompts” added [01:41:00] or software engineering that merely “calls a few LLM APIs” [01:47:00]. The speaker anticipates that AI engineering will emerge as its own distinct discipline, growing beyond its current composition of roughly 90% software engineering and 10% AI [01:54:00].
The Pivot to Agent Engineering
The AI Engineer Summit has deliberately pivoted to become the agent engineering conference, a decision not made lightly [02:44:00]. This focus means saying “no” to other areas like RAG (Retrieval-Augmented Generation), open models, and GPUs, in favor of concentrating solely on agents [02:53:00]. This strategic narrowing has opened new doors, attracting speakers who work on agent frameworks [03:26:00]. A new rule has been implemented to exclude vendor pitches, prioritizing insights from those actively putting agents into production [03:34:00].
It has been observed that “everything plus agent works,” with combinations like “agent plus RAG,” “agent plus sent,” and “agent plus search” proving effective [04:00:00].
Defining an Agent
A crucial step in any agent conference is defining what an agent is [05:25:00]. Perspectives vary:
- Machine learning: Views agents in the context of reinforcement learning environments, focusing on actions taken to achieve goals [05:41:00].
- Software engineering: Tends to be more reductive, sometimes simply equating an agent to a for-loop [05:49:00]; a minimal sketch of this view appears after the list of common themes below.
Simon Willison, considered a “patron saint” in the AI engineering community, has crowdsourced over 300 definitions [06:01:00]. Common themes in these definitions include:
- Goals [06:17:00]
- Tools [06:19:00]
- Control flow [06:20:00]
- Long-running processes [06:21:00]
- Delegated authority [06:22:00]
- Small multi-step task completion [06:23:00]
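To make the reductive “for-loop” view concrete, here is a minimal sketch, in Python, of an agent loop that also touches several of the crowdsourced themes above: a goal, tools, control flow, multi-step task completion, and a cap on long-running work. Everything in it (`call_llm`, `TOOLS`, `run_agent`) is a hypothetical stand-in rather than any particular framework’s API.

```python
# A minimal agent-as-a-for-loop sketch. The model client (call_llm) and the
# tools below are hypothetical stand-ins, not any specific vendor's API.

def call_llm(goal: str, history: list[dict]) -> dict:
    """Stand-in for a real model call: request one search, then finish."""
    if not history:
        return {"action": "search", "input": goal}
    return {"action": "finish",
            "output": f"answer to {goal!r} using {history[-1]['observation']}"}

TOOLS = {
    "search": lambda query: f"results for {query!r}",  # stand-in search tool
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    """Control flow: loop until the model declares the goal done or the step budget runs out."""
    history: list[dict] = []
    for _ in range(max_steps):                  # the 'for-loop' of the reductive definition
        decision = call_llm(goal, history)      # the model decides the next action
        if decision["action"] == "finish":      # goal reached: return the final output
            return decision["output"]
        tool = TOOLS[decision["action"]]        # delegated authority: the agent picks its tool
        observation = tool(decision["input"])
        history.append({"action": decision["action"], "observation": observation})
    return "stopped: step budget exhausted"     # long-running tasks need an explicit cap

print(run_agent("summarize the pivot to agent engineering"))
```

Production agent stacks typically layer planning, memory, guardrails, and real tool integrations on top, but the core control flow is often not much more than this loop.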
OpenAI has also recently released its own definition of agents, which warrants attention given how much the company is building on top of it [06:52:00]. This evolving set of definitions highlights the dynamic nature of the field.
Why Agents are Working Now
The current effectiveness of agents, compared to a year or two ago, is attributed to several key factors:
- Increased Capabilities: AI capabilities have significantly grown, reaching human baselines around 2023-2025 [07:19:00]. This includes better reasoning, improved tool use, and more advanced tools [07:37:00].
- Model Diversity: OpenAI’s market share has decreased from 95% two years ago to about 50% now, indicating a much more diverse model landscape [07:51:00]. The emergence of two new Frontier Model Labs in the past week also adds to this diversity [07:59:00].
- Lower Cost of Intelligence: The cost of GPT-4 level intelligence has decreased by 1,000 times in the last 18 months, with similar trends for other intelligence levels [08:14:00].
- RL Fine-tuning Options: The availability of RL fine-tuning options further enhances agent development [08:28:00].
- Focus on Outcomes: Conversations with leaders like Bret Taylor highlight the shift towards charging for outcomes rather than just for costs [08:43:00].
- Multi-agents and Faster Inference: Advancements in multi-agent systems and faster inference due to better hardware contribute significantly [08:49:00].
Together, these factors underpin the impact and future potential of AI agents, making them more viable and effective now than ever before.
Agent Use Cases and Anti-Use Cases
Certain agent use cases are showing clear product-market fit (PMF):
- Coding Agents: These agents assist in generating, debugging, and optimizing code [09:12:00].
- Support Agents: These agents provide automated customer service and support [09:12:00].
- Deep Research Agents: These agents excel at conducting thorough research [09:15:00].
There are also “anti-use cases” that should be avoided:
- Flight Booking Agents: The speaker suggests allowing users to book their own flights [09:25:00].
- Instacart Order Agents: Similarly, users prefer to manage their own grocery orders [09:33:00].
- Astroturfing Agents: Agents used for deceptive online campaigns are to be avoided [09:35:00].
These examples offer insight into building effective agents by focusing on areas where automation truly adds value without removing user agency or being deployed for harmful purposes.
The Future of AI and Agents
The growth of AI products, particularly those built around agentic models, is significant. OpenAI reported 400 million users, 33% growth in three months [09:47:00]. ChatGPT usage roughly doubled after the introduction of agentic models [10:09:00]. It is projected that ChatGPT could reach a billion users by the end of the year, quintupling its user base from September of last year [10:27:00].
This massive growth underscores that the growth of any AI product is “very, very tight to reasoning capabilities and the amount of agents that you can ship for your users” [10:41:00]. The role of the AI engineer is now evolving towards building agents, much like ML engineers build models and software engineers build software [11:01:00].