From: aidotengineer
Introduction
AI engineering is an emerging discipline that is still very early in its development [01:59:00]. It is expected to grow beyond its current state of being “90% software engineering, 10% AI” [02:03:00]. The speaker takes an “anthropological” approach to the field, observing how people describe themselves, form groups, and establish identities and industries [02:18:00].
Evolution of AI Engineering
Over time, the discipline of AI engineering has been maturing and spreading across different fields [01:19:00].
Key milestones in the discussion of AI engineering include:
- “The Rise of the AI Engineer” (published on Latent Space) [01:11:00]
- “The Three Types of AI Engineer” (at the first AI Engineer Summit) [01:13:00]
- “AI Engineer World’s Fair” (last year), which focused on the discipline’s maturation and spread [01:17:00]
Challenges and Resistance
Despite its growth, AI engineering faces resistance from two perspectives within the AI engineer spectrum [01:36:00]:
- Machine Learning (ML) viewpoint: Sees the AI engineer as “mostly an MLE plus a few prompts” [01:41:00]. ML engineers speak of “test-time compute” because, from their perspective, inference mainly serves to evaluate models at test time [02:27:00].
- Software Engineering viewpoint: Considers AI engineering to be “mostly software engineering and calling a few LLM APIs” [01:47:00].
However, AI engineering is expected to eventually emerge as its own distinct discipline [01:54:00]. AI engineers, for instance, speak of “inference-time compute” because they are concerned with serving actual inference [02:30:00].
Pivot to Agent Engineering
The AI Engineer Summit has pivoted to become the “agent engineering conference” [02:44:00], a conscious decision to focus solely on this area [02:51:00]. The pivot was driven by audience interest: last year’s top-performing YouTube talks heavily featured “agentic things” [03:17:00].
A key rule for the conference is “no more vendor pitches,” aiming to focus on practical applications and shared experiences from those putting agents into production [03:36:00].
Defining an Agent
A fundamental step in discussing agents is defining the term [05:25:00]. Drawing on Simon Willison’s crowdsourced definitions, an agent can be understood in terms of:
- Goals [06:17:00]
- Tools [06:19:00]
- Control flow [06:20:00]
- Long-running processes [06:21:00]
- Delegated authority [06:22:00]
- Small multi-step task completion [06:23:00]
OpenAI also recently released a new definition of agents [06:52:00].
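These terms map naturally onto a small control loop. The sketch below is a minimal illustration of how they fit together, not the speaker’s or OpenAI’s definition; the `call_model` helper and the class/field names are hypothetical stand-ins for whatever LLM API and harness an agent actually uses.

```python
# Minimal agent-loop sketch: a delegated goal, named tools, harness-owned
# control flow, and a bounded step budget for a long-running multi-step task.
from dataclasses import dataclass, field
from typing import Callable


def call_model(history: list[str], tool_names: list[str]) -> dict:
    """Hypothetical placeholder: a real agent would prompt an LLM here."""
    # Toy policy so the sketch runs end to end: declare the goal met immediately.
    return {"type": "finish", "answer": "stubbed answer"}


@dataclass
class Agent:
    goal: str                               # the task delegated to the agent
    tools: dict[str, Callable[[str], str]]  # named tools the model may invoke
    max_steps: int = 10                     # bound on the long-running loop
    history: list[str] = field(default_factory=list)

    def run(self) -> str:
        self.history.append(f"GOAL: {self.goal}")
        for _ in range(self.max_steps):      # control flow stays with the harness
            action = call_model(self.history, list(self.tools))
            if action["type"] == "finish":   # the model decides the goal is met
                return action["answer"]
            # Delegated authority: the model picks the tool, the harness executes it.
            result = self.tools[action["tool"]](action["input"])
            self.history.append(f"{action['tool']} -> {result}")
        return "step budget exhausted"


# Usage: delegate a small multi-step task with a single (fake) search tool.
agent = Agent(goal="summarise today's tickets",
              tools={"search": lambda q: f"results for {q}"})
print(agent.run())
```

The point of the sketch is only that every term in the crowdsourced definition has a concrete home: the goal and tools are inputs, control flow and the step budget live in the loop, and delegated authority is the model choosing which tool runs next.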
Why Agents Are Working Now
Agents are gaining traction now for several reasons, which include enhanced capabilities and changes in the AI landscape:
- Improved Capabilities: Between 2023 and 2025, AI capabilities grew to “hit human baselines” [07:23:00], with better reasoning, better tool use, and better tools [07:37:00].
- Model Diversity: OpenAI’s market share has decreased from 95% to 50% in two years, indicating a more diverse landscape [07:51:00]. New frontier model labs are emerging as potential challengers [08:01:00].
- Reduced Cost of Intelligence: The cost of GPT-4-level intelligence has fallen by a factor of 1,000 over the last 18 months [08:16:00].
- RL Fine-tuning: Reinforcement-learning fine-tuning options are now available [08:28:00].
- Multi-agents and Faster Inference: Development in multi-agent systems and faster inference due to better hardware [08:49:00].
- Outcome-based Charging: Charging for outcomes rather than costs [08:44:00].
Agent Use Cases and Impact
Coding agents and support agents currently have “product market fit” (PMF), and deep research agents are also reaching PMF [09:10:00].
The speaker highlights “anti-use cases” to avoid, such as demo agents that book flights or place Instacart orders, since users prefer to handle these tasks themselves [09:24:00].
The growth of agentic models has a significant impact on user adoption:
- OpenAI reported 400 million users, a 33% growth in three months [09:46:00].
- ChatGPT’s user base grew from zero to 400 million in two to two-and-a-half years [09:55:00].
- Initial stagnation in ChatGPT usage was attributed to not shipping “agentic models” [10:09:00].
- The “o1” models have doubled ChatGPT usage [10:24:00].
- ChatGPT is projected to reach one billion users by the end of the year, roughly quintupling its user base from September of last year [10:28:00].
This growth suggests that the success of AI products correlates strongly with their reasoning capabilities and the number of agents they can ship [10:41:00].
The Future Role of AI Engineering
The job of an AI engineer is evolving towards building agents, similar to how ML engineers build models and software engineers build software [11:00:00]. This shift underscores the growing importance of agentic capabilities in the broader AI engineering landscape.