From: aidotengineer

The Evolving Landscape of AI Engineering

AI engineering is a maturing discipline, moving beyond the initial phases described at previous summits, which covered the “rise of the AI engineer” [01:11:00], the “three types of AI engineer” [01:13:00], and the field’s “maturing and spreading across different disciplines” [01:19:00]. An O’Reilly book on AI engineering now exists [00:37:00], though Gartner suggests the field may have already peaked [00:48:00].

Resistance and Disciplinary Emergence

Currently, there is resistance from two sides of the AI engineer spectrum [01:36:00]:

  • Machine Learning (ML) viewpoint: Sees AI engineering as primarily ML with some prompts [01:41:00].
  • Software Engineering viewpoint: Views it as mostly software engineering calling LLM APIs [01:47:00].

However, AI engineering is expected to emerge as its own distinct discipline [01:55:00]. Historically, AI engineering was considered 90% software engineering and 10% AI, with the AI share expected to grow over time [02:03:00]. Differences in terminology, such as “test time compute” (ML) versus “inference time compute” (AI engineering), highlight this emerging distinction [02:27:00].

The Pivot to Agent Engineering

The AI Engineer Summit has deliberately pivoted to become the “Agent Engineering Conference” [02:44:00]. This decision involved saying “no” to other areas like RAG (Retrieval-Augmented Generation), open models, and GPUs, to focus specifically on agents [02:53:00].

Speaker and Content Curation

A key challenge in this pivot was ensuring quality content. Last year’s top-performing YouTube talks indicated a strong audience interest in “agentic things” [03:18:00]. However, this resulted in speakers predominantly from agent framework companies [03:26:00]. To address the question of “who’s putting this in production?” [03:32:00], a new rule was introduced: no more vendor pitches [03:36:00], making content curation significantly harder [03:46:00].

The “Everything + Agent Works” Formula

A simple observation highlights the potential: “everything plus agent works” [04:00:00].

This “simple formula” is considered a way to “make money in 2025” [04:07:00].

Debunking the “Year of Agents” Claim

The phrase “2025 is the year of Agents” is a common prediction [04:20:00], often pushed by industry leaders like Satya Nadella, Roman, Greg Brockman, and Sam Altman [04:32:00]. However, the speaker and co-host were initially skeptical [04:48:00], reflecting a broader audience sentiment that “agents” is a buzzword people are tired of hearing [05:00:00].

OpenAI’s stance on branding has also shifted: in March 2024 it advised against using “agents” in branding, but it now suggests putting the term back in [05:09:00].

The Fundamental Challenge of Defining “Agent”

A significant hurdle in the field is defining what an “agent” is [05:25:00]. Different perspectives exist:

  • Machine Learning: Focuses on reinforcement learning environments, actions, and achieving goals [05:41:00].
  • Software Engineers: Take a more reductive view, often seeing an agent as little more than a simple “for loop” [05:49:00] (a minimal sketch of this view follows below).
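
To make the “for loop” view concrete, here is a minimal sketch of an agent loop in Python. The llm() call, TOOLS registry, and search_web() helper are hypothetical placeholders for illustration, not any particular framework’s or provider’s API.

```python
# A minimal sketch of the "agent as a for loop" view: call a model, let it
# request tools, run them, and feed the results back until the model stops.
# llm(), TOOLS, and search_web() are hypothetical stand-ins, not a real API.

def search_web(query: str) -> str:
    """Hypothetical tool: return search results for a query."""
    return f"results for {query!r}"

TOOLS = {"search_web": search_web}

def llm(messages: list[dict]) -> dict:
    """Hypothetical model call: returns either a tool request or a final answer."""
    return {"type": "final", "content": "done"}  # stub for illustration

def run_agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):                       # the "for loop"
        reply = llm(messages)
        if reply["type"] == "final":                 # model says it is finished
            return reply["content"]
        result = TOOLS[reply["name"]](**reply["arguments"])  # model asked for a tool
        messages.append({"role": "tool", "content": result})
    return "step limit reached"
```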

Simon Willison crowdsourced over 300 definitions, highlighting the common elements across them [06:06:00].

OpenAI also recently released a new definition for agents [06:52:00].

Why Agents are Gaining Traction Now

Despite past skepticism, agents are working now for several reasons [07:12:00]:

  • Increased Capabilities: AI models’ capabilities are rapidly growing and starting to “hit human baselines” [07:19:00]. This includes better reasoning, improved tool use, and more effective tools [07:37:00].
  • Model Diversity: OpenAI’s market share has decreased from 95% to 50% in two years, leading to a much more diverse landscape [07:51:00], with new frontier model labs emerging as potential challengers [08:02:00].
  • Lower Cost of Intelligence: The cost of GPT-4-level intelligence has dropped 1,000 times in the last 18 months [08:14:00] (a quick back-of-the-envelope check follows this list).
  • RL Fine-Tuning Options: Fine-tuning models with reinforcement learning is now an available option [08:28:00].
  • Business Model Shifts: Charging for outcomes instead of costs [08:43:00].
  • Technological Advancements: Work on multi-agents and faster inference due to better hardware [08:49:00].
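
Taken at face value, the quoted 1,000x price drop over 18 months implies costs roughly halving every couple of months. A quick back-of-the-envelope check, using only the figures cited above:

```python
import math

# Back-of-the-envelope: a 1,000x price drop over 18 months is about ten
# halvings, i.e. prices halving roughly every 1.8 months. The 1,000x and
# 18-month figures come from the talk; everything else is derived from them.
drop_factor = 1_000
months = 18
halvings = math.log2(drop_factor)     # ≈ 9.97
halving_time = months / halvings      # ≈ 1.8 months
print(f"{halvings:.1f} halvings, one roughly every {halving_time:.1f} months")
```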

Agent Use Cases and Anti-Patterns

Certain agent use cases have achieved “product market fit” (PMF) [09:08:00].

However, there are also “anti-use cases” that should be avoided as primary demonstrations [09:23:00].

The Impact of Agentic Models on AI Product Growth

The growth of AI products is strongly tied to reasoning capabilities and the deployment of agents [10:41:00]. OpenAI reported 400 million users, a 33% growth in three months [09:47:00]. ChatGPT experienced a year of stagnation when it did not ship agentic models [10:05:00]; the introduction of the o1 models doubled ChatGPT usage [10:24:00], and it is projected to reach one billion users by the end of the year [10:28:00], quintupling its user base from September of last year [10:32:00]. This massive growth signifies that the job of an AI engineer is evolving toward building agents [11:00:00], much as ML engineers build models and software engineers build software [11:04:00].
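
As a sanity check, these figures are internally consistent: 33% growth implies roughly 300 million users three months earlier, and a one-billion-user target is about five times a ~200 million base from last September. A short calculation using only the numbers quoted in this section:

```python
# Sanity-check the quoted growth figures; all inputs come from the talk.
current_users = 400_000_000                        # reported users
growth = 0.33                                      # growth over the last three months
prior_users = current_users / (1 + growth)         # ≈ 300M three months ago
target = 1_000_000_000                             # projected by year end
september_base = target / 5                        # "quintupling" implies ~200M last September
print(round(prior_users / 1e6), round(september_base / 1e6))  # -> 301 200
```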