From: aidotengineer

Introduction to AI Engineering and Agents

The field of AI engineering is developing rapidly, maturing and spreading across disciplines [01:19:17]. Historically, AI engineering talks have sought to landmark the state of the industry, from the rise of the AI engineer to the maturation of the discipline [01:05:08]. There is ongoing debate about the nature of AI engineering, however: some see it as an extension of machine learning (ML) with “a few prompts” added [01:41:00], while others view it primarily as software engineering that calls Large Language Model (LLM) APIs [01:47:00]. The projection is that AI engineering will emerge as its own distinct discipline, with the AI-specific component growing well beyond the roughly 10% share it currently represents relative to software engineering [01:55:00].

A significant shift in focus for the AI Engineer Summit has been its pivot to being an agent engineering conference [02:44:00]. The decision was driven by audience interest: the top-performing talks from the previous year showed a strong appetite for “all the agentic things” [03:20:00]. This specialization means saying “no” to other areas, such as Retrieval-Augmented Generation (RAG), open models, and GPUs, in order to concentrate specifically on agents [02:53:00].

Defining “Agent”

Before delving into the role of AI agents, it is crucial to define what an agent is [05:25:00]. Perspectives vary depending on the background:

  • Machine Learning (ML): Often frames an agent in reinforcement learning terms, as an entity taking actions in an environment to achieve goals [05:41:00].
  • Software Engineering (SWE): Tends to be more reductive, seeing an agent as little more than a simple “for loop” around an LLM call [05:49:00] (a minimal sketch of this view follows the list of definition themes below).

Simon Willison, considered a “patron saint” in the AI engineering community, has crowdsourced over 300 definitions of what an agent is [06:01:00]. Common themes among these definitions include:

  • Being goal-oriented [06:17:00]
  • Utilizing tools [06:17:00]
  • Involving control flow [06:20:00]
  • Running long processes [06:20:00]
  • Having delegated authority [06:22:00]
  • Completing small, multi-step tasks [06:23:00]
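
As a concrete illustration of the software engineer’s “for loop” framing referenced above, the following is a minimal, hypothetical Python sketch that maps the crowdsourced themes (goal-oriented, tool use, control flow, long-running, delegated authority, multi-step) onto a loop around a model call. The call_llm stub, the TOOLS table, and run_agent are illustrative assumptions rather than any particular framework’s API; a real agent would replace call_llm with an actual LLM API call and parse its response.

    # Minimal, illustrative sketch of the "agent as a for loop" view.
    # call_llm is a placeholder, not a real API; swap in a model call in practice.
    from typing import Callable

    # Delegated authority: the tools the agent is allowed to invoke.
    TOOLS: dict[str, Callable[[str], str]] = {
        "search": lambda q: f"results for {q!r}",  # stand-in for a real search tool
        "finish": lambda answer: answer,           # signals that the goal is met
    }

    def call_llm(history: list[str]) -> tuple[str, str]:
        """Placeholder for a model call: returns (tool_name, tool_input)."""
        return ("finish", "stub answer") if len(history) > 1 else ("search", history[0])

    def run_agent(goal: str, max_steps: int = 10) -> str:
        history = [goal]                   # goal-oriented: the loop works toward `goal`
        for _ in range(max_steps):         # control flow for a long-running, multi-step task
            tool, arg = call_llm(history)  # the model chooses the next action
            result = TOOLS[tool](arg)      # tool use under delegated authority
            if tool == "finish":
                return result
            history.append(result)         # feed the observation back to the model
        return "step limit reached"

    print(run_agent("summarize the crowdsourced agent definitions"))

The sketch is only meant to show that each theme in the list above can fit inside one loop; it is not a statement about how any particular production agent is built.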

OpenAI also recently released a new definition for agents [06:52:00], which is expected to influence future developments [07:03:00].

Why Agents are Working Now

The current emergence and effectiveness of AI agents are attributed to several factors [07:12:00]:

  • Improved Capabilities: AI capabilities, particularly in reasoning and tool use, have grown significantly from 2023 to 2025 and are now reaching human baselines [07:19:00].
  • Model Diversity: The frontier model market has diversified; OpenAI’s share has dropped from roughly 95% two years ago to about 50% today as new frontier labs have emerged [07:51:00]. This increased competition is seen as an exciting development for 2025 [08:05:00].
  • Reduced Cost of Intelligence: The cost of GPT-4-level intelligence has fallen by a factor of roughly 1,000 over the last 18 months, making advanced AI far more accessible [08:14:00].
  • RL Fine-tuning Options: New options for Reinforcement Learning (RL) fine-tuning are becoming available [08:28:00].
  • Multi-agent Systems: Significant work is being done on multi-agent systems [08:49:00].
  • Faster Inference: Improvements in hardware are leading to faster inference times [08:50:00].

Challenges and Benefits

Many leaders, including Satya Nadella, Roman, Greg Brockman, and Sam Altman, predict that 2025 will be “the year of agents” [04:20:00]. The excitement has nonetheless been met with skepticism: some were initially doubtful, and “agents” was at one point dismissed as a buzzword people were tired of hearing [04:48:00]. An interesting reversal followed, with OpenAI initially advising companies to remove “agents” from their branding and later suggesting they put it back [05:15:00].

Agent Use Cases and Production

In terms of practical applications, certain agent use cases have found clear product-market fit (PMF):

  • Coding Agents [09:12:00]
  • Support Agents [09:12:00]
  • Deep Research Agents [09:15:00]

However, there are also “anti-use cases” that developers are urged to avoid demonstrating, such as agents that book flights or place Instacart orders, since users often prefer to handle these tasks themselves [09:24:00].

Impact on User Growth and Future Potential

The growth of AI products, particularly ChatGPT, is tightly linked to reasoning capabilities and the deployment of agents [10:41:00]. OpenAI reported 400 million users, representing 33% growth in three months [09:46:00], and ChatGPT’s user base is projected to reach one billion by the end of the year, quintupling its September usage [10:28:00]. This rapid growth suggests that the ability to ship agentic models correlates directly with increased user adoption [10:09:00].

The role of the AI engineer is now evolving toward building agents, just as ML engineers build models and software engineers build software [11:00:00]. This shift points to significant opportunities for those involved in developing and deploying AI agents [10:54:00].