From: aidotengineer

Full stack AI engineering today means deploying “zero ops resilient agent-powered user-ready apps” in serverless environments [00:00:23]. The core challenge for AI engineers is getting agentic workflows into the hands of users [00:00:28]. This modern approach calls for a specific infrastructure: a client application, an agent framework, and an orchestration layer, all running serverlessly in the cloud [00:00:48].

Agent Frameworks

Agent frameworks are foundational for building AI applications, and new options emerge constantly [00:01:16].

Examples of Agent Frameworks

Preferred Agent Framework: OpenAI Agents SDK

The OpenAI Agents SDK is highlighted for its capabilities [00:03:17].

Orchestration Layers

Orchestration layers are crucial for managing complex AI workflows, especially for long-running jobs that might exceed typical cloud function time limits [00:07:36].

Examples of Orchestration Layers

Preferred Orchestration Layer: Inngest

Inngest is favored for its event-driven nature and ease of use [00:03:53]:

  • Uses events to trigger workflows, eliminating the need to manage JSON state machines [00:03:56]
  • Operates entirely on demand, removing concerns about server warm-up [00:04:01]
  • Features automatic retry mechanisms [00:04:06]
  • Provides step-level observability to monitor workflow progress and identify errors [00:04:08]
  • Offers a one-click integration with Vercel [00:04:14]
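The event-driven pattern described above can be sketched in plain Python. This is a hypothetical, stdlib-only stand-in, not Inngest's actual API: the real SDK provides event registration, retries, and observability out of the box, but the control flow looks roughly like this.

```python
# Hypothetical stand-in for an event-driven orchestrator.
# Handlers register for event names; each step retries on failure.
import time

HANDLERS = {}

def on_event(name):
    """Register a workflow to run when an event with this name is sent."""
    def register(fn):
        HANDLERS.setdefault(name, []).append(fn)
        return fn
    return register

def send(name, data):
    """Trigger every workflow registered for this event."""
    return [fn(data) for fn in HANDLERS.get(name, [])]

def run_step(fn, retries=3, delay=0.0):
    """Run one workflow step, retrying automatically on failure."""
    for attempt in range(1, retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise
            time.sleep(delay)

@on_event("newsletter.requested")
def generate(data):
    research = run_step(lambda: f"research on {data['topic']}")
    draft = run_step(lambda: f"draft: {research}")
    return draft

print(send("newsletter.requested", {"topic": "AI agents"}))
# → ['draft: research on AI agents']
```

Because workflows fire only when an event arrives, nothing runs (or bills) while the system is idle, which is the on-demand property the talk highlights.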

Integrating Agent Frameworks and Orchestration

A recommended stack for AI engineering combines Next.js for the client application, OpenAI’s Agents SDK for agentic capabilities, Inngest for orchestration, and Vercel for serverless deployment [00:02:55].

Architectural Overview

The typical architecture involves a Next.js client app connected to a database [00:06:02]. When new work is needed, the client app triggers a workflow by sending an event to the Inngest service [00:06:11]. Inngest, acting as the orchestration layer, manages the connection to the Python serverless functions where the AI agents (built with the OpenAI Agents SDK, which is currently Python-only) run [00:06:17]. Vercel automatically hosts these Python functions [00:06:34]. The functions handle AI inference and return results to the orchestration layer, which then updates the client app and caches data in the database [00:06:41].
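The round trip above can be traced with hypothetical stand-ins for each layer. None of these names come from the real APIs; the sketch only shows how the client, orchestrator, agent function, and database cache hand data to one another.

```python
# Hypothetical stand-ins for each layer of the architecture.

DATABASE = {}  # stands in for the client app's database/cache

def agent_function(payload):
    """Stands in for a Vercel-hosted Python function running the agent."""
    return {"result": f"inference for {payload['job']}"}

def orchestrator(event):
    """Stands in for the orchestration layer: receives the event,
    invokes the serverless function, then caches the result."""
    output = agent_function(event["data"])
    DATABASE[event["data"]["job"]] = output["result"]
    return output

def client_trigger(job):
    """Stands in for the Next.js app sending an event to the orchestrator."""
    return orchestrator({"name": "job.requested", "data": {"job": job}})

client_trigger("summarize")
print(DATABASE)
# → {'summarize': 'inference for summarize'}
```

The key design point is that the client never calls the Python function directly: it only emits an event, and the orchestration layer owns invocation, result handling, and caching.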

Example Application Workflow

An example application that generates a newsletter demonstrates this integration [00:07:08]. The workflow highlights:

  • Serverless Scalability: The system supports long-running jobs without crashing or hitting cloud function time limits, and costs scale with actual usage [00:07:34].
  • Local Developer Experience: The setup allows a seamless local development environment with just three terminals: one each for the Python agents, the Next.js app, and the Inngest dev server [00:07:56].
  • Type Safety: Full type safety is maintained across the stack using Pydantic in Python and TypeScript in Next.js [00:08:03].
  • Inngest Workflow Structure: Workflows in Inngest are defined as clear, individual steps. Each step.run invocation executes reliably and in order, passing results between steps [00:13:28]. For instance, one agent performs research, another formats the newsletter, and a final step saves the output to storage [00:13:56].
  • Vercel Deployment: Vercel automatically detects and deploys Python functions within the API directory, simplifying AI agent deployment without requiring extra configuration files such as vercel.json [00:12:01].
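The step structure above (research → format → save) can be sketched with a minimal step.run-style helper. The helper, the workflow names, and the storage dict are all illustrative, not Inngest's actual API, and a stdlib dataclass stands in for the Pydantic model that gives the Python side its types.

```python
from dataclasses import dataclass

@dataclass
class Newsletter:
    """Stands in for the Pydantic model whose shape mirrors
    the TypeScript types on the Next.js side."""
    topic: str
    body: str

STORAGE = {}  # stands in for blob/database storage

def step_run(name, fn):
    """Illustrative step.run-style helper: each named step executes
    once, and its return value is passed to later steps."""
    return fn()

def newsletter_workflow(topic):
    research = step_run("research", lambda: f"key findings about {topic}")
    issue = step_run("format", lambda: Newsletter(topic=topic, body=research))
    step_run("save", lambda: STORAGE.update({topic: issue}))
    return issue

issue = newsletter_workflow("serverless agents")
print(issue.body)
# → key findings about serverless agents
```

Splitting the job into named steps is what lets the orchestrator retry or resume an individual step rather than re-running the whole workflow, which is how long-running jobs fit inside short serverless function limits.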

This combination of tools delivers the expected scalability and resilience along with the full agentic power of OpenAI’s SDK [00:15:29].