From: aidotengineer

This article details the use of Docker for setting up and running an interactive AI workshop, particularly focusing on the development of stateful agents. The workshop leverages a server-client architecture with Docker serving as the containerization platform for the server component.

Workshop Prerequisites

To participate in the interactive component of the workshop, attendees are advised to install Docker on their laptops [00:00:21]. After installing Docker, they need to pull a specific Docker image, which serves as the server for the workshop [00:00:24]. A Jupyter notebook, accessible via a provided link or through a workshops channel, acts as the client that runs against this Docker server [00:00:28]. The materials are also available online for later access [00:01:31].

Most attendees confirmed having Docker installed on their laptops [00:01:18]. Python is the programming language used throughout the workshop [00:14:09]. While the client and backend logic shown in the workshop are written in Python, the server can be driven from other languages such as Go or Rust, and TypeScript and REST API SDKs are available [00:14:23].

Workshop Setup Steps

  1. Start the Leta Server: Use a docker run command to kick off the Leta server from the pulled Docker image [00:16:26]. This command can be copied from the GitHub link or the Slack channel [00:16:31]. A free, live endpoint is available, so API keys are not required for the demo [00:16:50].
  2. Clone the Repository: Clone the workshop’s GitHub repository [00:19:30].
  3. Navigate to Example Directory: Change the directory into the specific example folder [00:19:40].
  4. Launch Jupyter Notebook: Run the Jupyter notebook command, which should automatically open a browser tab with the notebook [00:20:28].
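
Once the notebook is open, a quick way to confirm the client can reach the Dockerized server is a simple HTTP check. The snippet below is a minimal sketch, not the workshop's actual code: the port (8283) and the health route are assumptions, so adjust them to match your docker run port mapping and the server's documented API.

    # Minimal connectivity check from the notebook to the Leta server.
    # Assumption: the server listens on localhost:8283 and exposes a
    # health route; adjust both to match your setup and the API docs.
    import requests

    BASE_URL = "http://localhost:8283"  # assumption: default port mapping

    resp = requests.get(f"{BASE_URL}/v1/health")  # hypothetical route
    resp.raise_for_status()
    print("Server reachable:", resp.status_code)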

Architecture and Benefits of Docker in AI Agent Development

The Leta stack, used in the workshop, is an open-source system built with FastAPI, Postgres, and Python application logic [00:17:27]. The Docker container exposes a well-documented API, which is how all interactions with the AI agents occur [00:17:37].
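
Because the stack is built on FastAPI, the container's API documentation can typically be explored programmatically as well as in a browser. A minimal sketch, assuming the FastAPI defaults are unchanged (an OpenAPI schema at /openapi.json, interactive docs at /docs) and the server is on localhost:8283:

    # Fetch the server's OpenAPI schema to see every documented route.
    # Assumptions: FastAPI's default /openapi.json path and port 8283.
    import requests

    BASE_URL = "http://localhost:8283"  # assumption: default port mapping

    schema = requests.get(f"{BASE_URL}/openapi.json").json()
    print(sorted(schema["paths"]))  # the documented API surface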

Server-Client Paradigm for Stateful Agents

A key distinction of the Leta framework, compared to others like LangChain or AutoGen, is its server-client architecture [00:22:05]. This matters because agents are intended to be stateful and to persist indefinitely [00:22:11]. Persisting information indefinitely is difficult when everything is held in application state, so the server acts as a centralized source of truth [00:22:13].

By running the server in a Docker image, it can be parked anywhere, including in the cloud or easily deployed on Kubernetes [00:22:25]. This setup means that once an agent is created, a handle is provided, and subsequent messages are sent to this handle without needing to provide the full conversation history every time [00:23:09]. This is a core aspect of enabling stateful agents.
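
The handle-based flow can be sketched as follows. This is illustrative only: the routes and payload fields are assumptions rather than the server's documented schema, but the pattern matches the description above: create the agent once, keep its ID, and send each message by ID with no history attached.

    # Create an agent once, then message it by handle only.
    # Routes and field names are hypothetical; consult the API docs.
    import requests

    BASE_URL = "http://localhost:8283"  # assumption: default port mapping

    # 1. Create the agent; the server persists its state from here on.
    agent = requests.post(f"{BASE_URL}/v1/agents", json={"name": "demo-agent"}).json()
    agent_id = agent["id"]  # the handle for all future interactions

    # 2. Send messages by handle -- no conversation history is re-sent.
    for text in ["hello", "what did I just say?"]:
        reply = requests.post(
            f"{BASE_URL}/v1/agents/{agent_id}/messages",
            json={"role": "user", "content": text},
        ).json()
        print(reply)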

Sandboxed Tools

Tools used by agents within Leta are automatically sandboxed by default [00:29:24]. This allows arbitrary Python code to run securely inside the sandbox, which is essential for deployments where multiple users’ tools shouldn’t interfere with each other [01:00:14]. Local sandboxing works out of the box, and E2B keys can be supplied to run tools in E2B sandboxes, for example in cloud deployments [01:00:19].
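
Registering a custom tool might look roughly like the sketch below. The registration route and field names are assumptions; the point being illustrated is that the tool's source code is shipped to the server and executed inside its sandbox, not in the client process.

    # Register a custom Python tool for sandboxed, server-side execution.
    # The route and payload shape are hypothetical placeholders.
    import inspect
    import requests

    BASE_URL = "http://localhost:8283"  # assumption: default port mapping

    def word_count(text: str) -> int:
        """Count the words in a piece of text."""
        return len(text.split())

    # Ship the function's source to the server; it runs in the sandbox.
    requests.post(
        f"{BASE_URL}/v1/tools",  # hypothetical route
        json={"name": "word_count", "source_code": inspect.getsource(word_count)},
    )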

Multi-Agent Systems

Docker-based deployments enable robust multi-agent orchestration. Unlike frameworks where agents are “trapped” in a Python file and cannot run asynchronously, Leta’s stateful agents run on servers, maintain state, and are accessible via APIs [01:02:00]. This architecture lets agents communicate with one another through message passing over APIs, much as humans in a remote company interact through tools like Slack [01:02:26]. It supports asynchronous communication, and because memory blocks can be linked together, information can be shared across agents [00:32:12].
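
One way to picture linked memory blocks is the sketch below: a single block is created and attached to two agents, so an update written by one is visible to the other. The routes and the block_ids field are illustrative assumptions, not the documented schema.

    # Attach one shared memory block to two agents (illustrative only).
    import requests

    BASE_URL = "http://localhost:8283"  # assumption: default port mapping

    # Create a memory block that both agents will share.
    block = requests.post(
        f"{BASE_URL}/v1/blocks",  # hypothetical route
        json={"label": "shared_notes", "value": ""},
    ).json()

    # Create two agents linked to the same block.
    for name in ["worker-a", "worker-b"]:
        requests.post(
            f"{BASE_URL}/v1/agents",
            json={"name": name, "block_ids": [block["id"]]},  # hypothetical field
        )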

Development Workflow and Best Practices

For rapid iteration, a low-code environment like the Leta UI (a web-based application) is often faster than working through an SDK [00:56:07]. The UI lets users create agents, set memory blocks, and send messages in a visual interface [00:56:19]. It also provides insight into the agent’s context window, showing the full payload being sent to the LLM, which aids debugging and optimization [00:58:48]. This visibility into what goes through the context window is often difficult to achieve in other frameworks [00:59:03].
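
The same context-window visibility the UI provides should also be reachable over the API. A minimal sketch, with a hypothetical route standing in for whatever the server actually documents:

    # Inspect an agent's context window -- the payload sent to the LLM.
    import requests

    BASE_URL = "http://localhost:8283"  # assumption: default port mapping
    agent_id = "..."  # an existing agent's handle

    ctx = requests.get(f"{BASE_URL}/v1/agents/{agent_id}/context").json()  # hypothetical route
    print(ctx)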

The ability to run and test tools separately from the agent is also highlighted as an advantage [00:59:45]. Developers can write custom Python tools and even make them “meta” by importing the Leta client within the tool itself, allowing agents to create or manage other agents [00:49:23]. This flexibility supports complex AI development workflows.
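
A sketch of such a “meta” tool is shown below, using plain HTTP calls rather than the Leta client to avoid guessing at SDK signatures. Everything here is illustrative: the route and field names are hypothetical, and the import happens inside the function body because the tool executes server-side, in the sandbox, rather than in the client process.

    # A "meta" tool: a tool that calls back into the server's API to
    # create another agent. Route and field names are hypothetical.
    def spawn_helper_agent(task: str) -> str:
        """Create a new agent dedicated to the given task; return its ID."""
        import requests  # imported inside the tool: it runs in the sandbox

        base_url = "http://localhost:8283"  # assumption: server address
        agent = requests.post(
            f"{base_url}/v1/agents",  # hypothetical route
            json={"name": f"helper-{task[:20]}"},
        ).json()
        return agent["id"]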

Overall, using Docker for the Leta server facilitates the development of robust, stateful AI agents by providing a persistent, scalable, and secure environment for agent services and their associated tools.