From: aidotengineer
Steven Moon, founder of Hai, advocates for a different approach to Enterprise AI deployment. Instead of building parallel systems, he emphasizes leveraging decades of existing enterprise infrastructure [00:00:17].
The Challenge of AI Agent Deployment
The primary challenge in deploying AI agents and solutions isn’t merely about powerful models or clever problem-solving. It’s about deploying these digital workers in a way that respects the existing security, compliance, and workflow frameworks that organizations have spent years perfecting [00:00:30].
The New Computing Paradigm: LLMs and AI Agents
AI engineers are at a unique juncture in computing history. For the first time, software applications can understand users directly and use the same interfaces as people do [00:00:45]. Large Language Models (LLMs) represent a new computing paradigm where AI agents can:
- Reason about requests [00:00:58]
- Understand context [00:01:00]
- Interact naturally through existing channels [00:01:00]
Avoiding Old Patterns
Despite this new paradigm, the rush to deploy AI agents often leads to reverting to old patterns: each new AI agent becomes another external system, another portal, another set of credentials, and another security review [00:01:06]. This approach creates more barriers between users and the new capabilities AI can provide, rather than embracing the direct understanding that agents offer [00:01:21].
Satya Nadella, CEO of Microsoft, observed a fundamental shift in business software, suggesting that traditional Software as a Service (SaaS) interfaces are becoming obsolete as AI agents become the primary way users interact with business systems [00:01:34]. Building new AI portals and dashboards recreates this very pattern that is becoming obsolete [00:01:53].
Enterprise AI Agents as Digital Employees
Enterprise AI agents should function like any other employee [00:02:00]. This means they should:
- Follow security policies [00:02:03]
- Use approved systems [00:02:05]
- Stay within data boundaries [00:02:07]
- Access only what’s needed [00:02:09]
- Be monitored and audited [00:02:10]
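The constraints above can be sketched in code. The following is a minimal, hypothetical policy record (the names `AgentPolicy`, `request_access`, and the example systems and tenant are illustrative, not from the talk) showing how an agent could be limited to approved systems within a data boundary, with every decision audited:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentPolicy:
    """Hypothetical policy record treating an AI agent like any other employee."""
    agent_id: str
    approved_systems: set          # systems the agent may use
    data_boundary: str             # e.g. the tenant the agent must stay within
    audit_log: list = field(default_factory=list)

    def request_access(self, system: str, tenant: str) -> bool:
        """Grant access only to approved systems inside the data boundary,
        and record every decision for later auditing."""
        allowed = system in self.approved_systems and tenant == self.data_boundary
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": self.agent_id,
            "system": system,
            "tenant": tenant,
            "allowed": allowed,
        })
        return allowed


policy = AgentPolicy("invoice-agent-01", {"sharepoint", "exchange"}, "contoso.example")
print(policy.request_access("sharepoint", "contoso.example"))  # approved system, in boundary
print(policy.request_access("crm", "contoso.example"))         # unapproved system, denied
```

In a real deployment these checks would be enforced by the enterprise’s identity and governance stack rather than application code; the sketch only illustrates the "access only what’s needed, monitored and audited" posture.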
The Power of Existing Enterprise Systems
Enterprises already possess the necessary infrastructure for this approach [00:02:13]:
- Secure compute environments [00:02:16]
- Identity management [00:02:19]
- Data governance [00:02:21]
- Compliance frameworks [00:02:21]
- Audit capabilities [00:02:22]
These systems have been refined and hardened over decades, and many companies have their own private clouds where AI agents can operate within their security boundaries [00:02:26]. Modern AI infrastructure allows agents to run in private clouds, keep data within tenants, use existing security controls, leverage current workflows, and maintain complete oversight [00:02:37]. The technology exists to deploy AI with the same privacy controls applied to human employees [00:02:49].
Creating new interfaces for AI agents risks solving “yesterday’s problem” by building translation layers between humans and machines at a time when machines can directly understand us [00:02:56]. The question should be: “Could this capability be delivered through systems our users already know and trust?” [00:03:12].
Hai’s Approach and the Future of IT
Hai, for example, leverages battle-tested platforms like Microsoft 365 and Azure, allowing its AI agents to inherit existing trust and infrastructure [00:03:25]. This approach enables IT departments to provision AI agents exactly like human employees [00:03:47].
Jensen Huang, CEO of Nvidia, captured this transformation by stating that “the IT department of every company is going to be the HR department of AI agents in the future” [00:03:58]. This means IT teams can:
- Create agent accounts using existing Active Directory tools [00:04:13]
- Apply standard security policies [00:04:16]
- Set permissions through familiar interfaces [00:04:17]
- Use existing audit and monitoring tools [00:04:21]
There’s no new system to learn and no special handling required; an AI agent becomes just another employee to manage through familiar tools [00:04:24].
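To make the "IT as HR for agents" idea concrete, here is a sketch of the provisioning flow using a hypothetical in-memory directory (the `Directory` class, account names, and group names are invented for illustration; a real deployment would use Active Directory or Entra ID tooling instead):

```python
# Hypothetical stand-in for an enterprise directory service.
class Directory:
    def __init__(self):
        self.accounts = {}

    def create_account(self, name: str, kind: str) -> dict:
        """Create an account; 'kind' distinguishes humans from agents,
        but both go through the same process."""
        account = {"name": name, "kind": kind, "groups": set(), "enabled": True}
        self.accounts[name] = account
        return account

    def add_to_group(self, name: str, group: str) -> None:
        """Apply a standard policy group, exactly as for a human hire."""
        self.accounts[name]["groups"].add(group)


directory = Directory()

# Same steps for a human and an agent: create the account,
# then apply standard security policies through familiar group membership.
directory.create_account("jane.doe", kind="human")
directory.create_account("summarizer-agent", kind="agent")
for group in ("all-staff", "mfa-required", "dlp-monitored"):
    directory.add_to_group("summarizer-agent", group)
```

The point of the sketch is that nothing agent-specific appears in the workflow: the same create/assign/audit steps IT teams already run for people cover agents too.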
Agent-to-Agent Communication and Scalability
Email opens up a powerful pattern for agent-to-agent communications [00:04:42]. Just as humans use email to collaborate, AI agents can email each other to share data and coordinate work [00:04:50]. This pattern ensures that:
- Every interaction is fully logged and auditable [00:04:57]
- Permissions are enforced automatically through existing systems [00:05:00]
- Data flows remain transparent and controllable [00:05:04]
This framework facilitates building observable and controllable AI systems at enterprise scale [00:05:06]. While Hai chose Microsoft’s ecosystem, these patterns apply to other enterprise platforms, such as Google Workspace [00:05:13].
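A minimal sketch of this pattern, using Python’s standard `email` library: one agent composes a structured task as an ordinary email, and another parses it. The agent addresses and task payload are invented for illustration, and no mail server is involved here; in production the message would travel through the tenant’s existing mail system, inheriting its logging, permissions, and retention policies.

```python
import json
from email.message import EmailMessage
from email import message_from_bytes

# Agent A composes a task request as an ordinary email.
msg = EmailMessage()
msg["From"] = "report-agent@contoso.example"   # hypothetical agent addresses
msg["To"] = "data-agent@contoso.example"
msg["Subject"] = "task: quarterly-revenue-summary"
msg.set_content(json.dumps({"task": "summarize", "quarter": "Q3"}))

# The wire format is plain RFC 5322, so existing audit and
# compliance tools can inspect it like any other message.
raw = msg.as_bytes()

# Agent B receives the bytes and parses out the structured task.
received = message_from_bytes(raw)
payload = json.loads(received.get_payload())
print(received["Subject"], payload["quarter"])
```

Because the transport is the enterprise’s own mail infrastructure, every exchange between agents is archived, permissioned, and observable by default, with no new channel for security teams to review.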
A Fundamental Shift in Enterprise AI Adoption
Leveraging existing enterprise infrastructure frees AI engineers to focus on building new capabilities and solving novel problems, rather than reinventing infrastructure that already works [00:05:20].
The future of Enterprise AI isn’t about building new interfaces for agents, but about enhancing the systems that have been perfected over decades [00:05:48]. Enterprises possess hardened, secured, and refined systems like document management, internal messaging platforms, and workflow tools [00:06:03]. Now that software agents can directly understand human intent, each of these becomes a potential gateway for AI capabilities [00:06:14].
This represents a fundamental shift in how enterprises approach AI adoption and application development [00:06:23]. Instead of asking what new tools are needed, the question should be: “Which of our existing systems can we enhance with AI agents?” [00:06:31] The most powerful solution may be quiet intelligence added to tools customers already trust and use daily [00:06:45].
The era of mandatory translation layers between humans and machines is ending, giving way to direct understanding and seamless AI collaboration [00:06:51].