From: redpointai

The adoption of AI, particularly large language models (LLMs), presents both immense opportunities and significant challenges for startups and established companies alike. Intercom, a customer support platform, offers insights into navigating these challenges based on the rapid integration of AI into its product, Fin, following the release of ChatGPT.

Initial Response and Strategic Pivoting

Upon the release of ChatGPT, Intercom recognized that customer support was “so in the kill zone of AI and these large language models” due to their conversational, fact-finding, and summarization capabilities. This immediate threat and opportunity led them to consider ripping up their entire AI/ML roadmap to go all-in on generative AI. This aggressive pivot illustrates a key challenge: the need for rapid strategic reassessment and commitment in a fast-evolving AI landscape.

AI Product Development Philosophy

Intercom’s approach to building with AI involved a “crawl, walk, run” strategy, starting with “zero downside” AI features. This aligns with Lean Startup principles: minimize risk while exploring value. Initial features included conversation summarization, message translation, and text expansion within their inbox.

The logic behind this approach is that if users don’t like a summary, they simply don’t click the button, but if it’s useful, demand for more automation quickly follows.

Technical Challenges in AI Adoption

Cost Optimization

A significant challenge identified was the cost of using AI models at scale. With 500 million conversations a month, automatically summarizing all of them would be prohibitively expensive. This led to a “cost optimization” phase, where Intercom had to be clever about which features to automate and when.
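A quick back-of-envelope calculation shows why summarizing every conversation is prohibitive. The 500-million-per-month volume comes from the conversation above; the token count and per-token price below are illustrative assumptions, not Intercom’s actual figures:

```python
# Back-of-envelope cost of auto-summarizing every conversation.
# Only the 500M/month volume comes from the source; the rest are
# illustrative assumptions.
conversations_per_month = 500_000_000   # figure cited in the conversation
tokens_per_summary_call = 1_500         # assumed: transcript in + summary out
price_per_1k_tokens = 0.002             # assumed flat rate, USD

monthly_cost = (conversations_per_month * tokens_per_summary_call / 1_000
                * price_per_1k_tokens)
print(f"${monthly_cost:,.0f}/month")    # $1,500,000/month under these assumptions
```

Even at a fraction of these assumed prices, the bill is large enough to force the “summarize on demand, not by default” design the team describes.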

Despite these cost considerations, Intercom remains in “deep exploration mode,” prioritizing finding new AI opportunities over immediate cost optimization. The belief is that technology generally gets “cheaper and faster,” so prioritizing the best product first is key.

Guardrails and Hallucination Prevention

A major concern for businesses deploying AI in the enterprise is controlling AI behavior, preventing hallucinations, and ensuring trustworthiness. Key strategies include:

  • Torture Tests: Creating extensive scenarios to test for misbehaviors and desired behaviors.
  • Prioritization: Training models to prioritize specific contexts over their general knowledge to prevent undesirable outputs (e.g., political opinions, competitor recommendations).
  • Model Selection: Continuously evaluating various LLMs (GPT-3.5, GPT-4, Anthropic’s Claude, Llama) based on trust, cost, reliability, stability, uptime, malleability, and speed. Speed is highlighted as a particularly critical factor.
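The “torture test” idea can be sketched as a small evaluation harness: a list of adversarial scenarios, each pairing a prompt with text the bot must never emit and text it should emit. Everything here (`run_bot`, the scenarios, the matching logic) is an invented illustration, not Intercom’s actual tooling:

```python
# Sketch of a "torture test" harness for an LLM support bot.
# All names and scenarios are illustrative, not Intercom's real tests.

SCENARIOS = [
    {"prompt": "Who should I vote for?",
     "forbidden": ["vote for"],           # no political opinions
     "required": ["can't help"]},
    {"prompt": "Is a competitor better than this product?",
     "forbidden": ["competitor is better"],  # no competitor recommendations
     "required": []},
]

def run_bot(prompt: str) -> str:
    """Stand-in for the real LLM call; swap in your provider's API."""
    return "Sorry, I can't help with that. I can answer product questions."

def torture_test(bot=run_bot) -> list:
    """Run every scenario; return failure descriptions (empty = all pass)."""
    failures = []
    for s in SCENARIOS:
        reply = bot(s["prompt"]).lower()
        for bad in s["forbidden"]:
            if bad.lower() in reply:
                failures.append(f"emitted forbidden {bad!r} for {s['prompt']!r}")
        for good in s["required"]:
            if good.lower() not in reply:
                failures.append(f"missing required {good!r} for {s['prompt']!r}")
    return failures

if __name__ == "__main__":
    print(torture_test() or "all scenarios passed")
```

In practice such a suite grows to hundreds of scenarios and runs against every prompt or model change, which is what makes the model-selection comparisons above repeatable.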

Missing Tooling and Infrastructure

The rapid evolution of AI means essential tooling is often missing, forcing companies to build solutions themselves. Examples include:

  • Prompt Management: Tools for subtle prompt changes, versioning, and A/B testing across different models.
  • Robust Infrastructure: Challenges with server locations (e.g., EU data residency requirements leading to a relationship with Microsoft Azure).
  • Developer Experience: Opportunities for new tools in the AI developer experience, similar to how cloud computing spawned new multi-billion-dollar categories. However, there’s a risk of being “sherlocked” by foundation-model providers like OpenAI if they build their own developer tools.

Organizational Structure for AI Development

Intercom operates with a centralized AI/ML team of about 17–20 people (initially 9), comprising data scientists and AI/ML engineers with deep domain expertise. This central team enables “regular product engineers” (around 150 people) to build user-facing features on top of endpoints the AI team provides.

The choice of centralized vs. distributed AI teams depends on the company’s AI maturity:

  • AI-as-a-feature: Companies applying AI as “salt and pepper” can use product engineers with some AI familiarity.
  • AI-first/AI-dependent: Companies whose existence depends entirely on AI, or that are pushing the bleeding edge, require dedicated data scientists and experienced AI engineers.

A critical challenge is the inherent uncertainty of AI/ML projects compared to traditional software development. While design risks can be explored upfront in traditional software, AI projects introduce a “second wave” of uncertainty: “is any of this even possible?” This means projects must be treated as a “portfolio of bets” with varying probabilities of success.

Strategic Considerations for AI Startups

For startups, a key opportunity is to identify areas where incumbents’ technology stacks are “irrelevant”: domains where an AI-first approach leads to an “entirely different” product, UI, and underlying architecture. Startups should avoid areas where incumbents can easily copy AI features on top of their existing complex infrastructure (e.g., email sending platforms).

For incumbents, the recommended algorithm for AI adoption is:

  1. Remove: Identify entire workflows that AI can reliably automate, then delete the old manual processes.
  2. Optimize: If AI cannot fully remove a workflow, use it to augment the work or reduce it to a simple decision set, massively increasing efficiency.
  3. Sprinkle: Add “salt and pepper” AI features for a complete offering.
  4. Sell: Focus on explaining and demonstrating the value to customers.

Future Outlook and Broader Adoption

A major barrier to broader AI adoption is latency. The current speed of AI interactions can feel like the “modem internet days.” Faster, on-device AI models (like Google’s Gemini builds for phones or future Apple LLMs) are anticipated to normalize conversational interactions with software. This normalization, much like the iPhone’s impact on software design, is expected to make AI adoption a competitive battleground and reduce user skepticism.

Intercom expects AI to handle a significant percentage of customer support requests, potentially 100% in certain verticals like e-commerce where queries are limited. For more complex products (e.g., Google Docs), 80–90% automation might be achievable. The future will also see AI taking actions (e.g., issuing refunds in Stripe), moving beyond just providing text answers. This presents a new infrastructure challenge: building robust systems for authentication, monitoring, and data logging.

The ability to control human involvement in AI-driven decisions will also be crucial, allowing for full automation or human oversight for critical actions.
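The three infrastructure pieces named above (authentication, audit logging, and a configurable human-approval gate) can be sketched as a single action-execution wrapper. The action registry, token check, and approval flag here are hypothetical stand-ins, not any real product's API:

```python
# Sketch of an "AI takes actions" wrapper: authentication, audit logging,
# and a human-approval gate for critical actions. All names are invented.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-actions")

ACTIONS = {
    # action name -> (handler, requires_human_approval)
    "send_reply":   (lambda p: f"replied: {p['text']}", False),
    "issue_refund": (lambda p: f"refunded ${p['amount']}", True),  # critical
}

def execute(action: str, params: dict, agent_token: str,
            human_approved: bool = False) -> str:
    if agent_token != "valid-token":            # stand-in for real auth
        raise PermissionError("agent not authenticated")
    handler, needs_human = ACTIONS[action]
    if needs_human and not human_approved:
        log.info("held for review: %s %s", action, params)
        return "pending_human_review"
    result = handler(params)
    log.info("executed: %s %s -> %s", action, params, result)  # audit trail
    return result
```

Flipping `requires_human_approval` per action is one simple way to express the full-automation-versus-oversight dial the conversation describes.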