From: aidotengineer

LinkedIn has embarked on a significant journey to build its Gen AI platform, recognizing the platform’s critical role in the evolving landscape of AI agents [00:00:19], [00:00:50]. The core purpose of this platform is to provide a unified interface to a complex AI ecosystem, simplifying development for engineers [08:27:00].

LinkedIn’s Gen AI Platform Journey

Collaborative Articles (Early 2023)

LinkedIn’s first official Gen AI feature, launched in early 2023, was “Collaborative Articles” [01:29:00]. This feature used the GPT-4 model to create long-form articles and invited members to comment on them [01:49:00]. At this stage, the team developed key components, including:

  • A gateway for centralized model access [02:00:00].
  • Python notebooks for prompt engineering [02:08:00].

However, this early phase had clear limitations: two separate tech stacks (Java for the online service, Python for the backend) and no way to inject rich data into the product experience, so it could not yet be considered a true platform [02:13:00], [02:24:00], [02:30:00].

Co-pilot/Coach (Mid 2023)

By mid-2023, LinkedIn began developing its second generation of Gen AI products, internally known as “Co-pilot” or “Coach” [02:42:00]. An example is a feature that provides personalized job fit recommendations based on a user’s profile and job descriptions, leveraging a RAG (Retrieval Augmented Generation) process [02:56:00], [03:03:00].
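As a rough illustration of that RAG process, the sketch below retrieves a member’s profile and a job description, injects both into a prompt, and asks the model for a fit assessment. All names here (fetch_profile, fetch_job, FakeLLM) are hypothetical stand-ins, not LinkedIn’s actual SDK.

```python
# Minimal RAG sketch for a job-fit recommendation. Every name is a
# hypothetical stand-in, not LinkedIn's actual SDK.

def fetch_profile(member_id: str) -> str:
    # Stand-in for a call to a profile retrieval service.
    return "10 years of backend engineering; Python and Java."

def fetch_job(job_id: str) -> str:
    # Stand-in for a call to a job-postings index.
    return "Senior Backend Engineer: distributed systems, Python."

class FakeLLM:
    def complete(self, prompt: str) -> str:
        # Stand-in for a gateway call to GPT-4 or an internal model.
        return f"[model response grounded in {len(prompt)} prompt chars]"

def job_fit_recommendation(member_id: str, job_id: str, llm=FakeLLM()) -> str:
    # 1. Retrieve: pull the member profile and the job description.
    profile = fetch_profile(member_id)
    job = fetch_job(job_id)
    # 2. Augment: inject the retrieved content into the prompt.
    prompt = (
        "Given the member profile and job description below, explain how well "
        f"the member fits the role.\n\nProfile:\n{profile}\n\nJob:\n{job}"
    )
    # 3. Generate: a single grounded LLM call produces the recommendation.
    return llm.complete(prompt)

print(job_fit_recommendation("member-123", "job-456"))
```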

This phase marked the start of building core platform capabilities:

  • Python SDK: Central to the platform, built on top of the LangChain framework to orchestrate LLM calls and integrate with large-scale infrastructure [03:16:00].
  • Unified Tech Stack: Recognizing the high cost and error potential of transferring Python prompts to Java, the tech stack was unified to Python [03:38:00].
  • Prompt Management: Investment began in a “prompt source of truth” module to help developers version their prompts and provide structure around meta-prompts [03:51:00].
  • Conversational Memory: An infrastructure was developed to track LLM interactions and retrieved content, injecting it into the final product to enable conversational bots [04:08:00] (a minimal sketch follows this list).
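The sketch below illustrates the conversational-memory idea: each turn (and, in practice, any retrieved content) is recorded and the accumulated history is injected into the next LLM call. The ConversationalMemory class and the llm.complete interface are assumptions for illustration, not LinkedIn’s infrastructure.

```python
# Illustrative conversational memory: record interactions and retrieved
# content, then inject the history into the next LLM call. Names are
# assumptions, not LinkedIn's infrastructure.
from typing import Dict, List

class ConversationalMemory:
    def __init__(self) -> None:
        self.turns: List[Dict[str, str]] = []

    def record(self, role: str, content: str) -> None:
        # Persist user messages, assistant replies, and retrieval results alike.
        self.turns.append({"role": role, "content": content})

    def as_context(self) -> str:
        # Flatten stored turns into text prepended to the next prompt.
        return "\n".join(f"{t['role']}: {t['content']}" for t in self.turns)

def chat_turn(user_message: str, memory: ConversationalMemory, llm) -> str:
    memory.record("user", user_message)
    reply = llm.complete(f"{memory.as_context()}\nassistant:")
    memory.record("assistant", reply)
    return reply

class EchoLLM:
    def complete(self, prompt: str) -> str:
        # Stand-in model that just reports how much history it received.
        return f"[reply based on {prompt.count(':')} prior turns]"

memory = ConversationalMemory()
print(chat_turn("What jobs fit my profile?", memory, EchoLLM()))
print(chat_turn("What about remote roles?", memory, EchoLLM()))
```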

Multi-Agent Systems (Late 2023/Early 2024)

LinkedIn launched its first true multi-agent system, the “LinkedIn HR Assistant,” in late 2023/early 2024 [04:31:00], [04:35:00]. This system assists recruiters by automating tedious tasks like job posting, candidate evaluation, and outreach [04:42:00].

The platform evolved significantly to support these AI agents:

  • Distributed Agent Orchestration: The Python SDK was extended into a large-scale, distributed agent orchestration layer to handle complex scenarios like retries and traffic shifts [05:08:00].
  • Skill Registry: A key investment was made in a skill registry, giving developers tools to publish APIs as “skills” that agents can discover and invoke easily [05:45:00] (see the sketch after this list).
  • Experiential Memory: Beyond conversational memory, the platform added a storage system to extract, analyze, and infer tacit knowledge from agent-user interactions [06:14:00]. This memory is organized into working, long-term, and collective layers to help agents understand their context [06:35:00].
  • Operability: Recognizing the autonomous and unpredictable nature of agents, operability became crucial [06:50:00]. An in-house solution built on OpenTelemetry tracks low-level telemetry data for replaying agent calls and guiding future optimization [07:10:00].
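The sketch below shows the skill-registry idea in miniature: a developer publishes an API as a named skill with a description, and an agent lists the available skills and invokes one by name. The SkillRegistry class and its methods are hypothetical; a production registry would also need input/output schemas, access control, and discovery at scale.

```python
# Illustrative skill registry: publish APIs as named "skills" that agents
# can discover and invoke. Names and structure are assumptions, not
# LinkedIn's internal registry.
from typing import Any, Callable, Dict

class SkillRegistry:
    def __init__(self) -> None:
        self._skills: Dict[str, Dict[str, Any]] = {}

    def publish(self, name: str, description: str, fn: Callable[..., Any]) -> None:
        # A real registry would also capture an input/output schema so
        # models can generate well-formed calls.
        self._skills[name] = {"description": description, "fn": fn}

    def discover(self) -> Dict[str, str]:
        # Agents list available skills (name -> description) at planning time.
        return {n: s["description"] for n, s in self._skills.items()}

    def invoke(self, name: str, **kwargs: Any) -> Any:
        return self._skills[name]["fn"](**kwargs)

registry = SkillRegistry()
registry.publish(
    "post_job",
    "Create a job posting from a title and description.",
    lambda title, description: f"posted: {title}",
)
print(registry.discover())
print(registry.invoke("post_job", title="Staff Engineer", description="..."))
```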

Components of the Gen AI Platform

The LinkedIn Gen AI platform can be categorized into four main layers [07:39:00]:

  1. Orchestration [07:44:00]
  2. Prompt Engineering [07:47:00]
  3. Tools and Skills Invocation [07:49:00]
  4. Content and Memory Management [07:50:00]

Supporting this, sister teams build out other crucial aspects of the LinkedIn Gen AI ecosystem [07:56:00]:

  • Modeling Layer: Fine-tuning open-source models [08:02:00].
  • Responsible AI Layer: Ensuring agent behavior adheres to policies and standards [08:06:00].
  • AI Platform/Machine Learning Infrastructure: Hosting the models [08:14:00].

Value Proposition: A Unified Interface

The primary value of LinkedIn’s Gen AI platform is to act as a unified interface for this complex ecosystem [08:23:00]. This means developers don’t need to understand every individual component, but can instead leverage the platform for quick access to the entire ecosystem [08:32:00]. For example, a single parameter in the SDK can switch between OpenAI models and internal models, significantly reducing infrastructure integration complexity [08:50:00].
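As a rough sketch of what that looks like from the developer’s side, the complete() helper below routes on a single model parameter. The provider names, model identifiers, and stub functions are illustrative assumptions, not LinkedIn’s actual SDK.

```python
# Illustrative routing on a single "model" parameter. Provider names, model
# identifiers, and the gateway/serving stubs are assumptions.

def call_openai_gateway(model_name: str, prompt: str) -> str:
    # Stand-in for the centralized gateway call to an external provider.
    return f"[{model_name} response via gateway]"

def call_internal_serving(model_name: str, prompt: str) -> str:
    # Stand-in for a call to internally hosted, fine-tuned models.
    return f"[{model_name} response via internal serving]"

def complete(prompt: str, model: str = "openai:gpt-4") -> str:
    provider, _, model_name = model.partition(":")
    if provider == "openai":
        return call_openai_gateway(model_name, prompt)
    if provider == "internal":
        return call_internal_serving(model_name, prompt)
    raise ValueError(f"unknown provider: {provider}")

# Switching backends is a one-parameter change for the application developer.
print(complete("Summarize this profile."))
print(complete("Summarize this profile.", model="internal:my-finetuned-model"))
```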

Furthermore, this centralized platform enforces best practices and governance, ensuring applications are built efficiently and responsibly [09:12:00].

Why a Unified Platform is Critical

LinkedIn believes that Gen AI systems are fundamentally different from traditional AI systems [09:55:00]. In traditional AI, there is a clear separation between model optimization and serving, allowing AI and product engineers to operate on different tech stacks [10:04:00]. In Gen AI systems, however, this line blurs: everyone contributes to optimizing overall system performance, which creates new challenges around tooling and best practices [10:24:00].

Gen AI and agent systems are considered “compound AI systems” [10:49:00], defined as systems tackling AI tasks using multiple interacting components, including multiple calls to models, retrievers, or external tools [10:55:00]. This requires skills across both AI and product engineering [11:10:00]. The Gen AI app platform is crucial to bridge this skill gap [11:17:00].
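The sketch below shows the shape of a compound AI system in the sense defined above: a single task handled by several interacting components (a retriever, an external tool, and multiple model calls). Every component is a stub; the point is the call graph, not any specific implementation.

```python
# Illustrative "compound AI system": one task, several interacting
# components. All components below are stubs.

def retriever(query: str) -> str:
    return "retrieved documents about " + query

def search_tool(query: str) -> str:
    return "external search results for " + query

def llm(prompt: str) -> str:
    return "[model output for: " + prompt[:40] + "...]"

def answer(question: str) -> str:
    docs = retriever(question)                     # component 1: retrieval
    plan = llm(f"Plan how to answer: {question}")  # component 2: model call
    extra = search_tool(question)                  # component 3: external tool
    # component 4: a final model call composes everything
    return llm(f"Question: {question}\nDocs: {docs}\nPlan: {plan}\nSearch: {extra}")

print(answer("How do compound AI systems differ from a single model call?"))
```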

Building and Scaling the Platform

Hiring Principles for the Team

When building a Gen AI platform team, LinkedIn prioritizes certain qualities [12:35:00]:

  • Strong Software Engineering Skills: Prioritized over pure AI expertise [12:47:00].
  • Potential over Experience/Degrees: Given the rapid evolution of the field, the ability to learn is more valuable than potentially outdated experience [13:03:00].
  • Diversified Team: Instead of seeking “unicorn” engineers with all qualifications, LinkedIn hires a diversified team (full-stack software engineers, data scientists, AI engineers, data engineers, fresh grads, startup backgrounds) [13:15:00]. Collaboration helps individuals pick up new skills [13:50:00].
  • Critical Thinking: The team consistently evaluates new open-source packages, engages with vendors, and proactively deprecates solutions, understanding that current builds may be obsolete within a year [14:06:00].

Key Platform Components to Build

Based on their experience, LinkedIn highlights critical components for a Gen AI platform [15:01:00]:

  • Prompt Source of Truth: Essential for robust version control of prompts, treated much like model parameters to ensure operational stability [15:03:00] (a sketch follows this list).
  • Memory: A key component to inject rich data into the agent experience [15:26:00].
  • APIs as Skills: In the agent era, uplifting APIs into easily callable skills for agents is crucial, requiring surrounding tooling and infrastructure [15:42:00].
  • Python Preference: Python is strongly recommended due to its prevalence in research and open-source communities, and its scalability [14:37:00].
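As a minimal sketch of the “prompt source of truth” idea flagged above, the registry below stores prompt templates under explicit versions so applications can pin, compare, and roll back prompts much like model parameters. The PromptStore class is a hypothetical illustration, not LinkedIn’s module.

```python
# Illustrative "prompt source of truth": prompts are registered under
# explicit versions so applications can pin, diff, and roll them back.
from typing import Dict, Tuple

class PromptStore:
    def __init__(self) -> None:
        self._prompts: Dict[Tuple[str, int], str] = {}

    def register(self, name: str, version: int, template: str) -> None:
        self._prompts[(name, version)] = template

    def get(self, name: str, version: int) -> str:
        # Applications pin an exact version for operational stability.
        return self._prompts[(name, version)]

store = PromptStore()
store.register("job_fit", 1, "Assess fit between:\n{profile}\n{job}")
store.register("job_fit", 2, "You are a career coach. Assess fit:\n{profile}\n{job}")

# A deployed app requests a pinned version and fills in its data.
print(store.get("job_fit", 2).format(profile="...", job="..."))
```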

Scaling and Adoption Strategies

For successful adoption and scaling of the platform, LinkedIn recommends:

  • Solving Immediate Needs: Start by addressing immediate problems (e.g., a simple Python library for orchestration) rather than building a full-fledged platform from the outset [16:04:00].
  • Focus on Infrastructure and Scalability: Leverage existing robust infrastructure, such as LinkedIn’s messaging infrastructure for a memory layer, for both cost efficiency and scalability [16:29:00].
  • Developer Experience: The platform’s success hinges on developer adoption. Design the platform to align with existing developer workflows to ease adoption and boost productivity [16:46:00].

More technical details are available in LinkedIn’s engineering blog posts [17:12:00].