From: aidotengineer

Evolution of Memory in LinkedIn’s Generative AI Platform

LinkedIn’s first generation of generative AI (GenAI) products focused on simple prompt-in, string-out applications, such as collaborative articles built on the GPT-4 model [00:01:31]. While this generation leveraged capable models, it lacked the ability to inject rich data into the product experience [00:02:30].

Conversational Memory

In mid-2023, as LinkedIn developed its second generation of GenAI products, internally known as “co-pilot” or “coach,” a critical component emerged: conversational memory [00:02:42].

  • Purpose: This infrastructure keeps track of Large Language Model (LLM) interactions and the content retrieved for them, injecting that information into the final product [00:04:11].
  • Application: It is crucial for building conversational bots [00:04:23] and enables personalized recommendations, such as assessing how well a job fits a user based on their profile and the job description via a Retrieval-Augmented Generation (RAG) process [00:02:56] (see the sketch below).
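
To make this concrete, the following is a minimal Python sketch of a conversational memory store. The names and structure (ConversationalMemory, add_turn, to_prompt_context) are illustrative assumptions, not LinkedIn’s actual implementation: it records each LLM turn together with the content retrieved for it, then flattens recent turns into prompt context, which is how a RAG-based job-fit recommendation could inject profile and job data.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Turn:
    """One LLM interaction plus the content retrieved to ground it."""
    role: str                                   # "user" or "assistant"
    content: str
    retrieved_docs: List[str] = field(default_factory=list)

@dataclass
class ConversationalMemory:
    """Hypothetical store that tracks turns and injects them back as context."""
    turns: List[Turn] = field(default_factory=list)

    def add_turn(self, role: str, content: str,
                 retrieved_docs: Optional[List[str]] = None) -> None:
        self.turns.append(Turn(role, content, retrieved_docs or []))

    def to_prompt_context(self, max_turns: int = 10) -> str:
        """Flatten recent turns and their retrieved content into prompt text."""
        lines: List[str] = []
        for turn in self.turns[-max_turns:]:
            lines.extend(f"[retrieved] {doc}" for doc in turn.retrieved_docs)
            lines.append(f"{turn.role}: {turn.content}")
        return "\n".join(lines)

# Example: a job-fit question where retrieved profile and job data are stored
# with the turn and injected into the next prompt (RAG-style).
memory = ConversationalMemory()
memory.add_turn("user", "Am I a good fit for this job?",
                retrieved_docs=["<member profile summary>", "<job description>"])
prompt = memory.to_prompt_context() + "\nassistant:"
```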

Experiential Memory

With the launch of LinkedIn’s first multi-agent system, the LinkedIn Hiring Assistant, the platform extended its memory capabilities beyond conversational memory alone [00:04:31].

  • Definition: Experiential memory is a memory storage system designed to extract, analyze, and infer factual knowledge from interactions between an agent and its users [00:06:21].
  • Structure: This memory is organized into multiple layers.
  • Benefit: These layers help the agent become aware of its surrounding context, enhancing its autonomy and decision-making [00:06:45]. Agents are, by definition, autonomous, deciding which APIs and LLMs to call [00:06:55]. A sketch of such a layered store follows this list.
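
As an illustration, here is a minimal Python sketch of an experiential memory store. The layer names used below ("working" and "long_term") are assumptions made for the example, since the source only states that the memory is layered; the class and method names are likewise hypothetical.

```python
from collections import defaultdict
from typing import Dict, List

class ExperientialMemory:
    """Hypothetical layered store of facts inferred from agent-user interactions."""

    def __init__(self) -> None:
        self.layers: Dict[str, List[str]] = defaultdict(list)

    def record_fact(self, layer: str, fact: str) -> None:
        # Store a piece of factual knowledge extracted from an interaction.
        self.layers[layer].append(fact)

    def context_for(self, layer_names: List[str]) -> str:
        # Assemble facts from the requested layers so the agent can reason over
        # them when deciding which APIs or LLMs to call.
        return "\n".join(fact for name in layer_names for fact in self.layers[name])

# Example: a durable preference goes into a long-lived layer, while the current
# task state stays in a short-lived one (layer names are assumptions).
memory = ExperientialMemory()
memory.record_fact("long_term", "Recruiter prefers candidates open to hybrid work.")
memory.record_fact("working", "Currently sourcing for a Senior ML Engineer role.")
print(memory.context_for(["working", "long_term"]))
```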

Role of Memory in the GenAI Platform Architecture

Memory management is classified as one of the four core layers of the GenAI platform, alongside orchestration, prompt engineering, and tools/skills invocation [00:07:42].

  • Memory is a critical component for injecting rich data into the agent experience [00:15:32].
  • LinkedIn has leveraged its existing messaging infrastructure as a memory layer, an approach that has proven both cost-efficient and scalable [00:16:33] (see the sketch below).
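
A rough sketch of what “messaging infrastructure as a memory layer” could look like is shown below. The MessagingStore interface and all names are hypothetical stand-ins, not LinkedIn’s actual messaging API; the idea is that memory writes become appended messages on a thread and recalls become thread reads, so the memory layer inherits the scalability of the underlying store.

```python
from typing import Dict, List, Protocol

class MessagingStore(Protocol):
    """Hypothetical interface to a messaging-style backend (threads of messages)."""
    def append(self, thread_id: str, message: str) -> None: ...
    def fetch(self, thread_id: str) -> List[str]: ...

class InMemoryMessagingStore:
    """Stand-in backend for local experimentation."""
    def __init__(self) -> None:
        self._threads: Dict[str, List[str]] = {}

    def append(self, thread_id: str, message: str) -> None:
        self._threads.setdefault(thread_id, []).append(message)

    def fetch(self, thread_id: str) -> List[str]:
        return list(self._threads.get(thread_id, []))

class MemoryLayer:
    """Memory layer that delegates persistence to the messaging store:
    writes become appended messages, recalls become thread reads."""
    def __init__(self, store: MessagingStore) -> None:
        self._store = store

    def remember(self, agent_id: str, item: str) -> None:
        self._store.append(f"memory:{agent_id}", item)

    def recall(self, agent_id: str) -> List[str]:
        return self._store.fetch(f"memory:{agent_id}")

# Usage with the in-memory stand-in.
layer = MemoryLayer(InMemoryMessagingStore())
layer.remember("hiring-assistant", "Member viewed the Staff Engineer posting twice.")
print(layer.recall("hiring-assistant"))
```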

The overarching goal of the GenAI platform is to provide a unified interface for a complex ecosystem, enabling developers to easily access various components without needing to understand every individual part [00:08:23]. This includes simplified model switching and reduced infrastructure integration complexity [00:08:50].
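
As a rough illustration of such a unified interface, the sketch below routes requests through a single client so that switching models is a configuration choice rather than a new integration. The UnifiedClient class and the model registry are assumptions made for the example, not LinkedIn’s actual API.

```python
from typing import Callable, Dict, Optional

ModelFn = Callable[[str], str]  # prompt in, completion out

class UnifiedClient:
    """Hypothetical single entry point: callers never integrate with providers directly."""

    def __init__(self, models: Dict[str, ModelFn], default: str) -> None:
        self._models = models
        self._default = default

    def complete(self, prompt: str, model: Optional[str] = None) -> str:
        # Route the prompt to the configured model; switching models is a
        # configuration change, not a new infrastructure integration.
        return self._models[model or self._default](prompt)

# Example with stub backends standing in for real providers.
client = UnifiedClient(
    models={
        "gpt-4": lambda p: f"[gpt-4] {p}",
        "in-house": lambda p: f"[in-house] {p}",
    },
    default="gpt-4",
)
print(client.complete("Summarize this job description."))
print(client.complete("Summarize this job description.", model="in-house"))
```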