From: redpointai

The world of AI is rapidly evolving, leading to a shift from deterministic systems to more stochastic models, which presents new challenges and opportunities for businesses [00:10:16]. Salesforce, an incumbent company with extensive and diverse data, is actively navigating this landscape by integrating AI into its core products and developing strategies for model selection and evaluation to meet enterprise needs [00:10:08].

Salesforce’s AI Product Strategy

Salesforce has shipped its first set of generative AI applications under the Einstein GPT brand, including Service GPT for customer service and Sales GPT for sales cloud users [01:54:39]. A notable example is service reply recommendations and case summaries, which have significantly cut the time customer service representatives spend on repetitive tasks [02:10:10]. These applications leverage a retrieval-augmented generation (RAG) approach, grounded in customer data within Salesforce [02:27:39].
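As a toy illustration of the RAG pattern described above (not Salesforce’s actual pipeline), the sketch below retrieves the customer records most relevant to a question and assembles a prompt grounded in them. The keyword-overlap scorer is a stand-in for a real retriever, and the case notes are invented:

```python
# Minimal RAG sketch: retrieve relevant records, then ground the prompt in them.
# The keyword scorer here is a placeholder for a real embedding-based retriever.

def retrieve(query, records, top_k=2):
    """Rank records by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(r.lower().split())), r) for r in records]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [r for score, r in scored[:top_k] if score > 0]

def build_grounded_prompt(query, records):
    """Assemble a prompt that instructs the model to answer only from context."""
    context = "\n".join(f"- {r}" for r in retrieve(query, records))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical case notes standing in for CRM records.
case_notes = [
    "Case 00123: customer reports broken zipper on handbag, wants replacement",
    "Case 00124: shipping delay on order 998, customer asked for refund",
]
prompt = build_grounded_prompt("What did the handbag customer want?", case_notes)
```

Grounding the prompt in retrieved records, rather than relying on the model’s parametric knowledge, is what reduces hallucinations in this pattern.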

The company recently launched Einstein Co-pilot, a natural language conversational assistant automatically grounded in customer data, metadata, and Salesforce flows [05:20:00]. Alongside it, Salesforce released Co-pilot Studio, which lets customers customize their own co-pilot through a prompt builder and a model builder, and fine-tune or bring their own predictive models [05:37:00].

Organizational Structure for AI Development

Salesforce’s approach to integrating AI has evolved from largely decentralized AI teams within each application cloud to a shared services AI platform team [06:21:00]. This central team builds foundational components like the Einstein Trust Layer and the Model Gateway, which are essential for every Salesforce application [06:54:00]. Product-specific teams now focus on predictive AI for their use cases and build on this shared platform, creating specialized actions for Einstein Co-pilot within their respective clouds (e.g., Sales Cloud building sales actions) [07:07:00].

Trust and Guardrails in Enterprise AI

For enterprises, especially those dealing with sensitive customer data like Gucci, trust is paramount when deploying AI [07:28:00]. Salesforce addresses this through a multi-layered approach:

  1. Technology Integration: The Einstein Trust Layer is engineered into the product, offering features like data masking, data grounding to reduce hallucinations, citations, audit trails, prompt defense, and zero data retention prompts [07:54:00]. Data masking, for instance, has been used for years to mitigate bias from sensitive data fields like name, gender, or zip code [09:23:00].
  2. Acceptable Use Policy: This policy requires AI bots to self-identify as AI to customers [08:16:00].
  3. Stakeholder Engagement: Salesforce has developed and open-sourced a set of trusted AI guiding principles centered on accuracy, honesty, and empowerment, shared across the industry and with government regulators [08:36:00].
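One trust-layer idea mentioned above, data masking, can be sketched as replacing sensitive field values with placeholder tokens before a record reaches a model. The field names below are hypothetical, not Salesforce’s schema, and this is an illustrative sketch rather than the Trust Layer’s implementation:

```python
# Sketch of pre-prompt data masking: tokenize sensitive field values so they
# never reach the model. Field names are hypothetical examples.

SENSITIVE_FIELDS = {"name", "gender", "zip_code"}

def mask_record(record):
    """Return a copy of the record with sensitive values replaced by tokens."""
    masked = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            masked[field] = f"<{field.upper()}_MASKED>"
        else:
            masked[field] = value
    return masked

record = {"name": "Jane Doe", "zip_code": "94105", "case_subject": "Billing question"}
safe = mask_record(record)
```

Masking fields like name, gender, or zip code before inference is also how bias from those attributes can be mitigated, as the text notes has been done for years in predictive models.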

A significant challenge in enterprise AI adoption is addressing data security, data privacy, and ethical risks, including preventing data leaks and honoring internal sharing rules and entitlements within the organization [00:24:12]. Once these trust concerns are alleviated, customers are typically eager to experiment with AI [02:39:00].

AI Model Selection and Customization for Businesses

Salesforce adopts an open architecture approach for AI model selection, allowing customers to choose from models on their service, bring their own models, or integrate third-party models [00:19:12].

Data Types Supporting AI

Salesforce’s strength in AI is underpinned by four types of unique data:

  1. Structured CRM data records: the traditional Salesforce heritage [01:38:00].
  2. Unstructured data: Knowledge articles, conversation transcripts from Slack, contact center voice calls, chats, and emails [01:47:00]. Salesforce’s Data Cloud is expanding to include vector search and hybrid reranking across both structured and unstructured data, with zero-ETL partnerships with major data lake providers [01:47:00].
  3. Metadata layer: Created 25 years ago for multi-tenancy, this layer is crucial for providing context to AI, indicating which data objects, tables, or functions to use [01:35:00].
  4. Feedback data: As the world’s largest database of customer outcomes (e.g., sales opportunity stage, marketing campaign results), this data serves as a reward function for any AI model, whether predictive or generative, and is captured in the Data Cloud [01:58:00].
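The hybrid reranking mentioned for Data Cloud can be sketched as blending a keyword-match score with a vector-similarity score. This is a toy illustration under assumed inputs: real systems use learned embeddings, and the tiny vectors and documents below are stand-ins:

```python
# Toy hybrid reranking: blend keyword overlap with (mock) vector similarity.

def keyword_score(query, doc):
    """Fraction of query terms present in the document text."""
    q, d = set(query.lower().split()), set(doc["text"].lower().split())
    return len(q & d) / max(len(q), 1)

def cosine(a, b):
    """Cosine similarity between two vectors; 0.0 if either is zero-length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query, query_vec, docs, alpha=0.5):
    """Order docs by a weighted blend of keyword and vector similarity."""
    scored = [
        (alpha * keyword_score(query, d) + (1 - alpha) * cosine(query_vec, d["vec"]), d)
        for d in docs
    ]
    return [d for _, d in sorted(scored, key=lambda p: p[0], reverse=True)]

# Invented documents with stand-in 2-d "embeddings".
docs = [
    {"text": "knowledge article about password reset", "vec": [0.9, 0.1]},
    {"text": "slack transcript about quarterly pipeline", "vec": [0.1, 0.9]},
]
top = hybrid_rank("password reset steps", [0.8, 0.2], docs)
```

Blending the two signals lets exact-term matches (strong for structured fields) and semantic similarity (strong for unstructured text) compensate for each other.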

Model Choice and Specialization

The current model landscape is dynamic, with no clear winner [01:59:00]. Salesforce believes different models will be optimal for different tasks and use cases over time [01:59:00].

  • Salesforce’s internal models are being built and fine-tuned for industry-specific and domain-specific use cases, such as code generation, Salesforce flow generation, or financial services sales [01:59:00].
  • Customer choices often align with existing cloud providers (e.g., Google Cloud customers preferring Gemini) or depend on the complexity of the task (e.g., GPT-4 for complex agentic planning) [02:20:00].
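The idea that different models suit different tasks can be sketched as a small router. Everything here is invented for illustration: the source says only that task complexity influences model choice, not how the decision is made:

```python
# Hypothetical model router: pick a model tier from a crude complexity signal.
# Tier names, markers, and the heuristic itself are illustrative assumptions.

MODEL_TIERS = {
    "small": "in-house domain-specific model",
    "large": "frontier model for complex agentic planning",
}

def route(task):
    """Return the tier for a task based on keywords suggesting complexity."""
    complex_markers = {"plan", "multi-step", "orchestrate"}
    words = set(task.lower().split())
    return "large" if words & complex_markers else "small"

tier = route("plan a multi-step renewal outreach")
```

A production router would weigh cost, latency, and measured quality per task rather than keywords, but the shape of the decision is the same.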

Rather than fine-tuning models for specific brand voices (like Gucci vs. Ford), it’s often simpler and more effective to adjust the prompt itself [02:48:00].

AI Model Evaluation and Benchmarking

Evaluating AI models is complex due to variables like fine-tuning and the RAG pipeline [02:56:00]. Salesforce focuses on tracking and benchmarking models for cost, performance, and latency for each specific task [02:01:00].

  • Domain-Specific Benchmarking: Salesforce creates sales-specific benchmarks tailored to industries like pharma or wealth management [02:14:00]. This approach recognizes that performance in one domain (e.g., general question answering) may not reflect performance in a highly specific business context (e.g., upselling a handbag after a belt issue) [02:44:00].
  • Feedback Loops: The vast amount of customer outcome data collected by Salesforce allows for continuous feedback loops to make models “smarter and smarter” by acting as a reward function [01:36:00].
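A per-task benchmark along the three axes named above (quality, cost, latency) might look like the harness below. The model is a stub function and the test cases are invented; in practice these would be API calls graded against domain-specific test sets:

```python
import time

# Sketch of a per-task benchmark harness reporting quality, cost, and latency.
# stub_model and the cases are hypothetical placeholders.

def benchmark(model_fn, cases, cost_per_call):
    """Run a model over (input, expected) cases; return summary metrics."""
    correct, start = 0, time.perf_counter()
    for prompt, expected in cases:
        if expected in model_fn(prompt):
            correct += 1
    elapsed = time.perf_counter() - start
    return {
        "quality": correct / len(cases),
        "cost": cost_per_call * len(cases),
        "latency_s": elapsed / len(cases),
    }

def stub_model(prompt):
    """Stand-in for a deployed model."""
    return "escalate to tier 2" if "refund" in prompt else "send reply"

cases = [
    ("customer demands refund", "escalate"),
    ("thanks for the help", "send reply"),
]
report = benchmark(stub_model, cases, cost_per_call=0.002)
```

Running the same harness over each candidate model and each domain-specific case set is what makes per-task comparisons, rather than one global leaderboard, possible.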

Barriers to Enterprise AI Adoption

While the potential for AI is immense, businesses face significant hurdles in scaling deployment beyond initial piloting:

  1. Trust (Data Security and Privacy): This is the biggest barrier [02:12:00]. Enterprises worry about data leakage and ensuring that AI adheres to existing internal data sharing rules and entitlements [02:16:00].
  2. Business Case: Clearly defining the business case and demonstrating productivity gains or margin expansion is crucial for justifying AI investment [02:48:00]. AI adoption is about whether the return on investment (ROI) outweighs the cost [02:48:00]. For example, Gucci saw reduced average handle time in customer service and increased conversion rates, transforming a cost center into a revenue center with AI [03:02:00].
  3. User Education: Addressing user fears about job displacement is vital [02:51:00]. Salesforce emphasizes that AI is meant to empower users and handle undesirable tasks, rather than replacing roles [02:56:00]. Products are designed with clear onboarding experiences and “co-pilot” naming to reinforce AI as a helpful coworker [02:56:00].
  4. Cost: While costs are decreasing, they remain a consideration for scaling AI solutions across an entire contact center or for multiple use cases [02:30:00]. Salesforce addresses this by routing tasks to the appropriate model size and emphasizing clear business cases [02:47:00].

To facilitate adoption, offering turnkey AI use cases that are easy to get up and running, like service reply recommendations, allows customers to immediately see the business value [02:27:00].

Future Outlook for Enterprise AI

AI is expected to transform every department within a company, necessitating a re-evaluation of job descriptions and the learning of new skills, similar to the adoption of the internet and email in the 1990s [03:03:00].

One key area of growth for generative AI in the enterprise is workflow orchestration: co-pilots will be able to initiate actions such as processing returns or sending shipping labels based on defined workflows and administrator permissions [03:27:00]. This requires careful governance and control, allowing admins to select which existing flows and functions the co-pilot can access [03:56:00].
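That governance model can be sketched as permission-gated dispatch: a co-pilot may only invoke flows an administrator has explicitly enabled. The flow names and registry below are hypothetical, chosen to mirror the examples in the text:

```python
# Sketch of admin-governed action dispatch: a co-pilot can only run flows an
# administrator has allowlisted. Flow names are illustrative assumptions.

ADMIN_ENABLED_FLOWS = {"process_return", "send_shipping_label"}

def dispatch(flow_name, registry, enabled=ADMIN_ENABLED_FLOWS, **kwargs):
    """Run a registered flow only if an admin has enabled it."""
    if flow_name not in enabled:
        raise PermissionError(f"flow '{flow_name}' not enabled by an admin")
    return registry[flow_name](**kwargs)

registry = {
    "process_return": lambda order_id: f"return started for {order_id}",
    "close_account": lambda order_id: "account closed",
}

result = dispatch("process_return", registry, order_id="A-42")
```

Keeping the allowlist separate from the flow registry means admins control exposure without touching flow code, which matches the text’s point that admins select which existing flows the co-pilot can access.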

The future of work will increasingly involve “team plus AI” collaborations, as seen in platforms like Slack [03:26:00]. Slack AI already provides conversation and channel summaries [03:47:00]. Integrating Einstein Co-pilot into Slack will enable real-time assistance, such as generating service reply recommendations, summarizing customer activity, or assisting with sales account closing by providing insights to the entire team [03:58:00]. This “multiplayer” AI experience will shift from single-user interactions to collaborative engagements with AI bots as team members [04:53:00].