From: redpointai
Trust and data security are paramount concerns in the development and deployment of AI, particularly within enterprise environments. Salesforce, as a leading incumbent in the data space, places significant emphasis on engineering trust into its AI products [00:07:28].

Salesforce’s Multi-Layered Approach to Trust

Salesforce addresses trust on three distinct levels to ensure responsible AI deployment [00:07:49]:

  1. Technological Safeguards (Einstein Trust Layer): This layer is engineered directly into the product to mitigate data security, data privacy, and ethical risks [00:07:54]. Key features include:

    • Data masking: Proactively suggests masking sensitive data fields that could introduce bias in AI models, a practice that dates back to predictive AI days [00:09:23].
    • Data grounding: Utilizes Salesforce’s Data Cloud to reduce hallucinations by grounding AI responses in customer data [00:08:01].
    • Citations: Provides sources for AI-generated content [00:08:03].
    • Audit Trail: Records AI interactions for accountability [00:08:05].
    • Prompt defense: Protects against malicious inputs [00:08:06].
    • Zero retention prompts: Ensures customer data used in prompts is not retained by the model providers [00:08:06].
  2. Policy Frameworks (Acceptable Use Policy): This layer sets clear guidelines for AI usage. For example, AI bots deployed by customers are required to self-identify as AI to consumers [00:08:16].

  3. Stakeholder Engagement: Salesforce has developed and open-sourced a set of “trusted AI guiding principles” centered around accuracy, honesty, and empowerment. These principles are shared with the industry and government regulators, contributing to broader AI safety and regulation discussions [00:08:36].
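The trust-layer features above (masking, grounding, audit trail) can be pictured as stages of a single pipeline that a prompt passes through before reaching a model. The following is a minimal illustrative sketch, not Salesforce's actual Einstein Trust Layer API; every function and name here is a hypothetical stand-in:

```python
import re
from dataclasses import dataclass, field

# Hypothetical sketch of a trust-layer pipeline: mask sensitive fields,
# ground the prompt in customer records, and log an audit entry.
# All names are illustrative, not Salesforce's actual APIs.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(text: str) -> str:
    """Replace email addresses with a placeholder before the prompt leaves the org."""
    return EMAIL_RE.sub("<MASKED_EMAIL>", text)

def ground(prompt: str, records: list[str]) -> str:
    """Prepend retrieved customer records so the model answers from real data,
    with numbered sources that citations can point back to."""
    context = "\n".join(f"[source {i}] {r}" for i, r in enumerate(records, 1))
    return f"Context:\n{context}\n\nQuestion: {prompt}"

@dataclass
class AuditTrail:
    entries: list = field(default_factory=list)

    def record(self, original: str, sent: str) -> None:
        self.entries.append({"original": original, "sent": sent})

audit = AuditTrail()
user_prompt = "Summarize the account history for jane.doe@example.com"
masked = mask_pii(user_prompt)
final = ground(masked, ["Acct 42: renewal due Q3", "Acct 42: support ticket closed"])
audit.record(user_prompt, final)
print(masked)  # the email is replaced before any model call
```

The design point is ordering: masking happens before grounding and before the prompt reaches any model provider, and the audit entry captures both the original and the outbound prompt for accountability.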

Data Security and Privacy Considerations

For enterprises, data security and privacy are critical. Salesforce emphasizes that the data belongs to its customers, not Salesforce, distinguishing itself from many consumer companies [00:16:19].

Key aspects include:

  • Data Protection: Mitigating risks of data leaking out of an organization or unauthorized access to workflows and data within different departments [00:24:16].
  • Sharing Rules and Entitlements: Honoring existing sharing rules and entitlements for different employees and departments is crucial to ensure AI respects data access permissions [00:24:25].
  • Customer Control: Customers control which existing workflows and Apex functions their AI co-pilot can access, allowing them to designate access levels. This enables a cautious, step-by-step rollout, starting with low-risk actions like data lookups before enabling more impactful ones [00:36:56].
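One way to picture how sharing rules and tiered access could interact is to have the co-pilot inherit the requesting user's entitlements and gate each action by a risk tier. This is an illustrative sketch under assumed role and action names, not Salesforce's actual permission model:

```python
from enum import Enum

# Illustrative risk tiers: lookups are lower-risk than writes or workflow execution.
class Tier(Enum):
    READ = 1     # low-risk data lookups
    WRITE = 2    # record updates
    EXECUTE = 3  # workflow / Apex-style actions

# Per-role entitlements as an admin might configure them (assumed data).
ENTITLEMENTS = {
    "sales_rep": Tier.READ,
    "sales_ops": Tier.WRITE,
    "admin": Tier.EXECUTE,
}

# Actions the co-pilot has been granted, each tagged with its risk tier.
COPILOT_ACTIONS = {
    "lookup_account": Tier.READ,
    "update_opportunity": Tier.WRITE,
    "run_workflow": Tier.EXECUTE,
}

def can_invoke(role: str, action: str) -> bool:
    """The co-pilot honors the user's entitlements: it may only run an action
    whose tier does not exceed what that user could do directly."""
    user_tier = ENTITLEMENTS.get(role, Tier.READ)
    return COPILOT_ACTIONS[action].value <= user_tier.value

print(can_invoke("sales_rep", "lookup_account"))     # True
print(can_invoke("sales_rep", "update_opportunity"))  # False
```

This mirrors the step-by-step rollout described above: a customer can start by entitling only READ-tier lookups and expand to higher tiers as trust grows.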

Addressing Broader Concerns

The shift from deterministic to stochastic AI models presents new challenges in development and operation [00:10:16]. Salesforce actively engages with users to build products that empower rather than replace jobs [00:11:02]. The naming of “co-pilot” (not “autopilot”) reflects this philosophy, positioning AI as a helpful coworker [00:11:51].

The biggest barriers to enterprise AI adoption today are trust-related, stemming from real data security and data privacy risks [00:24:12]. Once these concerns are addressed, customers are generally eager to explore and customize AI capabilities [00:24:39].