From: redpointai
AI is profoundly transforming creative tooling across images, video, and music [00:00:00]. Scott Belsky, founder of Behance and Chief Product Officer and Chief Strategy Officer at Adobe, shared insights into Adobe’s strategy regarding artificial intelligence [00:00:05].
Core Focus Areas
Adobe’s approach to AI is centered around three key areas:
- Interfaces [00:03:00]
- Models [00:03:04]
- Data [00:03:07]
The Firefly Family of Models
Adobe’s flagship generative models are part of the Firefly family [00:03:10]. These models are homegrown and trained exclusively on licensed content [00:03:12], [00:03:17]. Adobe runs a compensation program for content contributors [00:03:20] and offers blanket indemnification to all of its customers [00:03:29]. This approach gives users compliance and legal protection; for instance, the models will not generate copyrighted characters such as Spider-Man [00:04:41], [00:04:54].
Integration and Availability
Adobe integrates these models into its existing products like Photoshop [00:03:36]. They are also increasingly made available to third parties and large companies for complex, at-scale workflows, aiming to automate tasks that individuals might otherwise perform piecemeal [00:03:42], [00:03:44].
Control Capabilities
A significant focus for Adobe is building “control capabilities” on top of the models, rather than solely pursuing the “next best model” [00:02:18], [00:03:59]. Examples include:
- Project Neo: A 3D illustration program that, unexpectedly, was used by creative professionals to create 3D structures as a reference for Adobe Firefly, enabling extreme precision in image generation [00:01:15], [00:01:28].
- Structure Reference and Style Match: These capabilities allow users to guide the generative process with greater specificity [00:04:01].
- Generative Recolor in Illustrator: This feature allows users to apply hundreds or thousands of color palettes based on prompts to vector creations instantly, a process that previously took days [00:06:42], [00:06:48].
Custom Models and Data Leverage
Adobe is focused on enabling custom models for its customers [00:04:06]. For example, Nickelodeon could train a version of Firefly on SpongeBob SquarePants content to generate new ideas for character development and storylines without copyright concerns [00:04:11].
Adobe leverages its deep understanding of creative tools and user data—such as how users adjust Lightroom dials and filters—to enrich its models and refine outputs [00:12:21], [00:12:28]. This “full-stack advantage” allows Adobe to deliver a better end-to-end experience where they are world experts [00:11:35], [00:30:50].
Partnerships with LLMs
For large language models (LLMs), Adobe opts for partnerships rather than building its own [00:04:27], [00:04:29]. These LLMs are surfaced through products such as Acrobat, the AI Assistant, and Adobe’s digital experience products for marketing analytics and queries [00:04:32].
Future Directions for Adobe AI
UI on Demand
A significant next step is the development of AI that can generate custom user interfaces (UI) on demand [00:13:24]. Instead of a fixed UI, the product would dynamically spin up specific UI elements tailored to the user’s current task or prompt, disappearing when no longer needed [00:13:31], [00:13:43]. This allows for customized experiences for the long tail of specific use cases [00:14:04].
Professional Grade Quality
Adobe emphasizes the importance of professional-grade quality in its AI outputs [00:14:17], [00:14:19]. For example, while many startups are developing video models, Adobe focuses on specific customer use cases for commercial-grade media creation, where the bar for quality is exceptionally high and outputs “can’t look weird” [00:14:21], [00:14:35], [00:14:39].
Proactive Suggestions
Beyond helping users execute tasks, the next tier of AI capability involves proactive suggestions [00:24:25], [00:24:30]. This means the AI agent could suggest alternative approaches, identify potential issues, or recommend variations that the user might not have considered [00:24:37].
Business Model: Generative Credits
Adobe uses a generative credit system for its AI features [00:25:58]. Customers receive a basic amount of credits with their existing plans, with options to upgrade for more [00:26:48], [00:27:00]. This model allows Adobe to adjust credit costs for more intensive capabilities (e.g., generative video) while aiming to keep access as cheap as possible [00:27:17], [00:27:25].
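The credit mechanics described above can be sketched as a simple metering scheme. This is a hypothetical illustration only: the capability costs, plan allotment, and the `Account` class are assumptions for the sketch, not Adobe's actual pricing or API.

```python
# Hypothetical sketch of generative-credit metering; the costs and
# allotments below are illustrative assumptions, not Adobe's pricing.
from dataclasses import dataclass

# Heavier capabilities consume more credits per generation; generative
# video is assumed to be far more compute-intensive than image edits.
CREDIT_COST = {
    "generative_fill": 1,
    "text_to_image": 1,
    "generative_video": 20,
}

@dataclass
class Account:
    plan_credits: int  # credits bundled with the existing plan
    used: int = 0

    def remaining(self) -> int:
        return self.plan_credits - self.used

    def generate(self, capability: str) -> bool:
        """Deduct credits for one generation; refuse if the balance is short."""
        cost = CREDIT_COST[capability]
        if self.remaining() < cost:
            return False  # a real product would prompt an upgrade here
        self.used += cost
        return True

acct = Account(plan_credits=25)
assert acct.generate("generative_fill")       # 1 credit, 24 remain
assert acct.generate("generative_video")      # 20 credits, 4 remain
assert not acct.generate("generative_video")  # refused: only 4 remain
```

Varying the per-capability cost table is what lets the vendor charge more for intensive features (such as video) while keeping basic access cheap, which is the trade-off the credit model is designed around.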
Evolving Views on AI Models
Initially, Scott Belsky thought only a few models would dominate the AI landscape [00:29:07]. However, he now believes there will be thousands of models, many of which will become commodities or focus on niche use cases [00:10:00], [00:29:15]. This shift is driven by the rapid increase in model capabilities and the focus on cost-effectiveness for common use cases [00:29:40], [00:29:47].
Adobe views external frontier models (such as Pika or Sora) as platforms, analogous to the way better chips improve Apple’s products [00:30:19], [00:30:33]. As long as users need an interface to tell a story, data to customize outputs, and tools for creative control, Adobe sees itself winning [00:30:24].
The Devil’s in the Defaults
A key lesson learned in building AI features is that “the devil’s in the defaults” [00:28:16]. When generative fill and a new generation bar were made the default in Photoshop, it “totally unlocked utilization” in an unforeseen way [00:28:22], [00:28:33]. The challenge now is to make AI a default part of every product workflow so that it gives users superpowers [00:28:41].