From: redpointai
Adobe, a leading creative software company, has integrated artificial intelligence (AI) extensively across its product suite, significantly transforming content creation workflows. Alexandru Costin, VP of Generative AI and Sensei at Adobe, explains how the company leverages AI to enhance productivity, accessibility, and personalization across its diverse offerings [00:00:07].
Adobe’s AI Ecosystem and Strategy
Adobe’s AI integration spans its three main business units:
- Creative Cloud: Includes tools like Photoshop, Illustrator, and Adobe Express for content creation and authoring [00:03:02].
- Document Cloud: Features the Acrobat franchise [00:03:09].
- Experience Cloud: Offers content management, commerce solutions, analytics, and CRM data hosting for enterprise customers [00:03:13].
AI capabilities are embedded across products in each unit [00:03:25]. At the core of Adobe’s AI development for Creative Cloud is the Sensei platform, an internally developed infrastructure that enables hundreds of researchers to train various models [00:03:32]. These models are then tech-transferred into products like Photoshop and Adobe Express [00:03:44].
In 2022, Adobe fully embraced large models, including diffusion models and Large Language Models (LLMs), leading to the creation of the Firefly Suite of models [00:04:52]. Launched in March 2023, the Firefly beta has already seen over a billion images generated across platforms such as firefly.com, Photoshop (Generative Fill), and Adobe Express [00:05:24].
Specific AI Tools and Integrations
Within Creative Cloud, AI powers features such as:
- Photoshop: Auto-masking, content-aware fill, object detection, and neural filters [00:03:50]. The Neural Filters project, predating generative and diffusion models, introduced intent-driven editing with sliders (e.g., smart portrait to make someone smile or age them) [00:12:15]. Generative Fill in Photoshop allows users to select an area and describe what they want to remove, insert, or retouch [00:15:33].
- Adobe Express: Integrates Firefly to enable fast social content creation through templates for communicators and marketers [00:18:02].
- Illustrator: Is also integrating Firefly capabilities [00:05:45].
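The intent-driven editing described above can be sketched as a mapping from named sliders to normalized edit strengths. This is an illustrative toy model only; the slider names ("smile", "age") mirror the Smart Portrait examples from the interview, but the registry and API shape are invented, not Adobe's actual Neural Filters interface.

```python
from dataclasses import dataclass, field

# Hypothetical slider registry for intent-driven edits; names and
# ranges are illustrative, not Adobe's real parameters.
SLIDERS = {"smile": (-1.0, 1.0), "age": (-1.0, 1.0)}

@dataclass
class EditIntent:
    values: dict = field(default_factory=dict)

    def set(self, name: str, value: float) -> None:
        lo, hi = SLIDERS[name]  # raises KeyError for unknown intents
        # Clamp so downstream model conditioning stays in range.
        self.values[name] = max(lo, min(hi, value))

intent = EditIntent()
intent.set("smile", 0.7)
intent.set("age", 2.5)  # out of range, clamped to 1.0
print(intent.values)    # {'smile': 0.7, 'age': 1.0}
```

The point of the clamp is that the user expresses *intent* ("a bit more smile"), while the system keeps model inputs within a safe, well-tested range.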
In Document Cloud, Liquid Mode in Acrobat uses AI to reflow documents on mobile phones for easier reading [00:04:06]. For Experience Cloud, AI capabilities include anomaly detection in analytics and marketing content generation in Adobe Experience Manager (CMS) [00:04:29].
Impact on Creative Workflows
Adobe views AI as another wave of technological disruption, similar to digital publishing, digital photography, the internet, and mobile [00:07:23]. The company believes that while AI makes content creation cheaper and easier, it also exponentially increases the demand for content [00:08:18]. This means creators who embrace AI can become 10x more productive and creative [00:09:03].
Adobe’s strategy is that AI will “touch” most content, either through full generation or editing processes. The goal is not full automation, but to keep creative people in control of both the creation and curation of content [00:09:12]. The company aims for AI to act as a “co-pilot,” guiding users to the desired end result rather than instructing them on complex tool usage [00:15:17].
User Experience Design for AI
Integrating AI into existing complex products like Photoshop, which has many controls accumulated over decades, presented design challenges [00:10:52]. Adobe has explored various user interfaces:
- Intent-Driven Editing: Introduced with Neural Filters, using sliders (e.g., for smile, age) to control AI actions based on user intent [00:12:17].
- Prompt-Based Interfaces: While lowering the barrier to entry, these can be frustrating as users don’t always get what they imagine [00:12:45]. Adobe is aiming for a mix of language, pointing, and sliders for more control [00:13:19].
- Contextual Bars: In Photoshop, a contextual bar appears near a selection, offering options like Generative Fill, often generating content without requiring a prompt by using the rest of the image as an implied prompt [00:22:30].
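The "implied prompt" idea in the contextual bar can be sketched as a request builder in which an empty prompt falls back to conditioning on the pixels surrounding the selection (inpainting). All field names here are hypothetical placeholders, not Adobe's actual API.

```python
def build_fill_request(image_id: str, mask: list, prompt: str = "") -> dict:
    """Assemble a hypothetical generative-fill request.

    When the user gives no prompt, the request relies on the image
    content outside the mask as the implied prompt. Field names are
    illustrative only.
    """
    request = {
        "image": image_id,
        "mask": mask,        # selected region to regenerate
        "mode": "inpaint",
    }
    if prompt.strip():
        request["prompt"] = prompt              # explicit user intent
    else:
        request["condition_on_context"] = True  # implied prompt from image
    return request

print(build_fill_request("photo-123", [[10, 10], [200, 160]]))
```

Treating the prompt as optional is what lets the contextual bar generate plausible fills with a single click.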
To help users, especially new ones, overcome the “cold start problem” of not knowing what to prompt, Adobe employs several strategies:
- Inspiration Streams: On firefly.com, users can remix existing prompts from a curated community stream, allowing them to find something they like and riff from it [00:18:56].
- Adobe Style Engine: Instead of complex prompts for style, users can select styles, lighting conditions, and composition rules from a series of curated options, making it easier to control the output [00:20:10].
- Prompt Autocompletion: Leveraging existing search autocompletion technology to guide users [00:21:26].
- Generative Templates: For Adobe Express, templates are baked in with generative AI capabilities, providing complex compositions with layers and effects [00:21:57].
- Contextual Recommendations: For enterprises using Adobe Experience Manager, contextual AI recommendations can auto-generate portions of text, respecting brand voice [00:23:50]. This ensures content automatically complies with brand guidelines, addressing the “content velocity” problem [00:24:52].
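Prompt autocompletion of the kind mentioned above is commonly built on a prefix trie over popular past prompts. A minimal sketch, with invented example prompts (this is a generic technique, not Adobe's implementation):

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.prompt = None  # full prompt stored at terminal nodes

class PromptAutocomplete:
    def __init__(self, prompts):
        self.root = TrieNode()
        for p in prompts:
            node = self.root
            for ch in p.lower():
                node = node.children.setdefault(ch, TrieNode())
            node.prompt = p

    def suggest(self, prefix, limit=3):
        node = self.root
        for ch in prefix.lower():
            if ch not in node.children:
                return []
            node = node.children[ch]
        # Depth-first collection of completions under the prefix.
        out, stack = [], [node]
        while stack and len(out) < limit:
            n = stack.pop()
            if n.prompt:
                out.append(n.prompt)
            stack.extend(n.children.values())
        return out

ac = PromptAutocomplete([
    "castle on a hill at sunset",
    "castle in the clouds",
    "cat wearing sunglasses",
])
print(ac.suggest("cast"))
```

A production system would rank completions by popularity and filter them through the same safety checks as full prompts, but the prefix lookup is the core of the "cold start" assist.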
Trust, Safety, and Data Strategy
Adobe’s approach to AI model training and usage places a high bar on trust and safety, particularly concerning bias and harmful content [00:26:19]. A key differentiator of their Firefly strategy is training on data from their Adobe Stock database [00:26:41]. This approach:
- Provides high-quality content [00:27:03].
- Ensures the right to train on the data, reducing concerns about consent and compensation for artists [00:26:56].
- Minimizes the risk of IP infringement and harmful content, as Adobe Stock has automated and manual moderation processes [00:27:09].
Despite this, training data can still contain inherent biases. Adobe addresses this through:
- Person Detector: A model identifies person references in prompts (e.g., “lawyer”) and debiases the content to introduce a fair distribution of skin tones, genders, and age groups based on the country of origin [00:29:19].
- Toxicity Detectors and NSFW Filters: Multiple AI models are trained to differentiate terms and filter out “not safe for work” content to ensure Firefly is accessible for all ages [00:31:01].
- Child-Specific Safeguards: Systems detect prompts referencing children to prevent the generation of inappropriate content (e.g., children with tobacco) [00:31:21].
- Feedback Mechanisms: firefly.com and Photoshop include robust feedback systems for users to report issues, which provides new training data for refining algorithms and rules [00:32:02].
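Two of the prompt-side safeguards above can be sketched together: detecting person references to trigger debiasing, and blocking unsafe term combinations involving children. All word lists and the attribute distribution below are invented toy placeholders; a real system would use trained classifiers and country-localized distributions, as the interview describes.

```python
import random

# Toy word lists; real systems use trained detectors, not keyword sets.
PERSON_TERMS = {"lawyer", "doctor", "teacher", "person"}
CHILD_TERMS = {"child", "kid", "children"}
RESTRICTED_WITH_CHILDREN = {"tobacco", "alcohol"}

# Placeholder attribute pool; a real pipeline would sample skin tone,
# gender, and age group from a fair, localized distribution.
ATTRIBUTE_POOL = ["type I", "type II", "type III", "type IV", "type V", "type VI"]

def preprocess_prompt(prompt: str, rng: random.Random) -> dict:
    words = set(prompt.lower().split())
    if words & CHILD_TERMS and words & RESTRICTED_WITH_CHILDREN:
        return {"allowed": False, "reason": "child-safety rule"}
    result = {"allowed": True, "prompt": prompt}
    if words & PERSON_TERMS:
        # Debias: attach a sampled attribute so outputs vary fairly.
        result["attribute_hint"] = rng.choice(ATTRIBUTE_POOL)
    return result

rng = random.Random(0)
print(preprocess_prompt("a lawyer in court", rng))
print(preprocess_prompt("child with tobacco", rng))
```

The key design point is that both checks run before generation, so unsafe prompts never reach the model and person prompts are diversified at the source.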
Adobe also clarified that it does not train models on customer data stored in Creative Cloud, since that data can include sensitive brand campaigns and is handled separately [00:33:30]. For firefly.com, however, the terms of use allow storing prompts and generated images for training, serving as a reinforcement learning from human feedback (RLHF)-style system to align models with user preferences [00:34:03].
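The feedback loop described above can be sketched as collecting (prompt, output, rating) records into a preference dataset for later alignment. The schema and class names here are invented for illustration:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class FeedbackRecord:
    prompt: str
    image_id: str
    rating: int  # e.g., +1 thumbs-up, -1 thumbs-down

class PreferenceLog:
    def __init__(self):
        self.records = []

    def add(self, prompt: str, image_id: str, rating: int) -> None:
        if rating not in (-1, 1):
            raise ValueError("rating must be -1 or +1")
        self.records.append(FeedbackRecord(prompt, image_id, rating))

    def export_jsonl(self) -> str:
        # Paired positive/negative examples like these can seed
        # RLHF-style preference tuning.
        return "\n".join(json.dumps(asdict(r)) for r in self.records)

log = PreferenceLog()
log.add("castle at sunset", "img-001", 1)
log.add("castle at sunset", "img-002", -1)
print(log.export_jsonl())
```

Same-prompt pairs with opposite ratings are especially valuable, since they directly express which of two outputs users preferred.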
Organizational Structure for AI Development
Adobe employs a hybrid organizational structure for AI:
- Horizontal Layer:
- Adobe Research: Conducts fundamental research in various modalities (images, video, vector, 3D), publishes papers, and brings models closer to production [00:41:52].
- Sensei Team: Manages the distributed training platform, enabling researchers to train models in a multi-tenant environment, and provides curated high-quality datasets [00:42:16].
- Applied Research: Refines and prepares models from research for productization, including further training or architectural changes [00:42:52].
- Gen Services Team: Creates efficient services and APIs to run these models in the cloud, focusing on optimized inference to ensure fast response times (e.g., below 15 seconds for a 1K by 1K image) [00:43:09]. This “AI Super Highway” acts as a managed model zoo [00:44:59].
- Vertical Layer (Tech Transfer Subgroups): Each product team (Photoshop, Adobe Express, Illustrator) has specialized subgroups that take the APIs from the Gen Services team and build user experiences around them [00:44:09]. These experts understand the core workflows needing AI the most [00:44:39].
This dual approach combines economies of scale for core AI development with product-specific expertise for seamless integration.
The Future Vision of AI in Creative Cloud
Alexandru Costin believes that all software companies will become AI companies, even if they don’t explicitly call themselves such [00:49:08]. For Creative Cloud, the future of AI in Adobe’s products involves:
- Continuous Architectural Evolution: AI model architectures will constantly change, with Adobe focusing on embracing new generations that offer 2x quality or performance [00:49:32].
- Hyper-Personalization: AI will enable massive changes in content types, particularly driving “hyper-personalization” for enterprises and individuals. This solves the “content velocity” or “content supply chain” problem, allowing brands to communicate with customers in an individualized, or even segment-of-one, manner [00:50:31].
- Creative Directors of Meta Experiences: Creative Cloud customers will evolve from visual designers to “creative directors of meta experiences” that are hyper-personalizable [00:51:25].
- Fluid Human-Computer Interaction: The interaction will become more fluid, less convoluted, and more accessible to larger audiences, with AI at its core, but perhaps not explicitly mentioned [00:51:51].
Overall, Adobe’s vision for the future of AI platforms in content creation is one of greater accessibility, creativity, and velocity, driven by continuous AI innovation.