From: redpointai

AI is transforming creative tooling across images, video, music, and more [00:00:00]. Scott Belsky, founder of Behance, prolific investor, and Chief Product Officer/Chief Strategy Officer at Adobe, discusses this evolution, including when companies should train their own models and Adobe’s approach [00:00:10]. The conversation also touches on startup opportunities in the creative tool space, hyper-personalized brands, and the future of Adobe AI [00:00:16].

Unlocking New Creative Workflows

While mainstream AI-generated content often goes viral, Scott Belsky is particularly interested in how creative professionals are leveraging AI to unlock new workflows [00:01:02].

Project Neo and Firefly Synergy

An example of this is the unexpected synergy between Adobe’s Project Neo, a 3D illustration program, and Adobe Firefly, an image model [00:01:15]. Users discovered they could render 3D structures in Project Neo and then use them as “structure references” for Adobe Firefly, providing “extreme precision” in guiding image generation [00:01:25]. This workflow was not anticipated by Adobe, highlighting that the focus is shifting from “the next best model” to the “controls that one can apply on top of the model” [00:02:07].
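To make the idea concrete, here is a minimal sketch of what such a structure-reference call might look like, assuming a hypothetical Firefly-style HTTP API. The endpoint, field names, and auth scheme below are illustrative placeholders, not Adobe’s documented interface:

```python
# Sketch of a structure-reference workflow: a 3D render (e.g., from a tool
# like Project Neo) constrains composition, while the prompt drives style.
# All identifiers here are hypothetical, for illustration only.
import base64
import requests

API_URL = "https://example.adobe.io/v1/images/generate"  # placeholder endpoint

def generate_with_structure(prompt: str, structure_image_path: str, token: str) -> bytes:
    """Generate an image guided by a 3D render used as a structure reference."""
    with open(structure_image_path, "rb") as f:
        structure_b64 = base64.b64encode(f.read()).decode("ascii")

    payload = {
        "prompt": prompt,
        # The structure reference pins down layout with "extreme precision"
        # while leaving surface details to the model.
        "structure_reference": {"image": structure_b64, "strength": 0.8},
    }
    resp = requests.post(API_URL, json=payload,
                         headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return base64.b64decode(resp.json()["image"])
```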

Adobe’s AI Strategy

Adobe’s AI strategy centers on three pillars: interfaces, models, and data [00:03:00].

Firefly Family of Models

The Firefly family consists of homegrown generative models trained on licensed content [00:03:10]. Adobe offers a compensation program for content contributors and aims to provide indemnification to all customers, emphasizing the importance of transparent model training [00:03:20].

Interface Integration and Control

Adobe integrates these models into products like Photoshop [00:03:36]. They also make models available to third parties and large companies for “big complicated at-scale workflows” [00:03:42]. Control capabilities like structure reference and style match are being built [00:03:59].

Data and Custom Models

Adobe focuses on enabling custom models for customers [00:04:06]. For example, Nickelodeon could train a version of Firefly on SpongeBob SquarePants to generate content for ideation, character development, and storylines without copyright concerns [00:04:12].
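A custom model along these lines would typically be a lightweight fine-tune of the base model on the rights holder’s licensed assets. As a rough sketch, assuming a hypothetical fine-tuning job API (every identifier below is a placeholder, not a documented Adobe interface):

```python
# Sketch of a brand-specific custom model request. The config pairs licensed
# training data with a usage policy so outputs stay within what the rights
# holder permits. All names are illustrative.
custom_model_job = {
    "base_model": "firefly-image-v3",                 # hypothetical base model id
    "training_data": "s3://nick-assets/spongebob/",   # rights holder's licensed assets
    "method": "lora",                                 # lightweight adapter fine-tune
    "usage_policy": {
        # Scope outputs to internal ideation, character development,
        # and storyline exploration.
        "allowed_scopes": ["ideation", "character_development", "storylines"],
        "owner": "nickelodeon",
    },
}
```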

LLM Partnerships

For large language models (LLMs), Adobe partners with other companies rather than building its own [00:04:28]. These partnerships surface through products like Acrobat’s AI Assistant and Adobe’s digital experience products for marketing analytics [00:04:32].

User Adaptation and the Role of “Taste”

AI tools allow creative professionals to “explore far more surface area of possibility far more quickly” [00:06:31]. For instance, Generative Recolor in Illustrator allows instant application of hundreds or thousands of color palettes to vector creations, a process that used to take days [00:06:42]. This frees up time for higher-order exploration [00:07:04].

Scott Belsky draws a parallel to the invention of the camera, which initially offended portrait artists but eventually led to photography becoming an art form [00:07:51]. The skill shifted from rudimentary tasks to choosing the right lens, lighting, and ultimately, selecting the “best” photos, which requires “taste” [00:08:32].

Taste is now “more important than ever before” [00:08:58]. Adobe’s role is to enable users to “flex their taste” through prompt augmentation and onboarding experiences, helping them bring their mind’s eye to life [00:09:12].

The Future of AI Models and Interfaces

Model Proliferation

Belsky believes that instead of converging on a single creative model, there will be “thousands of models,” many of which will become commodities or focus on “very niche use cases” [00:10:00]. Creative tools should allow users to choose different models for different purposes [00:10:17]. For example, mood boarding might use models trained on diverse content (even if not licensed for commercial use) for ideation, while commercial projects require models trained on licensed data [00:10:29].
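One way a creative tool could support this per-task model choice is a small registry that tags each model with its licensing status, routing ideation to broadly trained models and commercial output to licensed-data models. A minimal sketch, with invented model names:

```python
# Sketch of per-task model selection with licensing metadata.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelInfo:
    name: str
    commercially_licensed: bool   # trained only on licensed data?
    niche: str                    # e.g. "mood boards", "general"

REGISTRY = [
    ModelInfo("firefly-image-v3", commercially_licensed=True,  niche="general"),
    ModelInfo("open-ideation-xl", commercially_licensed=False, niche="mood boards"),
]

def pick_model(task: str, commercial_use: bool) -> ModelInfo:
    """Ideation may use broadly trained models; commercial output may not."""
    candidates = [m for m in REGISTRY
                  if m.commercially_licensed or not commercial_use]
    # Prefer a niche match, fall back to a general-purpose model.
    for m in candidates:
        if m.niche == task:
            return m
    return next(m for m in candidates if m.niche == "general")

pick_model("mood boards", commercial_use=False)  # -> open-ideation-xl
pick_model("mood boards", commercial_use=True)   # -> firefly-image-v3 (licensed)
```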

Adobe’s strategy for building its own models centers on areas where they are “world experts” and can deliver a “better end-to-end experience” [00:11:35]. This includes fine-tuning controls, custom model capabilities, and leveraging data from how users interact with Adobe’s tools [00:11:51].

UI on Demand

The next generation of AI tools will move beyond command-line prompts to offer “custom UI to fine-tune what you just got” [00:13:29]. This means the interface will adapt dynamically to the user’s specific use case, spinning up custom UI as needed and disappearing when done [00:13:58].
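One plausible implementation of this, sketched below, has the model return a small declarative spec describing only the controls relevant to the result it just produced; the host application renders them and discards them when the user moves on. The spec format is an assumption for illustration, not a shipped Adobe format:

```python
# Sketch of "UI on demand": a model-emitted spec of ephemeral controls
# for fine-tuning the output the user just generated.
import json

ui_spec_json = """
{
  "controls": [
    {"type": "slider", "id": "grain",  "label": "Film grain",   "min": 0,   "max": 100, "value": 20},
    {"type": "slider", "id": "warmth", "label": "Color warmth", "min": -50, "max": 50,  "value": 0},
    {"type": "choice", "id": "crop",   "label": "Crop", "options": ["1:1", "16:9", "9:16"], "value": "16:9"}
  ]
}
"""

def render(spec: dict) -> None:
    """Stand-in for a real renderer: print each ephemeral control."""
    for c in spec["controls"]:
        print(f"[{c['type']}] {c['label']} -> {c['value']}")

render(json.loads(ui_spec_json))
```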

Challenges in AI Video Creation

While many startups are touting AI video models, the bar for “professional grade” and “commercially viable” quality is very high [00:14:17]. Belsky compares it to self-driving cars, noting a significant gap between “cool happy path demos” and pragmatic, everyday use cases [00:15:02].

A pragmatic milestone for AI video creation would be the ability to extend a movie scene by a few seconds to meet timing requirements, saving costly re-shoots [00:15:31].

Opportunities and the Impact of Hyper-Personalization

Startup Opportunities

Belsky believes the real opportunity for startups in the creative and marketing space lies in differentiation at the interface and data level, rather than solely relying on models, many of which will become commoditized [00:16:51].

The industry is moving towards “hyper-personalized digital experiences at scale” [00:17:24]. This requires customer data, marketing workflows for deployment and optimization, and a creative stack connected to these systems with “guard rails” and “brand check AI” [00:17:45].
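Here is a sketch of what a “brand check AI” gate in such a pipeline might look like, with a stub classifier standing in for real palette, logo, and tone-of-voice checks (all names below are illustrative):

```python
# Sketch of a brand-check guardrail: every generated variant must pass a
# brand-compliance check before it reaches the marketing workflow.
from typing import Callable

def brand_check(asset: str) -> bool:
    """Stub classifier: approve assets that mention the brand name."""
    return "AcmeCo" in asset  # placeholder rule

def deploy_personalized(variants: list[str],
                        check: Callable[[str], bool] = brand_check) -> list[str]:
    """Only brand-compliant variants are deployed; the rest go back for regeneration."""
    approved = [v for v in variants if check(v)]
    print(f"approved {len(approved)}, rejected {len(variants) - len(approved)}")
    return approved

deploy_personalized(["AcmeCo spring promo for hikers",
                     "generic promo with off-brand tone"])
```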

Opportunities exist in:

  • Helping small businesses “operate as huge businesses” by enabling capabilities like a 100-person marketing team [00:19:06].
  • Tackling antiquated processes in sectors like law or government [00:19:29].

The Pendulum of Hyper-Personalization

The rise of hyper-personalized content means brands will “flood the zone,” inundating individuals with tailored experiences [00:20:18]. In response, humans will “crave scarcity, meaning, and craft in the digital experiences that we consume” [00:21:52]. This will “reboot the role of humans in content creation” [00:21:58]. Shared social experiences, like watching a movie and discussing it with friends, are unlikely to disappear [00:22:07].

AI in Music

Many AI music startups focus solely on AI’s role, forgetting that the reason people repeatedly listen to a song is often the human story behind it [00:22:31]. AI should lower the barrier for participation in music creation while also enabling artists to tell compelling human stories that resonate with listeners [00:22:56].

Future of AI Agents and Cost Considerations

Proactive AI Suggestions

Beyond simply doing things for users or answering questions, the next tier of AI agents will offer “proactive suggestions of things you didn’t know you should be doing or trying” [00:24:25]. This includes guiding exploration, predicting performance, and suggesting variations [00:24:37]. Additionally, customers want increased speed and quality [00:24:56].

Cost and Pricing

Cost is a consideration for enabling faster performance, and there’s a constant push for efficiency in models [00:25:24]. However, product teams are encouraged to prioritize value for the customer over cost constraints [00:25:47].

Adobe’s pricing model uses “generative credits” tied to existing plans [00:26:01], and customers can upgrade for more credits. This flexible approach lets Adobe adjust credit costs for more intensive capabilities (like generative video) while keeping them accessible and integrated into everyone’s workflow [00:27:07].
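A back-of-the-envelope sketch of how such a credit scheme could work; the specific credit costs and plan sizes below are invented for illustration, not Adobe’s actual rates:

```python
# Sketch of a generative-credit scheme: each capability draws a different
# number of credits from a plan allowance. Costs are hypothetical.
CREDIT_COST = {
    "image_generation": 1,   # lightweight, kept cheap to stay in every workflow
    "generative_fill": 1,
    "generative_video": 20,  # compute-intensive capabilities cost more
}

def charge(balance: int, capability: str, n: int = 1) -> int:
    """Deduct credits for n uses of a capability; fail if the plan is exhausted."""
    cost = CREDIT_COST[capability] * n
    if cost > balance:
        raise RuntimeError("Out of generative credits: upgrade plan for more")
    return balance - cost

balance = charge(100, "image_generation", n=5)  # 95 credits left
balance = charge(balance, "generative_video")   # 75 credits left
```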

Surprises and Changing Perspectives

The Power of Defaults

The biggest surprise in building Adobe’s AI features was how much defaults matter: “the devil’s in the defaults” [00:28:16]. Placing Generative Fill and a new generative bar prominently in Photoshop “totally unlocked utilization in a way we could have never forecasted” [00:28:22]. The challenge now is making AI a default part of every product’s usage to grant users “superpowers” [00:28:41].

Proliferation of Models

Scott Belsky has changed his mind on whether only a few models would dominate [00:29:04]. He now believes there will be “thousands of models,” with much happening “on the edge” [00:29:15]. As model capabilities increase, many use cases will fall below the “frontier,” leading to a focus on cost when delivering capabilities, especially with local or open-source models [00:29:34].

Adobe views frontier model players like Pika and Sora as platforms [00:30:19]. As these models improve, Adobe’s products also improve, as long as people still need an interface to tell a story, leverage data, and use tools to achieve their creative vision [00:30:23].

Beyond Creative AI

Exciting Non-Creative AI Startup

Belsky highlights Cobalt, a company using AI to identify mineral deposits [00:31:08]. By applying AI to an antiquated, inefficient industry (mining), Cobalt can make deposit identification 4 to 10x faster and more reliable while reducing cost, with global implications for batteries and EVs [00:31:48].

Future Entrepreneurial Focus: Personalization in Physical Spaces

If he were building a startup outside Adobe, Belsky would focus on extending personalization beyond digital experiences into physical spaces [00:32:28]. The goal is for AI to unlock the magical feeling of being “known” and remembered (e.g., a restaurant that knows your favorite drink, or a personal shopper who remembers your preferences), making highly personalized service accessible to everyone, much as Uber democratized personal drivers [00:32:50].

Learn More

To learn more about Scott Belsky’s insights, explore his monthly newsletter, “Implications” [00:34:27].