From: redpointai
OpenAI, a leading AI firm, focuses on building world-leading models that power much of the ongoing AI transformation [00:01:37]. Peter Welinder, VP of Product and Partnerships at OpenAI, oversees well-known products like GPT-4, ChatGPT, and GitHub Copilot [00:00:13].
Value Accrual in the AI Ecosystem
OpenAI believes that most value will ultimately accrue at the application layer across various applications, rather than solely at the model or infrastructure layer [00:02:21]. The core mission of OpenAI is to build AGI (Artificial General Intelligence), ensure its safety, and make sure it benefits all humanity [00:02:34]. Central to this mission is enabling as many builders as possible to build products on top of their technology [00:02:44].
To achieve this, OpenAI has made a conscious choice to avoid excessive value extraction at the model infrastructure layer, keeping prices low to enable more development [00:02:50]. This strategy is evident in dramatic price cuts over the years, such as a 70% price reduction and the release of GPT-3.5 Turbo, which was 10 times cheaper than the original 3.5 model while being much faster [00:03:07]. Both research and engineering efforts at OpenAI aim to bring down prices, broadening access and applicability of their models to more applications [00:03:22].
OpenAI acknowledges that the right tools for this technology have not yet been fully invented and prefers that the developer ecosystem figure out and build them [00:03:56]. With millions of people already building on its technology, developers are the ones who will create the most interesting applications and tools [00:04:40].
“I think that’s a super critical component, and tooling is obviously a super important component. I think we spend less of our time on that. The right tools for this technology just haven’t been invented yet; some of them are probably in the early stages. And we have no illusion that we will be the ones that come up with those. I would much rather have the developer ecosystem figure out what the best tools are and actually build those.” [00:03:45]
Ultimately, value capture is expected at the application layer, where companies can build great applications leveraging standard competitive advantages like network effects and branding; this will be no different in the AI space [00:05:10]. OpenAI aspires to be a platform where the value created on top is much greater than the value captured by the platform provider, echoing Bill Gates’s definition of a platform [00:05:46].
OpenAI’s Product Focus: Platform vs. Applications
OpenAI’s primary goal is to remain at the platform level [00:06:18]. While they offer ChatGPT, a consumer-focused product, it is designed as a very general application meant to answer generic questions [00:06:24]. It is not intended to be the best AI teacher, doctor, or lawyer; specialized applications are being built by other companies on top of OpenAI’s models [00:06:39]. These specialized applications involve integrations with other products, proprietary content, and specific domain expertise [00:06:58].
OpenAI wants to provide a platform for companies to build these experiences, potentially connecting them to ChatGPT through features like plugins [00:07:16]. This allows ChatGPT to connect to external services, further solidifying OpenAI’s role as a platform layer for both raw reasoning and consumer access [00:07:38].
Product Development Pace and Prioritization
OpenAI’s rapid pace of product development (e.g., plugins, web browsing, mobile) is rooted in a high focus on the models themselves [00:08:03]. OpenAI made a significant top-down bet on language models, dedicating most of its personnel, compute, and GPUs to training them [00:08:43]. They streamlined their focus, moving away from other bets like robotics or beating world champions in Dota 2, as confidence grew in the language model direction [00:09:27].
The flexibility of language models allows for rapid feature development like browsing and plugins [00:09:55]. The organization attributes its speed to its smart and driven researchers, engineers, and product people, who are highly motivated to get their work into users’ hands and learn from feedback [00:10:13]. Features like browsing and plugins originated from researchers’ visions of building on language models [00:10:50]. OpenAI aims to build products with the flexibility to easily deploy new features, like unifying disparate efforts (browsing, code interpreter, external APIs) under a common pattern like plugins [00:11:03].
Autonomous Agents and Plugins
OpenAI views its plugins as “mini-agents” capable of performing tasks by connecting to multiple APIs in a sequence [00:11:46]. For example, a plugin could create a recipe and add it to an Instacart shopping list [00:11:57]. The future direction involves more complex agent-like behavior, where the model only checks in with the user when uncertain [00:12:14].
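That chaining pattern can be sketched as a few function calls in sequence. This is purely illustrative: both service functions below are hypothetical stand-ins (e.g. for a recipe generator and an Instacart-style shopping-list API), not real plugin endpoints, and `run_plugin_chain` is an invented helper name.

```python
# A plugin acting as a "mini-agent": the task is completed by calling
# several APIs in sequence, each step's output feeding the next.
# Both service functions are hypothetical stand-ins, not real endpoints.

def create_recipe(dish: str) -> dict:
    # Stand-in for a recipe-generation API call.
    return {"dish": dish, "ingredients": ["pasta", "tomatoes", "basil"]}

def add_to_shopping_list(items: list[str]) -> list[str]:
    # Stand-in for a grocery-service API call (an Instacart-style plugin).
    return list(items)

def run_plugin_chain(dish: str) -> list[str]:
    # Step 1: create the recipe. Step 2: feed its ingredients to the list.
    recipe = create_recipe(dish)
    return add_to_shopping_list(recipe["ingredients"])
```

In a real agent loop, the model would decide which call to make next from each step's output, checking in with the user only when it is uncertain.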
The API aspect of OpenAI’s product is crucial for enabling developers to push the boundaries of this technology faster than OpenAI could on its own [00:12:34]. While plugins are still early and have rough edges, their potential is clear, and future generational models are expected to be smarter and more robust, making these applications work seamlessly [00:12:54].
OpenAI optimizes for a future with AGI, envisioning a product that behaves like a new employee—capable of executing tasks, asking clarifying questions, and iterating based on feedback [00:13:30].
Addressing Hallucinations and Developer Needs
The biggest challenge preventing wider enterprise adoption is the problem of models hallucinating facts, making them untrustworthy [00:26:17]. This is an open research problem, and OpenAI invests heavily in making its models more robust [00:26:38].
A common workaround is grounding models in external data [00:26:55]. By embedding both the user’s question and internal company documents and using a vector search database, the model can find and summarize the relevant passages, significantly reducing hallucinations [00:27:01]. Developers can also instruct the model to answer “I don’t know” when the answer isn’t found in the provided context [00:27:34].
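The grounding pattern described above can be sketched end to end. This is a minimal illustration, not OpenAI's implementation: a toy bag-of-words vector stands in for a real embedding model, an in-memory list stands in for a vector database, and `retrieve` and `build_prompt` are invented helper names.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question: str, documents: list[str], top_k: int = 1) -> list[str]:
    # "Vector search": rank documents by similarity to the embedded question.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str, documents: list[str]) -> str:
    # Ground the model in retrieved context, and instruct it to admit
    # when the answer is not in that context.
    context = "\n".join(retrieve(question, documents))
    return (
        "Answer using only the context below. If the answer is not there, "
        'say "I don\'t know."\n\n'
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

A production version would swap in a real embedding endpoint and vector database, but the shape of the flow (embed, retrieve, stuff context into the prompt, instruct the model to refuse out-of-context answers) is the same.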
Regarding developer tooling, OpenAI listens to its developers to identify obstacles and enable faster progress [00:28:22]. If all developers need a specific piece of tooling, OpenAI might build it to facilitate quick onboarding and development [00:28:36]. However, its core focus remains on what it does best: training models and ensuring high-quality inference, i.e. running those models at scale [00:28:58].
“I think it’s training these models and making sure inference is really, really good. That’s where we put most of our efforts, because that’s where we can provide the most value in some sense.” [00:29:02]
OpenAI prioritizes lower prices, higher reliability, and lower latency because these fundamentals are crucial for any application; without them, tooling becomes irrelevant [00:29:21].
Internal Use of ChatGPT
Internally at OpenAI, ChatGPT is used for a broad range of purposes, reflecting its diverse applications among external users [00:44:05]. Common internal uses include:
- Coding: Debugging issues and analyzing stack traces [00:44:30].
- Writing: Overcoming writer’s block, improving writing quality, and drafting emails based on key points [00:44:40].
Future Outlook and Competitive Positioning
OpenAI believes that the majority of value will ultimately reside in the smartest models [00:21:54]. Just as companies seek the best talent, they will seek the smartest models to build the best products [00:22:09]. Choosing a “dumber” model will put a company at a competitive disadvantage [00:22:28].
This focus on the smartest models allows tackling the most economically valuable problems [00:22:42]. Initially, this means building co-pilots for various professions (e.g., lawyers, medical providers), but eventually, it could extend to AI scientists developing new drugs or solutions to climate change, leading to immense economic output [00:22:56].
While smaller, open-source models have auxiliary uses like embeddings for retrieval or on-premise deployments where latency is critical [00:23:41], applications requiring the utmost reliability and intelligence will gravitate towards proprietary models [00:23:56]. An example of OpenAI’s open-source investment is its Whisper models for accurate audio transcription, which enables more applications that feed into large language models [00:24:39]. This decision was based on enabling broader use cases, not direct revenue from speech recognition [00:25:05].
The biggest threat to OpenAI’s leadership position would be losing touch with its users and developers [00:47:10]. Peter Welinder expresses concern about the tension where new, better models can inadvertently replace features developers have built [00:47:40]. Scaling the “great customer experience” that existed when OpenAI had fewer customers is a key challenge to ensure people continue to embrace and build on their technology [00:48:31].