From: redpointai
The AI landscape is developing rapidly, and its future is difficult to predict with certainty [00:27:00]. Even so, some trends and potential future applications, particularly in the realm of personalization, are beginning to emerge [00:48:50].
Personalization: A Key Future Trend
Personalization at a user level is considered a crucial element for AI applications to truly take off [00:53:44]. The ability to dynamically render content and tailor it to the individual user is a significant, underexplored angle for many applications [01:01:24].
Current State of Personalization
The speaker notes that companies like New Computer by Sam Whitmore and Hearth AI are actively exploring personalization in their applications, focusing on memory and relationship management [00:52:52].
Potential Personalized AI Applications
- Personalized Sports Commentary: Generating commentary for sports that is personalized to the viewer, for instance, by referencing their past viewing habits [00:02:57].
- Journaling Apps with Memory: A journal application that remembers personal details about the user from their entries and past interactions [00:55:01]. This contrasts with current products such as Character AI, whose models struggle with long-term memory [00:55:22].
- Tailored Knowledge Bases: Applications like Notion QA are already providing personalized chat over a user’s knowledge base [00:25:40]. In the future, this could extend to services like Perplexity, where search answers are tailored to individual user interests [00:55:53].
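The common thread in these ideas is a memory layer: facts gathered from past interactions are injected into later prompts. A minimal sketch of that pattern, with all names invented for illustration (this is not any product's actual API):

```python
from dataclasses import dataclass, field


@dataclass
class UserMemory:
    """Accumulates facts about a user across sessions (hypothetical sketch)."""
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def build_prompt(self, question: str) -> str:
        # Prepend remembered facts so a model could personalize its answer.
        context = "\n".join(f"- {f}" for f in self.facts)
        return f"Known about this user:\n{context}\n\nUser asks: {question}"


memory = UserMemory()
memory.remember("Prefers concise answers")
memory.remember("Is a Chelsea supporter")
prompt = memory.build_prompt("Summarize last night's match")
```

A journaling app would populate `remember` from parsed entries; a tailored search product would feed the same facts into ranking or answer generation.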
Emerging Application Archetypes
Beyond personalized chat, several other types of AI applications are expected to become more prevalent:
- More Complex Chatbots: These bots would be designed as state machines with distinct stages, such as customer support bots that transition through various debugging steps, or AI therapists with defined conversational stages [00:41:55].
- Longer Running Jobs: Applications that perform more complicated tasks requiring multiple Large Language Model (LLM) calls and self-correction, taking minutes rather than seconds [00:42:19]. Examples include GPT Researcher for generating research reports and GPT Newsletter for creating first drafts of articles [00:42:24].
- AI-Native User Experiences (UX): There is significant innovation still to be done in how people interact with AI [00:43:27]. One example is an AI-native spreadsheet in which each cell is populated by a separate agent, with many agents running in parallel [00:43:55]. This model lets users inspect results after a task completes, rather than interacting continuously [00:44:33].
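The state-machine mental model can be made concrete as an explicit transition table. A toy support bot sketch, with stage names and transition rules invented for illustration:

```python
# Toy customer-support bot modeled as a state machine: each stage maps
# the user's reply to the next stage. All stage names are illustrative.
TRANSITIONS = {
    "greet":    lambda reply: "diagnose",
    "diagnose": lambda reply: "resolve" if "yes" in reply.lower() else "escalate",
    "resolve":  lambda reply: "done",
    "escalate": lambda reply: "done",
}


def run_turn(stage: str, user_reply: str) -> str:
    """Advance the conversation one turn and return the next stage."""
    return TRANSITIONS[stage](user_reply)


stage = "greet"
for reply in ["hi", "yes, the error is gone", "thanks"]:
    stage = run_turn(stage, reply)
# stage is now "done"
```

A real bot would let an LLM generate the reply within each stage, but keep the stage transitions explicit and debuggable, which is the appeal of the mental model.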
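The spreadsheet idea, one agent per cell running in parallel, maps naturally onto a thread pool. In this sketch, `fake_agent` is a stand-in for a real LLM call:

```python
from concurrent.futures import ThreadPoolExecutor


def fake_agent(task: str) -> str:
    # Placeholder for a real LLM/agent call; returns a canned result.
    return f"result for {task!r}"


def fill_sheet(cells: dict[str, str]) -> dict[str, str]:
    """Populate every cell concurrently, one agent per cell."""
    with ThreadPoolExecutor() as pool:
        futures = {ref: pool.submit(fake_agent, task) for ref, task in cells.items()}
        # Gather all results once every agent has finished, so the user
        # inspects a completed sheet rather than interacting continuously.
        return {ref: fut.result() for ref, fut in futures.items()}


sheet = fill_sheet({"A1": "company founded year", "B1": "company HQ city"})
```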
Role of Open Source Models
Open-source models are expected to become more ubiquitous [00:51:58]. While proprietary models like OpenAI's are currently preferred for complex tasks, there is strong interest in local models and agents that can run on user devices [00:52:05]. This trend is driven by the desire to process personalized or sensitive data locally, such as private documents or coaching personas [00:52:16].
Obviated Concepts in AI Development
As AI models improve, some current practices may become less necessary:
- Context Window Management: With larger context windows in future LLMs, some “tricks” for managing conversation history and summarizing previous turns might become obsolete [00:46:01].
- Manual Structured Output: The need for prompts that explicitly instruct models to “write this in JSON or Pydantic” should disappear as function calling and structured extraction capabilities improve [00:48:32].
- Some Multimodal Limitations: While currently overhyped for precise knowledge work, multimodal models are expected to improve their spatial awareness and extraction capabilities, enabling more sophisticated tasks [00:49:11].
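The history-management trick being obviated looks roughly like this: keep only the newest turns verbatim and collapse older ones into a running summary. A minimal sketch, where the `summarize` stub stands in for an LLM summarization call:

```python
def summarize(turns: list[str]) -> str:
    # Stub: a real system would ask an LLM to compress these turns.
    return f"[summary of {len(turns)} earlier turns]"


def trim_history(history: list[str], keep_last: int = 4) -> list[str]:
    """Replace everything but the newest turns with a one-line summary."""
    if len(history) <= keep_last:
        return history
    return [summarize(history[:-keep_last])] + history[-keep_last:]


history = [f"turn {i}" for i in range(10)]
trimmed = trim_history(history)
# trimmed: ["[summary of 6 earlier turns]", "turn 6", ..., "turn 9"]
```

With a context window large enough to hold the full history, this machinery simply disappears.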
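Likewise, the manual structured-output pattern today amounts to appending format instructions and parsing the reply yourself. A sketch of that pattern (the hint text and parsing are illustrative, not any particular library's approach):

```python
import json

# Appended to prompts today; better function calling should make this obsolete.
FORMAT_HINT = 'Respond ONLY with JSON like {"name": ..., "sentiment": ...}.'


def parse_reply(raw: str) -> dict:
    """Pull the JSON object out of a model reply, tolerating extra prose."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in reply")
    return json.loads(raw[start : end + 1])


reply = 'Sure! {"name": "Acme", "sentiment": "positive"} Hope that helps.'
data = parse_reply(reply)
```

Native structured extraction moves this burden from brittle string handling in the application into the model API itself.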
However, certain aspects like retrieval-augmented generation (RAG) and the state machine mental model for agents are likely to persist due to their inherent utility and clarity for developers [00:47:14], [00:48:05].
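RAG's core loop is simple enough to show end to end. This sketch uses a toy keyword-overlap retriever as a stand-in for a real vector store; the corpus and scoring are deliberately minimal:

```python
DOCS = [
    "LangChain provides composable building blocks for LLM apps.",
    "RAG retrieves relevant documents and feeds them to the model.",
    "State machines give agents explicit, debuggable stages.",
]


def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by naive keyword overlap with the query (toy retriever)."""
    words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_rag_prompt(query: str) -> str:
    # Retrieved passages are prepended so the model answers from them.
    context = "\n".join(retrieve(query, DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Production systems replace the keyword overlap with embedding similarity, but the retrieve-then-prompt shape is exactly what is expected to persist.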
Advice for Startups
For startups building in AI, the advice is to prioritize building and achieving product-market fit (PMF) over concerns about cost optimization or future technological shifts [00:45:08]. “No GPUs before PMF” emphasizes using existing powerful models like GPT-4 to validate ideas before investing heavily in custom model training or infrastructure [00:45:42]. The space is still early, and the focus should be on building applications and gaining a deep understanding of how users interact with these systems [01:00:02].