From: redpointai

Harrison Chase, founder and CEO of LangChain, initially became interested in AI through sports analytics [01:11]. However, he noted that the potential applications of Large Language Models (LLMs) in the sports world are “fewer than I would have wished” [01:35], and that he would have “loved for there to be like a really clear connection” [01:38].

Current and Potential Applications

Querying Player Statistics

One observed application is using natural language to query player statistics [01:45]. This lets users ask questions like “who leads the league in three-pointers” and get an answer back from a database [01:56]. It makes statistics more accessible, which is notable given their role in fantasy sports, gambling, and general fan interest [02:06]. However, Chase doesn’t consider this “super unique or super revolutionary” [02:21].
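Mechanically, this is a text-to-SQL pattern: the model translates the question into a query, and the database returns the answer. The following is a minimal sketch under stated assumptions, not the actual system discussed in the episode; the `player_stats` table, its sample data, and the `gpt-4o` model name are all placeholders, and it assumes the OpenAI Python client with an API key in the environment.

```python
import sqlite3
from openai import OpenAI

# Hypothetical stats table; schema and rows are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE player_stats (player TEXT, team TEXT, three_pointers_made INTEGER)"
)
conn.executemany(
    "INSERT INTO player_stats VALUES (?, ?, ?)",
    [("Player A", "Team X", 301), ("Player B", "Team Y", 287)],
)

SCHEMA = "player_stats(player TEXT, team TEXT, three_pointers_made INTEGER)"

def answer_stat_question(question: str) -> str:
    """Translate a natural-language question into SQL, run it, and
    return the raw result rows as a string."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable model works
        messages=[
            {
                "role": "system",
                "content": (
                    f"Write a single SQLite SELECT statement for the table {SCHEMA}. "
                    "Return only the SQL, with no explanation or code fences."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    sql = resp.choices[0].message.content.strip()
    # A real system would validate the SQL and handle code fences or refusals.
    rows = conn.execute(sql).fetchall()
    return str(rows)

print(answer_stat_question("Who leads the league in three-pointers?"))
```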

Generating Commentary

A “really cool” potential application is generating commentary for games [02:30]. This aligns with creative generative-AI applications like Midjourney or Character AI [02:40]. The idea is to generate commentary for a game without commentators being physically present, or even to personalize the commentary to the specific viewer [02:52]. For example, the commentary could reference a specific shot the viewer had watched previously [03:04].
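One way to personalize commentary is to inject the viewer’s watch history into the prompt alongside the play-by-play. The sketch below is purely illustrative and not from the episode: the `viewer_context` dict, the play-by-play strings, and the model name are all assumptions.

```python
from openai import OpenAI

def personalized_commentary(play_by_play: list[str], viewer_context: dict) -> str:
    """Call the most recent play, referencing what this viewer has already seen.
    Both inputs are hypothetical structures for illustration."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    prompt = (
        "You are a live basketball commentator.\n"
        f"Recent plays: {play_by_play[-5:]}\n"
        f"This viewer previously watched: {viewer_context.get('recently_watched', [])}\n"
        "Call the most recent play in two sentences, and reference something "
        "the viewer has already seen if it is relevant."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(
    personalized_commentary(
        play_by_play=["Player B with the steal", "Player A pulls up from deep... it's good!"],
        viewer_context={"recently_watched": ["Player A's game-winning three last week"]},
    )
)
```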

Internal Operations

An NBA team has reportedly talked with LangChain, primarily about internal operations [03:15]. The focus is on letting people inside the organization query internal data, rather than on consumer-facing applications [03:18].

AI-Native Spreadsheets

Chase highlighted an AI-native spreadsheet as an example: the user fills in column headers and then, with a “click and drag” gesture similar to an Excel macro, a separate agent is spun up for each cell to populate the spreadsheet [43:55]. Many different agents execute in parallel [44:10]. This type of application could be useful for sports analytics, since many tasks run concurrently and users inspect the results afterwards, rather than interacting through a constant chat [44:11].
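The core pattern is one agent task per cell, all run concurrently. Below is a minimal sketch of that pattern, not the product Chase described: the column and row names are made up, and the `fill_cell` coroutine is a stub standing in for a real LLM/agent call.

```python
import asyncio

COLUMNS = ["team", "record", "leading_scorer"]   # user-defined column headers
ROWS = ["Warriors", "Celtics", "Nuggets"]        # rows covered by the "click and drag"

async def fill_cell(row: str, column: str) -> str:
    """Stand-in for a per-cell agent: a real system would call an LLM or
    tool-using agent to research `column` for `row` and return the value."""
    await asyncio.sleep(0.1)  # placeholder for LLM/tool latency
    return f"<{column} for {row}>"

async def fill_spreadsheet() -> dict[tuple[str, str], str]:
    # One task per cell; every cell is filled concurrently rather than
    # through a turn-by-turn chat interaction.
    tasks = {
        (row, col): asyncio.create_task(fill_cell(row, col))
        for row in ROWS
        for col in COLUMNS
    }
    return {cell: await task for cell, task in tasks.items()}

cells = asyncio.run(fill_spreadsheet())
for (row, col), value in cells.items():
    print(f"{row:10s} {col:15s} {value}")
```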

Inference Costs

For AI-native spreadsheets and similar applications, if an agent is spun up for every cell or task, inference costs could “add up really quickly” [44:55]. The expectation, however, is that costs and latency will fall over time, with OpenAI having a good track record of cutting prices [45:00]. The advice for startups is to “try to build it with GPT-4” and focus on reaching product-market fit before worrying about cost, since costs will come down [45:22]. This aligns with the motto “no GPUs before PMF” (product-market fit) [45:44].
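To see why costs add up quickly in the agent-per-cell pattern, here is a back-of-envelope estimate. Every number below (spreadsheet size, calls per cell, tokens per call, price per token) is an illustrative placeholder, not a figure from the episode or real pricing.

```python
# Back-of-envelope inference cost for an agent-per-cell spreadsheet.
rows, columns = 500, 10
calls_per_cell = 3                # an agent may make several LLM calls per cell
tokens_per_call = 2_000           # prompt + completion, combined (assumed)
price_per_1k_tokens = 0.01        # assumed blended $/1K tokens

total_calls = rows * columns * calls_per_cell
total_tokens = total_calls * tokens_per_call
cost = total_tokens / 1_000 * price_per_1k_tokens

print(f"{total_calls:,} LLM calls, {total_tokens:,} tokens, ~${cost:,.0f} per run")
# With these placeholder numbers: 15,000 calls, 30,000,000 tokens, ~$300 per run.
```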