From: aidotengineer

Finance professionals face significant challenges in processing and analyzing vast amounts of unstructured data, leading to demanding workloads and potential for errors when handled manually [00:01:32]. Brightwave, a company specializing in AI-powered research agents, aims to address these issues by digesting large corpuses of financial content and performing meaningful analytical work [00:00:16][00:03:03].

The Human Cost of Manual Data Processing

The sheer volume and complexity of financial data often overwhelm human analysts. Mike Conover, founder and CEO of Brightwave, notes that much of this work is “not a human level intelligence task” [00:01:21]. Junior analysts, in particular, are often handed near-impossible assignments under extremely tight deadlines, and Conover describes a “deep sense of empathy for the stakes and the human cost of doing this work manually” [00:01:32][00:02:03].

Specific Workflow Challenges

  • Due Diligence: In a competitive deal process, professionals may encounter data rooms with thousands of pages of content. The challenge is to quickly gain conviction and identify critical risk factors that could diminish asset performance before other teams [00:00:27][00:00:37].
  • Earnings Season: Mutual fund analysts covering 80-120 names face an overwhelming volume of earnings calls, transcripts, and filings. Understanding market dynamics at both the sector and individual-ticker level is a non-trivial problem [00:00:47][00:00:58].
  • Contract Review: In confirmatory diligence, dealing with 80-800 vendor contracts requires spotting early termination clauses and understanding thematic negotiation patterns across an entire portfolio [00:01:08][00:01:14].

Historical Parallel

The current situation for finance professionals is likened to the pre-spreadsheet era for accountants in 1978 [00:02:22]. Accountants would manually “run the numbers” on physical spreadsheets, a cognitively demanding, important, and time-intensive task [00:02:30]. Just as computational spreadsheets revolutionized accounting by allowing more sophisticated thought and efficiency, AI tools are expected to do the same for financial analysis [00:02:50][00:02:56].

Technical Challenges in AI for Financial Data Processing

While AI offers immense potential for efficiency improvements, building effective AI tools for finance involves several technical hurdles:

Non-Reasoning Models and Error Propagation

Many current models perform a “greedy local search,” which creates fidelity issues. For example, a 5-10% error rate when extracting organizations from a Reuters article compounds exponentially when multiple such calls are chained together [00:04:10][00:04:27]. Winning systems will need to perform end-to-end Reinforcement Learning (RL) over tool-use calls, allowing locally suboptimal decisions in service of globally optimal outputs [00:04:36]. How to intelligently avail oneself of knowledge graphs and other tools in pursuit of those outputs, however, remains an open research problem [00:04:52][00:05:05].
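To make the compounding concrete, here is a minimal back-of-the-envelope sketch in Python. The 5-10% per-call error rates come from the talk; the chain depths and the independence assumption are illustrative simplifications, not claims about any particular system.

```python
# Probability that an n-step chain of extraction calls is entirely correct,
# assuming independent, uncorrected errors at each step (an illustrative
# simplification).

def chained_accuracy(per_call_accuracy: float, num_calls: int) -> float:
    return per_call_accuracy ** num_calls

for error_rate in (0.05, 0.10):
    per_call = 1.0 - error_rate
    for depth in (1, 5, 10, 20):
        print(f"error/call={error_rate:.0%}  depth={depth:2d}  "
              f"end-to-end accuracy={chained_accuracy(per_call, depth):.1%}")
```

At a 10% per-call error rate, a ten-step chain is right end to end only about a third of the time, which is the intuition behind preferring end-to-end optimization over greedy chaining.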

The Latency Trap

A significant challenge for agentic systems is the “latency trap” [00:11:59]. If the feedback loop for a user is too long (e.g., 8-20 minutes for a report), users cannot perform many iterations in a day, which limits their ability to develop proficiency and trust with the system [00:12:49][00:12:53]. The product needs to provide quick “impulse response” to the user’s input to facilitate learning and refinement of their mental model [00:12:34][00:12:43].
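The arithmetic behind the trap is simple. Assuming an eight-hour working day (an assumption for illustration; the 8-20 minute report times are from the talk), the daily iteration budget collapses as cycle time grows:

```python
# Rough iteration budget per working day as a function of feedback-loop length.

WORKDAY_MINUTES = 8 * 60

for cycle_minutes in (1, 5, 8, 20):
    print(f"{cycle_minutes:>2} min per response -> "
          f"~{WORKDAY_MINUTES // cycle_minutes} iterations/day")
```

At a 20-minute cycle a user gets roughly two dozen attempts per day at best, far too few to refine an accurate mental model of what the system can do.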

Limitations in Synthesis

  • Output Length: Models often struggle to produce very long, coherent, and novel outputs (e.g., 50,000 words), as the instruction tuning datasets typically have characteristic output lengths [00:13:10][00:13:25]. This means a very large input context window gets compressed into shorter, less information-dense outputs [00:13:50][00:13:58].
  • Recombinative Reasoning: The ability to synthesize disparate fact patterns from multiple documents (e.g., across biomedical literature or financial reports) is limited [00:15:05][00:15:09]. This is because demonstrations of such complex recombinative reasoning are scarce in post-training corpora [00:14:43][00:14:50].
  • Complex Real-World Situations: State-of-the-art models still have limitations in managing complex real-world situations, such as temporality (e.g., distinguishing financial statements before and after a merger) and propagating metadata that contextualizes findings [00:15:44][00:15:50][00:16:01].

Product Design and User Interaction

Designing effective AI tools for finance is not just a UI/UX or product architecture problem; it involves revealing the thought process of a system that has considered thousands of pages of content in a useful and legible way [00:03:40][00:03:49].

Mimicking Human Decision-Making

An ideal autonomous agent should mimic the human decision-making process by decomposing tasks [00:08:00]. This involves the following steps, sketched in code after the list:

  1. Looking for public market comparables (e.g., SEC filings, earnings call transcripts) [00:08:12].
  2. Assessing relevant content from knowledge graphs or news [00:08:20].
  3. Distilling findings that substantiate hypotheses [00:08:32].
  4. Enriching and error-correcting those findings [00:08:44].
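A minimal sketch of that decomposition is below. It assumes a generic llm(prompt) -> str helper and hypothetical step names; it illustrates the shape of the pipeline, not Brightwave’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    claim: str
    source: str                      # e.g. an SEC filing or call-transcript ID
    notes: list[str] = field(default_factory=list)

def research(hypothesis: str, llm) -> list[Finding]:
    # 1. Look for public-market comparables (filings, earnings call transcripts).
    comparables = llm(f"List public-market comparables relevant to: {hypothesis}")
    # 2. Assess relevant context from news or a knowledge graph.
    context = llm(f"Summarize recent news and knowledge-graph context for: {comparables}")
    # 3. Distill findings that substantiate (or refute) the hypothesis.
    raw = llm(
        "Distill findings that bear on the hypothesis, one per line.\n"
        f"Hypothesis: {hypothesis}\nContext: {context}"
    )
    findings = [Finding(claim=line.strip(), source="distillation pass")
                for line in raw.splitlines() if line.strip()]
    # 4. Enrich and error-correct each finding with a follow-up pass.
    for finding in findings:
        finding.notes.append(
            llm(f"Verify and enrich this finding, flagging any errors: {finding.claim}")
        )
    return findings
```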

User Interaction and Oversight

  • Intermediate Notes: Allowing the model to “think out loud” with intermediary notes about what it believes based on findings is extremely useful [00:08:55].
  • Self-Correction: Models can self-correct when asked to verify accuracy (e.g., “is this factually entailed by this document?”) as a secondary call [00:09:22][00:09:41]; a sketch of this check follows the list.
  • Human Oversight: Human oversight is crucial, allowing analysts to “nudge the model” with directives or by selecting interesting threads to explore [00:10:04][00:10:09]. This is because human analysts possess external information (e.g., conversations with management, portfolio manager opinions) not available to the AI [00:10:19].
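Below is a minimal sketch of the secondary entailment check described above, assuming the same kind of generic llm helper; the prompt wording is an assumption, not Brightwave’s actual prompt.

```python
def is_entailed(claim: str, document: str, llm) -> bool:
    """Secondary call: ask the model whether the claim is factually entailed
    by the cited document, and treat anything but a clear YES as a failure."""
    verdict = llm(
        "Answer strictly YES or NO. Is the following claim factually entailed "
        f"by the document?\n\nClaim: {claim}\n\nDocument:\n{document}"
    )
    return verdict.strip().upper().startswith("YES")

def filter_verified(claims_with_sources, llm):
    """Drop findings that fail verification before showing them to the analyst."""
    return [(claim, doc) for claim, doc in claims_with_sources
            if is_entailed(claim, doc, llm)]
```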

The “Surface” Interface

The final form factor for these products is still evolving, and it is unlikely to be a simple chat interface [00:03:57][00:04:01]. It’s envisioned instead as a “surface” that reveals the model’s thought process [00:16:41]. Key features include:

  • Details on Demand: The ability to click on citations for additional context, including what the model was “thinking” [00:17:30][00:17:40].
  • Interactive Outputs: Structured outputs that allow users to “pull the thread” and ask for more details on specific points [00:17:52][00:17:54].
  • Interrogating Findings: The ability to highlight any passage of text and ask for implications or further information [00:18:00][00:18:04].
  • Audit Trail: Providing visibility into all findings as a high-dimensional data structure, allowing users to “turn over that cube” and see the “receipts” or audit trail of the system’s analysis [00:18:45][00:18:59]. This allows investor/analyst “taste” to come into play, enabling them to zoom in on what catches their eye, like patent litigation or a factory fire [00:19:22][00:19:28]. A sketch of such a structure follows the list.
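One way to picture the underlying data structure is sketched below: each finding carries its citation, the model’s intermediate “thinking,” and topical metadata, so the interface can offer details on demand and let the analyst filter the audit trail. The field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    document_id: str
    passage: str          # the quoted span that backs the finding
    model_thought: str    # what the model was "thinking" when it cited this

@dataclass
class Finding:
    ticker: str
    topic: str            # e.g. "patent litigation", "factory fire"
    claim: str
    citations: list[Citation]

def zoom_in(findings: list[Finding], topic: str) -> list[Finding]:
    """Let the analyst 'turn over the cube' and pull only what catches their eye."""
    return [f for f in findings if f.topic == topic]
```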

These design patterns aim to allow AI systems to handle the complex, unstructured data challenges in finance, while maintaining human oversight and facilitating user interaction, ultimately driving greater efficiency and deeper insights.