From: aidotengineer
Mike Conover, founder and CEO of Brightwave, describes how knowledge agents are transforming financial workflows by automating complex research and analysis tasks that are beyond human scale [00:00:20].
The Challenge in Financial Research
Finance professionals face significant challenges when conducting research and due diligence [00:00:27]:
- Due Diligence: In competitive deal processes, analysts must quickly gain conviction from data rooms containing thousands of pages of content, identify critical risk factors, and do so under extreme time pressure [00:00:31].
- Earnings Season: Mutual fund analysts cover 80-120 companies, sifting through calls, transcripts, and filings to understand market dynamics at both sector and individual ticker levels [00:00:47].
- Contract Analysis: In confirmatory diligence, reviewing hundreds of vendor contracts to spot early termination clauses or understand portfolio-wide negotiation themes is a “non-trivial problem” [00:01:08].
These tasks often exceed what any individual analyst can process, pushing junior analysts into a “meat grinder” of impossible demands and tight deadlines [00:01:21]. The human cost of performing this work manually is substantial [00:02:04].
The Agentic Solution
The emergence of AI agents and knowledge agents, like Brightwave’s research agent, addresses these challenges [00:03:05]. These systems can digest vast volumes of content and perform meaningful work, accelerating efficiency and time-to-value by orders of magnitude [00:03:13].
Historical Parallel: Spreadsheets
The shift in finance workflows mirrors the impact of computational spreadsheets in the late 1970s [00:02:19]. Before spreadsheets, accountants manually “ran the numbers” on physical ledger paper, a cognitively demanding, important, and time-intensive job [00:02:30]. Today, no one wants that manual job because tools have drastically increased the sophistication of thought that can be applied to financial analysis [00:02:40]. Similarly, AI agents using humanlike interfaces and workflows are elevating the level of analysis possible in finance [00:02:50].
Technical Considerations for Agentic Systems
Reasoning and Error
Non-reasoning models primarily perform greedy local searches, which compounds into significant error rates when calls are chained together. For instance, a 5-10% error rate in extracting organizations from an article compounds exponentially across a multi-step process [00:04:10]. The winning systems will perform end-to-end Reinforcement Learning (RL) over tool-use calls, where API call results inform the sequence of decisions, allowing locally suboptimal decisions that achieve globally optimal outputs [00:04:36]. This remains an open research problem [00:04:52].
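The compounding effect can be made concrete with a little arithmetic. A minimal sketch: the 5-10% per-step error rates come from the talk, but the step counts below are illustrative assumptions, not figures from the source.

```python
# Illustration of how a modest per-step error rate compounds across a
# chained extraction pipeline. Per-step error rates of 5-10% are the
# figures cited in the talk; the step counts are hypothetical.

def chain_accuracy(per_step_error: float, steps: int) -> float:
    """Probability that every step in an independent chain succeeds."""
    return (1.0 - per_step_error) ** steps

for error in (0.05, 0.10):
    for steps in (1, 5, 10):
        print(f"error={error:.0%} steps={steps:2d} "
              f"end-to-end accuracy={chain_accuracy(error, steps):.1%}")
```

Even at 10% per-step error, a ten-step pipeline succeeds end-to-end barely a third of the time, which is why purely greedy chaining breaks down on long workflows.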
Product Design and User Experience
A critical design challenge is how to reveal the thought process of a system that has considered 10,000 pages of content in a useful and legible way to a human [00:03:40]. The final form factor for such products has not been determined, and simple chat interfaces are likely insufficient [00:03:57].
Instead of expecting users to become “prompting experts” (which can take thousands of hours), products should provide scaffolding to orchestrate workflows and shape system behavior [00:07:22]. Verticalized product workflows are likely to endure because they explicitly define user intent [00:07:41].
Mimicking Human Decisions
Basic autonomous agents should mimic the human decision-making process [00:08:00]. This involves decomposing tasks such as:
- Assessing content: From SEC filings, earnings transcripts, knowledge graphs, or news sources [00:08:12].
- Identifying relevant document sets: Determining which documents pertain to the query [00:08:30].
- Distilling findings: Extracting information that supports hypotheses or investment theses [00:08:32].
- Enriching and Error Correcting: This includes adding intermediary notes (“think out loud”) about what the system believes based on initial findings [00:08:43]. Models can self-correct by being asked to verify factual entailment or object classification [00:09:19].
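The decomposition above can be sketched as a pipeline. This is a minimal illustration, not Brightwave's implementation: every function name is hypothetical, and each stage is stubbed where a real agent would call an LLM or retrieval system.

```python
# Sketch of the decomposed decision process described above. Stages are
# stubbed so the control flow is runnable; in practice each would invoke
# a model or search index.

from dataclasses import dataclass, field

@dataclass
class Finding:
    claim: str
    source: str
    notes: list[str] = field(default_factory=list)  # "think out loud" notes
    verified: bool = False

def identify_relevant_docs(query: str, corpus: list[str]) -> list[str]:
    # Stub: keep documents mentioning any query term.
    terms = query.lower().split()
    return [d for d in corpus if any(t in d.lower() for t in terms)]

def distill_findings(query: str, docs: list[str]) -> list[Finding]:
    # Stub: a real agent would extract claims supporting the thesis.
    return [Finding(claim=f"evidence for '{query}'", source=d) for d in docs]

def enrich_and_verify(finding: Finding) -> Finding:
    # Stub: ask the model to check factual entailment of its own claim.
    finding.notes.append("checked claim against source text")
    finding.verified = True
    return finding

def run_agent(query: str, corpus: list[str]) -> list[Finding]:
    docs = identify_relevant_docs(query, corpus)
    return [enrich_and_verify(f) for f in distill_findings(query, docs)]
```

The point of the structure is that each stage produces an inspectable artifact (document set, raw findings, verification notes), which is what makes the later self-correction step possible.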
Synthesis and Limitations
Synthesis, the process of weaving together disparate fact patterns from multiple documents into a coherent narrative, is crucial [00:09:55]. However, current models face limitations:
- Output Length: Models often struggle to produce very long, coherent outputs (e.g., 50,000 tokens) because training data lacks such extensive human-generated content [00:13:10].
- Compression Problem: Large input context windows compress information, leading to less specific outputs [00:13:50]. Decomposing research instructions into multiple sub-themes can yield higher quality, more information-dense results [00:14:27].
- Combinative Reasoning: Models lack sufficient training demonstrations for complex combinative reasoning across multiple documents, making it hard to generate truly thoughtful analysis [00:14:43].
- Complex World Situations: Models struggle with factors like temporality (e.g., understanding changes from mergers or contract addendums) [00:15:44].
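The sub-theme decomposition mentioned under the compression problem can be sketched as follows. This is an assumed shape, not the talk's implementation: `ask_model` and `retrieve` are hypothetical stand-ins for an LLM call and a retrieval system.

```python
# Sketch of decomposing one broad research instruction into focused
# sub-theme queries, each run over a small theme-specific context,
# instead of one prompt over a huge compressed context window.

from typing import Callable

def ask_model(prompt: str, context: list[str]) -> str:
    # Placeholder: a real implementation would call an LLM API.
    return f"summary of {len(context)} docs for: {prompt}"

def research(broad_instruction: str, sub_themes: list[str],
             retrieve: Callable[[str], list[str]]) -> dict[str, str]:
    """Run one focused query per sub-theme rather than one broad query."""
    results = {}
    for theme in sub_themes:
        context = retrieve(theme)  # small, theme-specific context
        results[theme] = ask_model(f"{broad_instruction}: {theme}", context)
    return results
```

Because each call sees only the documents relevant to its theme, the model spends its output budget on specifics rather than on summarizing an over-broad context.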
Human Oversight and Product Interface
Human oversight remains extremely important [00:10:04]. The ability to “nudge” the model with directives or select interesting threads to explore is crucial because human analysts possess information not yet digitized, such as conversations with management or insights from portfolio managers [00:10:09].
The anthropomorphizing of systems (e.g., “portfolio manager agent”) can constrain flexibility [00:10:46]. Instead, a “Unix philosophy” approach with simple tools that do one thing well and work together via a universal interface (like text) is preferred [00:11:01].
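The Unix-philosophy framing can be sketched directly: small single-purpose tools composed through a universal text interface. The tool names here are hypothetical examples; in practice each would wrap a model or data source.

```python
# Sketch of text-in/text-out tools composed like a shell pipeline.
import re

def extract_dates(text: str) -> str:
    # One job: pull ISO-format dates out of text, one per line.
    return "\n".join(re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text))

def sort_lines(text: str) -> str:
    # One job: sort lines lexicographically.
    return "\n".join(sorted(text.splitlines()))

def pipeline(text: str, *tools) -> str:
    # Compose tools the way a shell pipe does: text in, text out.
    for tool in tools:
        text = tool(text)
    return text

doc = "Signed 2021-06-30. Amended 2020-01-15."
print(pipeline(doc, extract_dates, sort_lines))
# 2020-01-15
# 2021-06-30
```

Because every tool shares the same interface, new capabilities compose without any tool needing to know about the others, which is exactly the flexibility a monolithic "portfolio manager agent" gives up.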
The “latency trap” is a key concern: if the feedback loop for user interaction is too long (e.g., 8-20 minutes), users cannot refine their mental models of the system or develop fluency [00:12:00].
Brightwave’s Product Approach
Brightwave aims to reveal the agent’s thought process by presenting information as a “surface” rather than just a chat [00:16:41]. Key features include:
- Details on Demand: Users can click on citations to get additional context about a finding, including what the model was “thinking” [00:17:30].
- Interactive Outputs: Structured outputs allow users to “pull the thread” on specific findings [00:17:52].
- Continuous Surface: Users can highlight any passage of text to ask for more details or implications [00:18:00].
- Audit Trail: The system provides an “audit trail” by laying out all findings (e.g., fundraising timelines, litigation details) discovered from reviewing documents, allowing users to drill in on points of interest [00:19:00]. This “magnifying glass for text” empowers the analyst’s “taste” in identifying critical information [00:19:26].
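One way to model the "details on demand" pattern is to attach a source citation and the intermediate reasoning to every finding, so a click can reveal both. This schema is a hypothetical sketch, not Brightwave's actual data model.

```python
# Sketch: findings that carry a citation back to a source span plus the
# model's intermediate reasoning, so a UI can reveal context on demand.

from dataclasses import dataclass

@dataclass
class Citation:
    document: str
    start: int  # character offsets into the source document
    end: int

@dataclass
class CitedFinding:
    summary: str
    citation: Citation
    reasoning: str  # what the model was "thinking" when it extracted this

def details_on_demand(finding: CitedFinding,
                      corpus: dict[str, str]) -> dict[str, str]:
    """Return the extra context a click on a citation would reveal."""
    doc = corpus[finding.citation.document]
    return {
        "quote": doc[finding.citation.start:finding.citation.end],
        "reasoning": finding.reasoning,
    }
```

Storing character offsets rather than quoted strings keeps the citation verifiable against the original document, which is what makes the audit trail trustworthy.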
The final form factor for this class of products is still evolving [00:19:47]. Brightwave continues to build AI agents for financial research and due diligence, and is hiring for roles including product designer and front-end engineer [00:19:57]. Conover believes the most powerful products will lean into the analyst's taste-making abilities [00:10:31].