From: aidotengineer
Bloomberg, a company with nearly 15 years of investment in AI, has evolved its strategy for integrating AI into its operations and product development, particularly with the advent of large language models (LLMs) and agentic architectures [00:00:42].
Evolution of AI Development at Bloomberg
Initially, Bloomberg undertook the ambitious project of building its own large language model, a process that occupied the entirety of 2022 [00:00:46]. This effort provided significant learning on model construction, data organization, and performance evaluation [00:00:57]. However, with the rapid advancements in the open-source community, particularly after the emergence of ChatGPT, Bloomberg strategically pivoted [00:01:05]. The focus shifted to building products on top of existing external models, leveraging the vast array of use cases within Bloomberg [00:01:14].
Bloomberg’s AI Organizational Structure
Bloomberg’s AI efforts are organized as a specialized group reporting to the Global Head of Engineering [00:01:36]. This group collaborates extensively with Bloomberg’s robust data organization, as well as product and CTO teams in cross-functional settings [00:01:47]. The AI team comprises approximately 400 people across 50 teams located in London, New York, Princeton, and Toronto [00:01:58].
Approach to Generative AI Product Development
Bloomberg has been building products using generative AI for 12 to 16 months and is now starting to ship more agentic tools [00:02:12]. A key characteristic of their current products is their “semi-agentic” nature [00:08:57]. This means some components operate autonomously while others do not, reflecting a cautious approach due to a lack of full trust in complete autonomy [00:09:03].
Non-Negotiable Principles
When implementing AI in their products, especially in the financial sector, Bloomberg adheres to strict non-negotiable principles [00:06:15]:
- Precision and Comprehensiveness: Ensuring high accuracy and completeness of information [00:06:25].
- Speed and Throughput: Delivering information quickly and efficiently [00:06:27].
- Availability: Ensuring continuous access to services [00:06:29].
- Data Protection: Safeguarding contributor and client data [00:06:30].
- Transparency: Maintaining clear visibility throughout the AI-driven processes [00:06:34].
These principles act as crucial “guard rails” in their AI systems. For instance, any system must prevent offering financial advice, reflecting a core business constraint [00:09:11].
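The talk does not show implementation details, but a guard rail like “never offer financial advice” is typically enforced as a deterministic check that runs on every output path, independent of the LLM. The sketch below is a minimal, hypothetical illustration; the patterns and function names are assumptions, not Bloomberg's actual system.

```python
import re

# Illustrative patterns for detecting advice-like language.
# A production guardrail would be far more sophisticated (e.g. a
# dedicated classifier), but the enforcement pattern is the same.
ADVICE_PATTERNS = [
    r"\byou should (buy|sell|hold)\b",
    r"\bi recommend (buying|selling|holding)\b",
]

def violates_advice_guardrail(text: str) -> bool:
    """Return True if the text appears to offer financial advice."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in ADVICE_PATTERNS)

def apply_guardrails(candidate: str) -> str:
    # The guardrail is non-optional: it runs on every candidate
    # response before anything reaches the user.
    if violates_advice_guardrail(candidate):
        return "I can't provide financial advice."
    return candidate
```

Because the check is deterministic code rather than a prompt instruction, it holds regardless of how the upstream model behaves.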
Scaling AI Initiatives
Bloomberg’s approach to scaling AI initiatives focuses on two main aspects: dealing with system fragility and evolving organizational structure [00:09:39].
Addressing Fragility and Evolving Systems
The composition of LLMs into agents significantly multiplies potential errors, leading to fragile behavior [00:11:22]. Unlike traditional software with well-defined APIs and predictable outcomes, or even early machine learning models with manageable stochasticity, agentic architectures introduce high variability [00:10:07].
To counteract this, Bloomberg emphasizes building resilient systems by not relying on upstream systems for perfect accuracy [00:14:23]. Instead, they factor in the inherent fragility and continuous evolution of upstream components [00:14:26]. This involves implementing independent safety checks and robust monitoring (e.g., MLOps, CI/CD for remediation workflows and circuit breakers) within each agent [00:08:21]. This mindset allows individual agents to evolve faster without complex, time-consuming handshake signals or sign-offs from every downstream consumer before release [00:15:00].
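One common mechanism behind the circuit breakers mentioned above is to stop calling an upstream component after repeated failures and fail fast until a cooldown elapses, so one fragile agent cannot cascade errors downstream. The sketch below is a generic, minimal circuit breaker under assumed thresholds; it is not Bloomberg's implementation.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive errors,
    "open" the circuit and reject calls until the cooldown elapses."""

    def __init__(self, max_failures: int = 3, cooldown_seconds: float = 30.0):
        self.max_failures = max_failures
        self.cooldown = cooldown_seconds
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (healthy)

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                # Fail fast instead of hammering a broken upstream.
                raise RuntimeError("circuit open: upstream temporarily disabled")
            # Cooldown elapsed: close the circuit and allow a trial call.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

Wrapping each upstream call this way lets an agent degrade gracefully (e.g. return a partial answer or an explicit error) instead of propagating upstream fragility.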
Organizational Structure for Scaling
Bloomberg has rethought its organizational structure to accommodate the demands of new AI tech stacks and products [00:15:57].
- Initial Phase (Vertical Alignment): In the early stages of product design, when understanding is limited and rapid iteration is crucial, Bloomberg favors vertically aligned teams [00:16:46]. This approach encourages fast iteration and sharing of code, data, and models within a dedicated team focused on a specific product or agent [00:16:50].
- Mature Phase (Horizontal Alignment): As the understanding of a product or agent’s use cases matures and more agents are built, the organization transitions to more horizontal structures [00:17:10]. This allows for optimization, performance increases, cost reduction, improved testability, and enhanced transparency [00:17:29]. Shared services, like universal guard rails (e.g., preventing financial advice), are centralized horizontally to avoid redundant efforts across numerous teams [00:17:39]. This also enables the breaking down of monolithic agents into smaller, more manageable pieces [00:18:12].
Example: Research Analyst Agent
For a research analyst, Bloomberg’s AI architecture for query understanding and answer generation is highly factorized [00:18:24]. An agent deeply understands user queries and session context to determine the necessary information [00:18:28]. This is then dispatched to specialized tools, often with an NLP front-end, to fetch structured data [00:13:36]. Answer generation is also a distinct, factored-out process with rigorous standards for well-formed answers [00:18:41]. Non-optional guard rails are called at multiple points, ensuring no autonomy where critical principles are concerned [00:18:52]. The system also leverages years of traditional and modern data wrangling techniques, including hybrid indices [00:18:59].
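The factorized flow described above (query understanding, dispatch to specialized tools, separate answer generation, with non-optional guard rails in between) can be sketched as follows. All function names, tool routes, and data are hypothetical placeholders for illustration only.

```python
def understand_query(query: str, session_context: dict) -> dict:
    # In the real system an LLM resolves the query against session
    # context; here we simply route on a keyword for illustration.
    tool = "structured_data" if "price" in query.lower() else "search"
    return {"query": query, "tool": tool}

def dispatch_to_tool(plan: dict) -> list:
    # Specialized tools, often with an NLP front end, fetch the data.
    tools = {
        "structured_data": lambda q: [{"field": "price", "value": 190.0}],
        "search": lambda q: [{"snippet": "relevant document text"}],
    }
    return tools[plan["tool"]](plan["query"])

def check_guardrails(text: str) -> str:
    # Placeholder for the centralized, non-optional guardrail calls
    # (e.g. blocking financial advice) made at multiple points.
    if "you should buy" in text.lower():
        raise ValueError("guardrail violation: financial advice")
    return text

def answer(query: str, session_context: dict) -> str:
    plan = understand_query(query, session_context)
    evidence = dispatch_to_tool(plan)
    # Answer generation is a distinct, factored-out step.
    draft = f"Based on retrieved data: {evidence}"
    return check_guardrails(draft)  # guardrails run before anything is returned
```

Factoring each stage behind its own interface is what lets individual components be swapped, tested, and scaled independently, as described in the scaling section above.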