From: aidotengineer

Automating real estate due diligence applies AI to streamline the review of properties before they are bought or sold [02:01:58]. The approach aims to accelerate property transactions by tackling the traditionally manual and time-consuming task of legal due diligence [02:02:44].

The Challenge of Traditional Due Diligence

Real estate due diligence often involves lawyers reading “mountains of paperwork” to find “needles in a haystack” for potential red flags before properties can be transacted [02:17:34]. This process is slow and labor-intensive [03:48:38].

Orbital Co-Pilot: An AI Solution

Orbital, a company with offices in New York and London, developed “Orbital Co-Pilot” to address this challenge [01:56:07]. Its mission is to automate real estate due diligence [02:01:58]. Launched in January 2024, Orbital Co-Pilot is an agentic product designed to think like a real estate lawyer [03:14:02].

How Orbital Co-Pilot Works

The system automates tasks typically performed manually by lawyers, such as reading paperwork and compiling extracted information [03:45:00].

A typical workflow involves:

  1. Report Selection: Users choose an appropriate report, such as an occupational lease report [04:01:23].
  2. Document Upload: Users upload legal documents, like deeds and leases, which can total hundreds of pages [04:07:07].
  3. OCR and Structuring: Documents are first processed using Optical Character Recognition (OCR) to structure handwritten and typed text [04:16:38].
  4. Agentic Processing: The agentic system creates a plan and breaks it down into multiple subtasks [04:30:05]. Each subtask is its own agentic system with multiple LLM (Large Language Model) calls [04:37:37]. The system is given objectives, such as finding the lease date or annual rent, and reads the legal documents to find the answers [04:41:40] (a sketch of this step follows the list).
  5. Report Generation: Once tasks are completed, a final report is generated, which can be quickly reviewed by a lawyer [04:56:00]. Citations within the report can be clicked to go back to the original source documents [05:07:08].
  6. Export: The report can be downloaded as a Word document for storage and client delivery [05:16:04].
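
A minimal sketch of what the agentic processing in step 4 might look like, assuming a hypothetical `llm_complete` helper that wraps a chat-completion API; the objectives, report shape, and citation handling are illustrative, not Orbital’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    objective: str
    answer: str
    citation: str  # pointer back to the source page or clause

def llm_complete(prompt: str) -> str:
    """Placeholder for a chat-completion call to whichever model is in use."""
    raise NotImplementedError

def run_subtask(objective: str, documents: list[str]) -> Finding:
    # Each subtask is its own small agentic loop: read the documents,
    # extract the answer, and keep a citation for the final report.
    prompt = (
        f"Objective: {objective}\n"
        "Read the legal documents below, answer the objective, and quote "
        "the clause you relied on so it can be cited.\n\n"
        + "\n\n".join(documents)
    )
    answer = llm_complete(prompt)
    return Finding(objective=objective, answer=answer, citation="<source clause>")

def generate_report(documents: list[str]) -> list[Finding]:
    # The "plan": a list of objectives a real estate lawyer would check.
    objectives = ["Lease date", "Annual rent", "Term length", "Break clauses"]
    return [run_subtask(obj, documents) for obj in objectives]
```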

Impact and Scale

Orbital’s agentic product has significantly scaled its operations:

  • Token Consumption: In 18 months, token consumption grew from less than a billion to almost 20 billion tokens per month, representing work previously done manually by lawyers [05:34:00].
  • Revenue Growth: The company scaled from zero revenue to multiple seven figures in annual recurring revenue within 18 months [06:06:04].

Developing at the AI Frontier: Key Decisions

Orbital’s development journey involved navigating the rapidly evolving AI landscape [00:37:05]. They transitioned from System 1 models (like GPT-3.5 and GPT-4 32K) to System 2 models (like o1-preview and o4-mini) [06:27:06].

Key decisions made:

  1. Optimizing for Prompting over Fine-tuning: This allowed for maximum speed of development. User feedback could lead to prompt adjustments that were pulled into the product in real time, making changes easy to incorporate [07:00:23] (a sketch of this runtime prompt loading follows this list).
  2. Heavy Reliance on Domain Experts: Private practice real estate lawyers, with decades of experience, were embedded in the team to write many of the prompts [07:34:01]. They effectively teach the AI system their expertise [07:49:10].
  3. “Vibes Over Evals”: Instead of a rigorous, objective evaluation system, Orbital initially relied on human beings (often domain experts) testing the system subjectively [07:57:33]. This approach, combined with user feedback and internal team testing, has contributed to significant growth [08:12:08].
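
A minimal sketch of the “prompting over fine-tuning” choice in decision 1, under the assumption that prompts live in a store the application reads at request time; the `PromptStore` and `llm_complete` names are hypothetical. A domain expert edits the prompt text and the very next request picks it up, with no retraining or redeploy.

```python
class PromptStore:
    """Stand-in for a database or config service that holds prompt templates."""
    def __init__(self) -> None:
        self._prompts: dict[str, str] = {}

    def set(self, name: str, template: str) -> None:
        self._prompts[name] = template

    def get(self, name: str) -> str:
        return self._prompts[name]

def llm_complete(prompt: str) -> str:
    """Placeholder for a chat-completion call."""
    raise NotImplementedError

store = PromptStore()
store.set("annual_rent", "Find the annual rent in the lease below:\n{document}")

def answer_objective(name: str, document: str) -> str:
    # The latest prompt text is fetched on every call, so an expert's edit
    # takes effect immediately without a model retrain or a redeploy.
    prompt = store.get(name).format(document=document)
    return llm_complete(prompt)
```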

Orbital has a large and growing number of domain-specific prompts (over 1,000), which presents the challenge of “prompt tax” [09:18:14].

The Concept of “Prompt Tax”

“Prompt tax” is the hidden cost incurred when integrating new AI model functionalities into applications [01:00:00]. It arises from the need to migrate existing prompts to new models and the inherent fear of unknown consequences or regressions [10:04:12]. Unlike technical debt, prompt tax is driven by a desire to upgrade and unlock new capabilities now [10:59:16].

Battle-Tested Tactics for AI Development

Orbital has developed several tactics to manage the challenges of rapid AI model evolution:

  • Prompt Adaptation for System 2 Models (see the prompt sketch after this list):
    • Specify what to do, not how: Unlike System 1 models, which require specific step-by-step instructions, System 2 models need only a clear objective [12:12:04].
    • Leaner Prompts: Repetitive instructions used for System 1 models can be removed [12:26:01].
    • Unblocking the Model: Avoid too many constraints; allow System 2 models time to think and reason [12:40:02].
  • Leveraging Thought Tokens: System 2 models’ thought tokens can be used for explainability for users (especially in complex legal matters) or for debugging when issues arise [13:13:03].
  • Progressive Delivery with Feature Flags: Similar to software development, new AI model upgrades can be rolled out progressively to mitigate risk [13:46:04].
  • “Betting on the Model” Mantra: The team operates on the principle that AI models will get smarter, cheaper, faster, and more capable in the future. This allows them to build features that will improve as models evolve [14:55:03].
  • AI-Assisted Prompt Migration: System 2 models can help migrate domain-specific prompts from older models, significantly reducing manual effort [15:44:03] (a sketch follows this list).
  • Embracing Uncertainty and Shipping: Given the probabilistic nature of AI models, teams must be brave enough to ship new models despite unknowns, and then deal with consequences by mitigating risks on the fly [16:11:00].
  • Strong Feedback Loops: Building systems to receive rapid user feedback (e.g., thumbs up/down) and deliver it immediately to AI engineers and domain experts allows for quick identification and fixing of issues [17:09:08]. This enables fixes in minutes or hours, rather than days or weeks [17:48:00]. This contributes to strong customer success.
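
An illustrative before/after for the prompt-adaptation tactic, with invented wording: the System 1 style spells out how to do the task step by step, while the System 2 style states only the objective and the required citation and leaves the reasoning to the model.

```python
# Invented example prompts contrasting System 1 and System 2 styles.
SYSTEM_1_STYLE_PROMPT = """You are a real estate lawyer. Follow these steps exactly:
1. Scan the lease for the words "rent", "rental" or "sum".
2. If several figures appear, prefer the one described as annual.
3. Express the figure as "£X per annum".
4. If you cannot find it, answer exactly "Not found".
Do not speculate. Do not add commentary.
Lease: {document}"""

SYSTEM_2_STYLE_PROMPT = """Objective: state the annual rent payable under this lease,
citing the clause you relied on.
Lease: {document}"""
```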
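
A sketch of the AI-assisted prompt migration tactic, assuming a hypothetical `llm_complete` wrapper around a reasoning model: each legacy prompt is handed to the newer model with instructions to produce a leaner, objective-style version, and a domain expert reviews the draft before it ships.

```python
def llm_complete(prompt: str) -> str:
    """Placeholder for a call to a reasoning (System 2) model."""
    raise NotImplementedError

MIGRATION_INSTRUCTIONS = (
    "Rewrite the legacy prompt below for a reasoning model: keep the legal "
    "objective and output format, drop repetitive step-by-step instructions, "
    "and state what to find rather than how to find it.\n\n"
    "Legacy prompt:\n{legacy_prompt}"
)

def migrate_prompt(legacy_prompt: str) -> str:
    draft = llm_complete(MIGRATION_INSTRUCTIONS.format(legacy_prompt=legacy_prompt))
    return draft  # a domain expert still reviews every draft before release

drafts = [migrate_prompt(p) for p in ["<legacy prompt 1>", "<legacy prompt 2>"]]
```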

The Future of AI Engineering

Demis Hassabis, CEO of Google DeepMind, notes the unique challenge of the AI space: the underlying tech stack is evolving incredibly fast, making it difficult to bet on product features [18:14:48]. This requires deeply technical product people who can anticipate where the technology will be in the future [19:05:07].

There is a growing opportunity for “product AI engineers” who understand both customer problems and model capabilities, and can turn those capabilities into valuable product features [19:57:33]. This connective tissue, whether embodied in a single AI engineer or spread across a team, is a promising proposition for the future of the AI engineering community [20:13:01].

Paying the Prompt Tax: Shipping with Confidence

As AI moves faster and agentic product surface areas grow, having more confidence in shipping new models is crucial for continued innovation [20:44:00]. While “vibes” have worked for Orbital thus far, thanks to real-time user feedback and quick tooling, it is an open question whether this scales indefinitely [21:06:00].

Building an objective evaluation system (eval system) for complex, probabilistic LLM outputs with many edge cases (correctness, style, conciseness, citations in legal documents) might be prohibitively expensive or slow to implement [21:32:00].

Progressive delivery—rolling out new models internally first, then to limited users, and gradually scaling—allows for fixing issues on the fly and calibrating rollout based on feedback volume [22:30:00]. This strategy aims to maximize the benefits of new models while mitigating risks [11:37:00]. The emphasis remains on “buy now” to stay on the edge of the AI frontier and maximize opportunity [23:37:39].
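
A minimal sketch of that progressive-delivery idea, with invented stage names and percentages: a feature flag routes a growing share of traffic to the new model, internal users go first, and the rollout only widens while negative feedback stays low.

```python
import random

# Invented stages: fraction of customer traffic routed to the new model.
ROLLOUT_FRACTION = {"internal": 0.0, "limited": 0.05, "general": 0.5, "full": 1.0}

def pick_model(stage: str, user_is_internal: bool) -> str:
    """Route a request to the new or the current model for the given stage."""
    if user_is_internal:
        return "new-model"  # the internal team dogfoods the new model first
    if random.random() < ROLLOUT_FRACTION[stage]:
        return "new-model"
    return "current-model"

def should_widen_rollout(negative_feedback_rate: float, threshold: float = 0.02) -> bool:
    """Advance to the next stage only while thumbs-down volume stays below a threshold."""
    return negative_feedback_rate < threshold
```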