From: redpointai

When working with AI models, the recommended approach is to “start small and work your way up” [00:00:01], justifying each step with rigorous Return on Investment (ROI) analysis so that effort goes toward problems that actually matter [00:00:02].

Initial Approach: Experimentation and Benchmarking

The initial phase involves running experiments and establishing benchmarks [00:00:08]. Expect the first benchmarks to be inadequate; they will need to be improved over time [00:00:09].

The journey often begins with minimal expenditure — spending a small amount on a platform such as OpenAI, or on Llama via Databricks — as a “litmus test” of whether AI is suited to a particular task [00:00:12]. It is difficult to predict upfront whether AI will be effective for a specific use case [00:00:19].

Instead, the approach should be scientific:

“You’re a scientist, this is data science in the literal sense, go run an experiment and try it.” [00:00:25]

To maximize the chance of success, one should:

  • Try the experiment on the best model available [00:00:31].
  • Prompt the model directly [00:00:33].
  • Manually provide helpful documents in the context, even without Retrieval-Augmented Generation (RAG), and observe the outcome [00:00:34] (see the sketch after this list).
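A minimal sketch of this kind of first experiment is shown below, using the OpenAI Python client. The model name, file paths, and question are illustrative placeholders, not details from the conversation.

```python
# Quick "litmus test": prompt a strong hosted model directly, with a few
# helpful documents pasted into the context by hand (no RAG pipeline yet).
# Assumes OPENAI_API_KEY is set; model name and file paths are examples.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Documents a human believes are relevant, chosen manually for the experiment.
docs = [
    Path("docs/pricing_policy.md").read_text(),
    Path("docs/refund_faq.md").read_text(),
]
context = "\n\n---\n\n".join(docs)

response = client.chat.completions.create(
    model="gpt-4o",  # use the best model available for the first experiment
    messages=[
        {"role": "system",
         "content": "Answer using only the reference documents provided."},
        {"role": "user",
         "content": f"Reference documents:\n{context}\n\n"
                    "Question: What is the refund window for annual plans?"},
    ],
)

print(response.choices[0].message.content)
```

If the answers look promising on a handful of hand-picked examples, that is the signal to invest in a more systematic benchmark and heavier machinery.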

Scaling Up: Advanced Techniques and Fine-tuning

If the initial experiments show potential value, the next step is to escalate in complexity [00:00:41]. This may mean adopting more advanced techniques, such as “hardcore RAG,” to bring in proprietary data, since off-the-shelf models cannot access internal enterprise information on their own [00:00:45].
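The sketch below shows retrieval-augmented generation in its simplest form: embed document chunks, retrieve the most similar ones for a query, and pass them to the model. This is only an assumed illustration of the idea; a production (“hardcore”) RAG system would add chunking strategy, a vector database, re-ranking, and evaluation. Model names and example text are placeholders.

```python
# Minimal RAG sketch: embed chunks, retrieve by cosine similarity, answer.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Proprietary snippets the base model has never seen (illustrative).
chunks = [
    "Annual plans can be refunded within 30 days of purchase.",
    "Enterprise contracts are governed by a separate master agreement.",
    "Support tickets are answered within one business day.",
]

def embed(texts):
    # text-embedding-3-small is one of OpenAI's embedding models.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunk_vecs = embed(chunks)

def retrieve(query, k=2):
    # Rank chunks by cosine similarity to the query embedding.
    qv = embed([query])[0]
    sims = chunk_vecs @ qv / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(qv))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

question = "How long do customers have to request a refund on an annual plan?"
context = "\n".join(retrieve(question))

answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": f"Context:\n{context}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```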

If value continues to be demonstrated, fine-tuning becomes a viable option [00:00:53]. Fine-tuning lets the specialized knowledge be “baked” directly into the model [00:00:55].
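As one way this can look in practice, the sketch below prepares a small JSONL training set and launches a fine-tuning job through OpenAI's fine-tuning API. This is an assumed example of the workflow, not the speaker's setup; the training examples, file name, and base model are placeholders, and other platforms (including Databricks) offer equivalent fine-tuning workflows.

```python
# Sketch of "baking" specialized knowledge into a model via fine-tuning,
# using OpenAI's fine-tuning API as one example provider.
import json
from openai import OpenAI

client = OpenAI()

# Each training example is a short chat transcript showing the desired behavior.
examples = [
    {"messages": [
        {"role": "user", "content": "What is the refund window for annual plans?"},
        {"role": "assistant",
         "content": "Annual plans can be refunded within 30 days of purchase."},
    ]},
    # ... many more examples, ideally drawn from real, validated interactions
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the training file, then launch the fine-tuning job on a tunable base model.
training_file = client.files.create(file=open("train.jsonl", "rb"),
                                    purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id,
                                     model="gpt-4o-mini-2024-07-18")
print(job.id, job.status)
```

The resulting fine-tuned model is then called by name in place of the base model, which is where the upfront training cost can pay back in quality and prompt simplicity.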

Benefits of Fine-tuning

Fine-tuning typically carries higher upfront costs [00:00:57], but it generally leads to better quality [00:00:58].