From: redpointai

When approaching AI exploration and experimentation, the recommendation is to “always start small and work your way up” [00:00:01]. Each step up should be justified by rigorous ROI [00:00:02], ensuring continued progress on the objectives that matter [00:00:06].

Initial Experimentation and Litmus Testing

The journey into AI begins with minimal investment, such as “spending 20 cents on OpenAI or on Llama on Databricks” [00:00:12], to run a litmus test of AI’s suitability for a particular task [00:00:16]. It is currently difficult to predict whether AI will be effective for a specific use case [00:00:19].

The process should be approached scientifically, treating it as “data science in the literal sense” [00:00:26]. One should “go run an experiment and try it” [00:00:28], giving oneself the best chance of success [00:00:30]. This involves using the “best possible model you can get your hands on” [00:00:31], either by direct prompting or by manually supplying a few known-helpful documents into the context, to observe outcomes [00:00:34].
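A minimal sketch of such a litmus test, assuming the OpenAI Python SDK and an API key in the environment; the model name, documents, and question are illustrative placeholders rather than details from the talk:

```python
# Litmus test: prompt the best model available, pasting a few documents you
# already know are relevant directly into the context, and look at the output.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A few known-helpful documents, supplied manually (no retrieval system yet).
known_helpful_docs = [
    "Doc 1: ... excerpt from an internal policy page ...",
    "Doc 2: ... excerpt from a product FAQ ...",
]

question = "Can a customer on the Basic plan export their data to CSV?"

prompt = (
    "Answer the question using only the documents below.\n\n"
    + "\n\n".join(known_helpful_docs)
    + f"\n\nQuestion: {question}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # use the best model you can get your hands on
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

If a handful of calls like this, costing cents, already produce clearly useful answers, that is the signal to keep going; if not, very little has been lost.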

Iterative Development and Refinement

After initial experimentation, assess whether there is potential for further development [00:00:41]. This progression may lead to more advanced techniques:

  • Benchmarking

    • “Build some benchmarks” and test the AI against them [00:00:08].
    • Be prepared to “realize your benchmarks suck and build better ones” [00:00:09] (a toy benchmark harness is sketched after this list).
  • Retrieval-Augmented Generation (RAG)

    • If initial tests show promise, it might be time for “hardcore RAG” [00:00:45].
    • This is crucial because relevant data must be brought to bear [00:00:47]: the model “is not going to just have telepathy and know about my internal enterprise data” [00:00:48] (a retrieval sketch follows this list).
  • Fine-tuning

    • If the AI is already providing value, fine-tuning can be considered [00:00:53].
    • Fine-tuning allows “baking a lot of this into the model” [00:00:55].
    • While it incurs “a little more upfront cost” [00:00:57], it generally leads to “better quality” [00:00:58] (see the sketch after this list).
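
To make the benchmarking step concrete, here is a minimal sketch of a toy evaluation harness. It assumes a generate(prompt) function wrapping whichever model is being tested; the test cases and the containment-based grader are illustrative assumptions, not anything prescribed in the talk.

```python
# A tiny benchmark: a handful of prompt/expected pairs and a naive grader.
benchmark = [
    {"prompt": "Which plan includes CSV export?", "expected": "Pro"},
    {"prompt": "What is the refund window in days?", "expected": "30"},
]

def grade(answer: str, expected: str) -> bool:
    # Naive grader: check that the expected string appears in the answer.
    # This is exactly the kind of benchmark you later "realize sucks" and
    # replace with a better one (e.g. LLM-as-judge or human review).
    return expected.lower() in answer.lower()

def run_benchmark(generate) -> float:
    # `generate` is any callable that maps a prompt string to a model answer.
    passed = sum(grade(generate(case["prompt"]), case["expected"]) for case in benchmark)
    return passed / len(benchmark)

# Example usage:
#   score = run_benchmark(my_model_fn)
#   print(f"pass rate: {score:.0%}")
```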
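
For the RAG step, the sketch below shows the basic retrieve-then-prompt loop: embed a small corpus, pick the documents most similar to the question, and paste them into the context. It assumes the OpenAI Python SDK for embeddings and chat plus numpy for similarity; the corpus, model names, and top-k value are placeholders, and a real deployment would use a proper vector store rather than in-memory cosine similarity.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# Stand-in for internal enterprise data the base model has never seen.
corpus = [
    "Internal doc A: ...",
    "Internal doc B: ...",
    "Internal doc C: ...",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(corpus)

def answer(question: str, k: int = 2) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every document.
    sims = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    top_docs = [corpus[i] for i in np.argsort(sims)[::-1][:k]]

    prompt = (
        "Answer using only the context below.\n\n"
        + "\n\n".join(top_docs)
        + f"\n\nQuestion: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```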
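
Finally, a sketch of the fine-tuning step, assuming the OpenAI fine-tuning API: format examples as chat-style JSONL, upload the file, and start a job. The example data, file name, and base-model name are assumptions; the “upfront cost” quoted above largely corresponds to assembling labeled examples and paying for the training run.

```python
import json
from openai import OpenAI

client = OpenAI()

# Training data in the chat-format JSONL the fine-tuning API expects,
# one example per line; in practice these would be drawn from prompts the
# existing system already answers well.
examples = [
    {"messages": [
        {"role": "user", "content": "Which plan includes CSV export?"},
        {"role": "assistant", "content": "CSV export is included in the Pro plan."},
    ]},
    # ... many more examples ...
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the training file, then kick off the fine-tuning job.
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # an assumed fine-tunable base model; swap in your own
)
print(job.id)
```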