From: redpointai
When approaching AI projects, the recommendation is to start small and work your way up incrementally [00:00:01]. Each iteration should be justified by a rigorous return on investment (ROI), ensuring progress is made on things that matter [00:00:02].
The Scientific Approach to AI
It is often difficult to predict whether AI will be effective for a specific use case [00:00:19]. Therefore, the approach should be that of a scientist:
“You’re a scientist. This is data science in the literal sense. Go run an experiment and try it.” [00:00:25]
The journey can begin by spending a minimal amount, such as 20 cents, on services like OpenAI or Llama on Databricks, to “litmus test” whether AI is suitable for a given task [00:00:12].
To give the experiment the best chance of success:
- Try it on the best possible model available [00:00:31].
- Prompt the model directly [00:00:33].
- Alternatively, manually pull a few documents known to be helpful into the context, rather than immediately implementing Retrieval-Augmented Generation (RAG) [00:00:34].
- Observe the results to determine if there is inherent value [00:00:39].
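The litmus test described above can be sketched in a few lines. This is a sketch, not a prescribed implementation: the model name, the helper functions, and the example documents are illustrative assumptions, and the only real dependency is the OpenAI Python SDK.

```python
# Minimal "litmus test": prompt the best available model directly,
# pasting a few known-helpful documents into the context by hand
# instead of building RAG first. Model name and docs are placeholders.

def build_messages(question: str, docs: list[str]) -> list[dict]:
    """Stuff the hand-picked documents straight into the prompt."""
    context = "\n\n".join(f"Document {i + 1}:\n{d}" for i, d in enumerate(docs))
    return [
        {"role": "system", "content": "Answer using only the documents provided."},
        {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
    ]

def litmus_test(question: str, docs: list[str], model: str = "gpt-4o") -> str:
    """One-off call against a strong model to see if there is inherent value."""
    from openai import OpenAI  # requires the openai package and OPENAI_API_KEY

    client = OpenAI()
    resp = client.chat.completions.create(
        model=model, messages=build_messages(question, docs)
    )
    return resp.choices[0].message.content
```

A run like `litmus_test("What is our refund policy?", [doc1, doc2])` costs cents, and eyeballing a handful of outputs is enough to decide whether to invest further.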
Iterative Benchmarking
As part of testing and evaluating AI capabilities, it is crucial to build some benchmarks [00:00:08]. Initial benchmarks may not be perfect, and the process involves:
- Testing on existing benchmarks [00:00:08].
- Recognizing that those benchmarks may be inadequate [00:00:09].
- Building better ones over time [00:00:10].
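One way to read the loop above: keep a small, growing set of labelled cases and score each model or prompt variant against them, replacing cases as they prove inadequate. A minimal sketch, where the task, the cases, and the exact-match metric are all illustrative assumptions:

```python
# Tiny benchmark harness: run a model function over labelled cases and
# report accuracy. The benchmark starts small and imperfect; harder cases
# get added as failures surface. Exact-match scoring is a placeholder.
from typing import Callable

def run_benchmark(model: Callable[[str], str], cases: list[tuple[str, str]]) -> float:
    """Fraction of cases where the model's answer matches exactly."""
    hits = sum(1 for prompt, expected in cases if model(prompt).strip() == expected)
    return hits / len(cases)

# Illustrative stand-in for a real model call.
def toy_model(prompt: str) -> str:
    return {"2+2?": "4", "Capital of France?": "Paris"}.get(prompt, "unknown")

cases = [
    ("2+2?", "4"),
    ("Capital of France?", "Paris"),
    ("Largest planet?", "Jupiter"),
]
score = run_benchmark(toy_model, cases)  # 2 of 3 correct
```

Swapping `toy_model` for a real API call turns this into a cheap regression check that can run on every prompt or model change.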
Scaling Up
Once initial value is identified, the next steps in scaling up an AI solution can include:
- Implementing hardcore RAG: This becomes necessary when the model needs access to specific enterprise data, which it will not inherently possess [00:00:45].
- Fine-Tuning: If significant value is being derived, fine-tuning can bake a lot of the accumulated knowledge into the model. While it may incur more upfront cost, it typically leads to better quality outputs [00:00:53].
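When manually pasting documents stops scaling, the retrieval half of RAG looks roughly like the sketch below. It is dependency-free and uses word overlap as a stand-in for embedding similarity; production systems use embeddings and a vector store, and the corpus here is purely illustrative.

```python
# Minimal retrieval step for RAG: score each enterprise document by word
# overlap with the query and keep the top-k for the prompt. Word overlap
# stands in for embedding similarity in a real system.
import re

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    q = tokens(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokens(doc)), reverse=True)
    return ranked[:k]

def build_rag_prompt(query: str, corpus: list[str]) -> str:
    context = "\n\n".join(retrieve(query, corpus))
    return f"Use the context below to answer.\n\n{context}\n\nQuestion: {query}"

corpus = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The cafeteria serves lunch from noon to 2pm.",
    "Refund requests must include the original receipt.",
]
prompt = build_rag_prompt("What is the refund policy?", corpus)
```

Fine-tuning is the later step: rather than retrieving knowledge at query time, it bakes the accumulated examples into the weights, trading upfront training cost for output quality.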