Another recording from a live event hosted by DeepLearning.ai, on a subject near and dear to my heart, since you all know I'm running Guanaco locally.
Wow, just wow. Think about fine-tuning a model and all the steps it takes (remember the notebooks from the Generative AI class?) to get an LLM to perform to your specific needs. Predibase has turned all of that into a declarative config file through Ludwig, the open-source framework they maintain. AND it bakes in the options to do it efficiently: the presentation described tuning Llama 7B on a T4 card with 16GB of VRAM, and the supported-hardware list even included a consumer-level card, the RTX 4080!
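To make the "fine-tuning as a config file" idea concrete, here's a rough sketch of what that might look like through Ludwig's Python API. The specific keys (model_type, base_model, quantization, adapter, trainer) reflect my reading of Ludwig's LLM fine-tuning docs, and the base model and dataset names are placeholders, so treat this as illustrative rather than what the demo actually ran:

```python
# Illustrative sketch only: config keys follow my understanding of Ludwig's
# LLM fine-tuning setup and may differ from the exact demo configuration.
import yaml
from ludwig.api import LudwigModel

config = yaml.safe_load(
    """
    model_type: llm
    base_model: meta-llama/Llama-2-7b-hf   # assumed base checkpoint, swap in your own

    quantization:
      bits: 4            # load the model in 4-bit so it fits on a 16GB T4 / RTX 4080

    adapter:
      type: lora         # parameter-efficient fine-tuning: train small adapter weights only

    input_features:
      - name: instruction
        type: text

    output_features:
      - name: response
        type: text

    trainer:
      type: finetune
      epochs: 3
      batch_size: 1
      gradient_accumulation_steps: 16
    """
)

model = LudwigModel(config=config)
results = model.train(dataset="my_instruction_data.csv")  # hypothetical dataset file
```

The appeal is that all the usual plumbing (tokenization, adapter injection, training loop) sits behind that one config, which is exactly the "non-ML expert" pitch from the talk.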
They offer a hands-on demo, which I'll definitely check out. While the hype around AI is about what it can do, there's still the question of what it costs to do it. The Ludwig approach not only makes training accessible to non-ML experts, it also builds in techniques that reduce the hardware and resource demands, which in turn cuts the cost of getting a fine-tune done. Bravo, Predibase! 👏👏