Just finished the first of 10 lessons in Google’s Generative AI Learning Path. It’s the best introduction to AI I’ve encountered, explaining the fundamental concepts of AI and ML as well as how they fit together. The video also goes into the types of GenAI models and the context in which each is best used. Towards the end, it funnels into an overview of Google’s AI tools. I haven’t tried Bard yet, but I have a feeling I will be using it as the course unfolds.
Each class consists of a video, reading material, and a quiz. Don't fret: the quiz is taken straight from the video, which you can replay. I will update as I mosey down the path.
*Inhales deeply* At the halfway point and overwhelmed! The first half is an excellent overview of both LLMs and image generators. The videos include descriptions of Google's AI tools, but they don't dilute the presentation. I'd like to try some of the tools, though many look to be enterprise-only.
The trajectory of the classes goes from street level to El Capitan (the rock, not the theatre). The second half opens with a class on Encoder-Decoder Architecture. Minutes into the clip, I was lost in a jungle of TensorFlow code. Scratching my head, I wondered whether I had wandered into the wrong learning path. In the course info, the Audience had changed from "General" to "Data Scientists & ML Engineers", in other words, not me. Sigh.
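For anyone curious what that jungle looks like, here's a rough sketch of a bare-bones encoder-decoder in TensorFlow/Keras. To be clear, this is my own toy example, not code from the course, and the vocabulary size and layer dimensions are placeholders I picked for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size = 10000   # placeholder vocabulary size
embed_dim = 256      # placeholder embedding size
latent_dim = 512     # placeholder hidden-state size

# Encoder: reads the input token sequence and compresses it into a state.
encoder_inputs = layers.Input(shape=(None,), dtype="int32", name="encoder_tokens")
enc_emb = layers.Embedding(vocab_size, embed_dim)(encoder_inputs)
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(enc_emb)

# Decoder: generates the output sequence, conditioned on the encoder's state.
decoder_inputs = layers.Input(shape=(None,), dtype="int32", name="decoder_tokens")
dec_emb = layers.Embedding(vocab_size, embed_dim)(decoder_inputs)
dec_out = layers.LSTM(latent_dim, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c]
)
outputs = layers.Dense(vocab_size, activation="softmax")(dec_out)

model = Model([encoder_inputs, decoder_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

Even a toy version like this assumes you're comfortable with embeddings, LSTMs, and the Keras functional API, which is exactly why the "General" audience label no longer applies.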
I’m disappointed, but undeterred. The number of classes covering AI and prompting has absolutely exploded. I just received an email about DeepLearning.AI’s new course, developed with AWS, on Generative AI with Large Language Models. I’ll check that out and report back.