Teaching a course on LLMs and GenAI
More from Stories by Dmitry Kan on Medium
Special thanks to Doug Turnbull, Daniel Svonava, Atita Arora, Aarne Talman, Saurabh Rai, Andre Zayarni, Leo Boytsov, Pat Lasserre and Bob van Luijt for reading and commenting on the drafts of this post.

Jo Kristian Bergum recently wrote a massively influential X post: “The rise and fall of the vector database infrastructure...
This fall I co-taught a new course on LLMs and Generative AI at the University of Helsinki. It was the first course of its kind, with quite a large group of students.

(AI-generated image)

Understanding LLMs from the ground up is essential, especially as they dominate discussions in tech today. Beyond the allure of impressive demos, diving deeper...
Last week, I had the pleasure of teaching the Week 6 topic: “Use cases and applications of LLMs”. Week 5, on RAG, can be found here. We looked at multimodal LLMs, a very interesting and in many ways still emerging trend in the LLM world, covering the text, image, video and audio modalities (you can ask: “What do you hear in this video?”, for...
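To make the “What do you hear in this video?” style of prompt concrete, here is a minimal sketch of posting such a multimodal question over HTTP. The endpoint URL, model name, and JSON message shape below are illustrative assumptions, not any specific vendor’s API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hedged sketch: asking a multimodal chat model what it hears in a video.
// The endpoint, model name, and content schema are hypothetical placeholders.
public class MultimodalQuery {
  public static void main(String[] args) throws Exception {
    String body = """
        {
          "model": "example-multimodal-model",
          "messages": [{
            "role": "user",
            "content": [
              {"type": "text", "text": "What do you hear in this video?"},
              {"type": "video_url", "video_url": "https://example.com/clip.mp4"}
            ]
          }]
        }""";

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://api.example.com/v1/chat/completions")) // placeholder URL
        .header("Content-Type", "application/json")
        .header("Authorization", "Bearer " + System.getenv("API_KEY"))
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body());
  }
}
```

The same request shape extends to the other modalities: swap the video part for an image or audio reference and the text question stays unchanged.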
“Large Language Models are complex systems. So the output, the final weights of the neural network, is just one little part of the entire picture.” This quote is from Alessandro, from the episode we recorded at Berlin Buzzwords ’24. I also tweeted (X’d?) about how alarming it is to see the downward trend in open-sourcing the various components of these...
Another re-blog, this time about Lucene’s TokenFilters (originally published on 9 June 2014). For those into neural search from scratch, I also wrote this piece, which deals with embeddings at the Lucene level. At the recent Berlin Buzzwords conference talk on Apache Lucene 4, Robert Muir mentioned Lucene’s internal testing library. This library is...
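For readers who have not written a TokenFilter before, here is a minimal sketch of a custom filter that upper-cases each token (Lucene ships its own UpperCaseFilter; this hypothetical ShoutFilter exists only to illustrate the incrementToken contract that Lucene’s testing library exercises so heavily):

```java
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import java.io.IOException;

// Minimal custom TokenFilter: upper-cases every token produced by the
// upstream tokenizer. TokenFilters chain together to form an analyzer's
// token stream, each one consuming and rewriting its input's attributes.
public final class ShoutFilter extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public ShoutFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false; // upstream is exhausted, so are we
    }
    // Mutate the shared term buffer in place, as Lucene filters conventionally do.
    char[] buffer = termAtt.buffer();
    int length = termAtt.length();
    for (int i = 0; i < length; i++) {
      buffer[i] = Character.toUpperCase(buffer[i]);
    }
    return true;
  }
}
```

The whole analysis chain is pull-based: each call to incrementToken() asks the upstream stream for its next token, rewrites the shared attributes, and reports whether a token is available, which is exactly the behavior Lucene’s test framework probes with randomized inputs.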