My finetuned models beat OpenAI’s GPT-4