#3 MLOps & AI/ML

Episode 3: From Instruction to Interaction – The Evolution of Chat Models

Explore the transition from instruction-based LLMs to today's advanced chat models. We highlight the importance of Reinforcement Learning from Human Feedback (RLHF), its benefits, and the "alignment tax." Learn the differences between "instruct" and "chat" models, how APIs have changed, and how prompt engineering can be likened to playwriting.
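One concrete way the API shift shows up is in the shape of the request. The sketch below is illustrative only (the episode does not prescribe a provider): an instruct-style call sends one flat prompt string, while a chat-style call sends a list of role-tagged messages, which is where the "playwriting" analogy comes from. Field names such as "prompt", "messages", "role", and "content" follow the widely used chat-completions convention and the model names are hypothetical.

```python
# Illustrative sketch (assumptions, not from the episode): how request payloads
# typically differ between older "instruct" completion APIs and newer chat APIs.

# Instruct-style: one flat prompt string, raw text completion comes back.
instruct_request = {
    "model": "example-instruct-model",   # hypothetical model name
    "prompt": "Summarize the plot of Hamlet in two sentences.",
    "max_tokens": 100,
}

# Chat-style: a list of role-tagged messages -- closer to writing a script
# for the model to continue (the "playwriting" analogy).
chat_request = {
    "model": "example-chat-model",       # hypothetical model name
    "messages": [
        {"role": "system", "content": "You are a concise literature tutor."},
        {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."},
    ],
    "max_tokens": 100,
}

if __name__ == "__main__":
    print(instruct_request)
    print(chat_request)
```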

29:06 · Prompt Engineering for LLMs (book) · 79.9 MB · 9 plays


Transcript

Welcome to an overview of 'Prompt Engineering for LLMs,' your guide to building large language model–based applications. This book delves into prompt engineering, defined simply as the practice of crafting the input, or prompt, for an LLM so that its completion contains the information needed to address a problem. You'll learn that LLMs are fundamentally document completion engines that mimic the documents they were trained on, generating text one token at a time.

The book moves beyond crafting single prompts to cover building the entire LLM application, which acts as a transformation layer between real-world needs and LLM capabilities. We explore how LLMs process information, core techniques for prompt content (such as static and dynamic content, few-shot prompting, and RAG), and how to assemble the prompt effectively. You'll also learn to 'tame the model' by managing completions, using logprobs, and choosing appropriate models.

Finally, the book explores building complex LLM applications such as conversational agents and designing robust LLM workflows, including a discussion of evaluation techniques. Join us to become a skilled prompt engineer ready to harness the power of LLMs.
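To make the prompt-content ideas mentioned above concrete, here is a minimal sketch (my own illustration, not code from the book) of assembling a prompt from static instructions, few-shot examples, and dynamic, retrieved context, then treating the LLM as a document-completion engine that continues the assembled text. The task, function names, and the stubbed `retrieve_context` RAG step are all hypothetical.

```python
# Minimal sketch (assumptions, not the book's code): building a prompt from
# static, few-shot, and dynamic content for a document-completion LLM.

FEW_SHOT_EXAMPLES = [
    ("Reset my password", "account_access"),
    ("Where is my refund?", "billing"),
]

def retrieve_context(query: str) -> str:
    # Placeholder for a RAG step; a real system would query a document store.
    return "Refunds are issued within 5 business days of approval."

def build_prompt(user_query: str) -> str:
    # Static content: fixed instructions that frame the "document" to complete.
    parts = ["Classify the support ticket and draft a one-sentence reply.", ""]
    # Few-shot content: worked examples that show the pattern to imitate.
    for ticket, label in FEW_SHOT_EXAMPLES:
        parts.append(f"Ticket: {ticket}\nCategory: {label}\n")
    # Dynamic content: retrieved context plus the current request.
    parts.append(f"Relevant policy: {retrieve_context(user_query)}")
    parts.append(f"Ticket: {user_query}\nCategory:")
    return "\n".join(parts)

if __name__ == "__main__":
    # The assembled string would be sent to an LLM, which completes it
    # one token at a time, continuing the document we started.
    print(build_prompt("I was charged twice this month"))
```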

Sources

Prompt Engineering for LLMs by John Berryman and Albert Ziegler