Mastering LLMs For Developers & Data Scientists

·

6 Weeks

·

Cohort-based Course

An online course for everything LLMs.

Course overview

Build skills to be effective with LLMs

This started as an LLM fine-tuning course. It organically grew into a learning event with world-class speakers on a broad range of LLM topics. The original fine-tuning course is still here as a series of workshops. But there are now many self-contained talks and office hours from experts on many Generative AI topics.


All materials + recordings will be available to participants who enroll. There are 11 talks and 4 workshops (and growing) in addition to office hours.


Conference Talks

------------------------------

Jeremy Howard: Co-Founder Answer.AI & Fast.AI

- Build Applications For LLMs in Python

Sophia Yang: Head of Developer Relations, Mistral AI

- Best Practices For Fine Tuning Mistral

Simon Willison: Creator of Datasette, co-creator of Django, PSF Board Member

- Language models on the command-line

JJ Allaire: CEO, Posit (formerly RStudio) & Researcher for the UK AI Safety Institute

- Inspect, An OSS framework for LLM evals

Wing Lian: Creator of Axolotl library for LLM fine-tuning

- Fine-Tuning w/Axolotl

Mark Saroufim and Jane Xu: PyTorch developers @ Meta

- Slaying OOMs with PyTorch FSDP and torchao

Jason Liu: Creator of Instructor

- Systematically improving RAG applications 

Paige Bailey: DevRel Lead, GenAI, Google

- When to Fine-Tune?

Emmanuel Ameisen: Research Engineer, Anthropic

- Why Fine-Tuning is Dead

Hailey Schoelkopf: Research Scientist at EleutherAI, maintainer of the LM Evaluation Harness

- A Deep Dive on LLM Evaluation

Johno Whitaker: R&D at AnswerAI

- Fine-Tuning Napkin Math

John Berryman: Author of O'Reilly Book Prompt Engineering for LLMs

- Prompt Eng Best Practices

Ben Clavié: R&D at AnswerAI

- Beyond the Basics of RAG

Abhishek Thakur: Leads AutoTrain at Hugging Face

- Train (almost) any LLM using 🤗 AutoTrain

Kyle Corbitt: Building OpenPipe

- From prompt to model: fine-tuning when you've already deployed LLMs in prod

Ankur Goyal: CEO and Founder at Braintrust

- LLM Eval For Text2SQL

Freddy Boulton: Software Engineer at 🤗

- Let's Go, Gradio!

Jo Bergum: Distinguished Engineer at Vespa

- Back to basics for RAG



Fine-Tuning Course

---------------------------

Run an end-to-end LLM fine-tuning project with modern tools and best practices. Four workshops guide you through productionizing LLMs, including evals, fine-tuning and serving.


Workshop 1: Determine when (and when not) to fine-tune an LLM

Workshop 2: Train your first fine-tuned LLM with Axolotl

Workshop 3: Set up instrumentation and evaluation to incrementally improve your model

Workshop 4: Deploy Your Model
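To give a concrete flavor of the evaluation step in Workshop 3, here is a minimal sketch of an exact-match eval loop in plain Python. The `predict` function and the test cases are hypothetical stand-ins for a real model call and a real dataset; the structure (run every case, collect failures, report accuracy) is the part that carries over.

```python
# Minimal exact-match eval loop. `predict` is a hypothetical stand-in
# for a real model call; in practice it would hit an LLM API or a
# fine-tuned model you are serving.
def predict(prompt: str) -> str:
    canned = {"2+2=": "4", "Capital of France?": "Paris"}
    return canned.get(prompt, "")

def exact_match_eval(cases):
    """Score (prompt, expected) pairs; return accuracy and the failures."""
    failures = []
    for prompt, expected in cases:
        got = predict(prompt)
        if got.strip() != expected.strip():
            failures.append((prompt, expected, got))
    accuracy = 1 - len(failures) / len(cases)
    return accuracy, failures

cases = [("2+2=", "4"), ("Capital of France?", "Paris"), ("3*3=", "9")]
acc, fails = exact_match_eval(cases)
print(f"accuracy={acc:.2f}, failures={len(fails)}")
```

Keeping the failure list (not just the score) is the point: Workshop 3 is about looking at the individual misses to decide what to fix next.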


This is accompanied by 5+ hours of office hours. Lectures explain the why and demonstrate the how for all the key pieces of LLM fine-tuning. Hands-on experience in the course project will ensure you're ready to apply your new skills in real business scenarios.


The Fine-Tuning course has these guest speakers:


- Shreya Shankar: LLMOps and LLM Evaluations researcher

- Zach Mueller: Lead maintainer of HuggingFace accelerate

- Bryan Bischof: Director of AI Engineering at Hex

- Charles Frye: AI Engineer at Modal Labs

- Eugene Yan: Senior Applied Scientist @ Amazon

- Harrison Chase: CEO of LangChain

- Travis Addair: Co-Founder & CTO of Predibase

- Joe Hoover: Lead ML Engineer at Replicate

FAQ:

-------


Q: It says this course already started. Should I still enroll?

A: Yes. Everything is recorded, so you can watch videos for any events that have happened so far, join for live events moving forward, and even learn from talks long after the conference is over.


Q: Will there be a future cohort?

A: No. We were fortunate to have so many world-class speakers. We don't think this can be replicated, so it is now a one-time-only event with all recordings available.


Q: Are you still giving out free compute credits?

A: No. Students who enrolled after 5/29/2024 are not eligible for compute credits. You will still get access to the lectures and recordings. EXCEPTION: if you enroll in the course by 6/10/2024 and use Modal by 6/11/2024, they will give you $1,000 in compute credits.

Who Is It For?

01

Data scientists looking to repurpose skills from conventional ML into LLMs and generative AI

02

Software engineers with Python experience looking to add the newest and most important tools in tech

03

Programmers who have called LLM APIs and now want to take their skills to the next level by building and deploying fine-tuned LLMs

What you’ll get out of this conference

Connect With A Large Community Of AI Practitioners

Discord with 1000+ members attending the conference.

Learn more about LLMs

Topics such as RAG, evals, inference, and fine-tuning are covered.

Learn about the best tools

We have curated the tools that we like the most. Credits for many of these tools are provided.

Learn about fine-tuning in-depth

This conference began as an LLM fine-tuning course. That course is still here and runs across four workshops.
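Several of these topics connect in practice: a RAG system, for example, retrieves relevant context before calling the model. As a toy illustration of the retrieval step (the documents, query, and bag-of-words scoring are made up for this sketch; real systems use learned embeddings and a vector index):

```python
from collections import Counter
import math

def bow(text: str) -> Counter:
    """Lowercased bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

docs = [
    "Axolotl is a library for fine-tuning language models",
    "Gradio builds quick web demos for ML models",
    "Vespa is a platform for search and retrieval",
]
print(retrieve("fine-tuning a language model", docs))
```

The conference talks on RAG (Jason Liu, Ben Clavié, Jo Bergum) cover what replaces each piece of this sketch at production scale.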

What’s included

Live sessions

Learn directly from Dan Becker & Hamel Husain in a real-time, interactive format.

Lifetime access

Go back to course content and recordings whenever you need to.

Community of peers

Stay accountable and share insights with like-minded professionals.

Certificate of completion

Share your new skills with your employer or on LinkedIn.

Maven Guarantee

This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.

Course syllabus

34 live sessions • 13 lessons

Week 1

May 13—May 19

    May

    14

    Fine-Tuning Workshop 1: When and Why to Fine-Tune an LLM

    Tue 5/14, 5:00 PM—7:00 PM (UTC)

    When and Why to Fine-Tune an LLM

    3 items

Week 2

May 20—May 26

    May

    21

    Fine-Tuning Workshop 2: Fine-Tuning with Axolotl (guest speakers Wing Lian, Zach Mueller)

    Tue 5/21, 5:00 PM—7:00 PM (UTC)

    May

    23

    Conference Talk: From prompt to model: fine tuning when you've already deployed LLMs in prod (with Kyle Corbitt)

    Thu 5/23, 11:00 PM—12:00 AM (UTC)
    Optional

    May

    24

    Office Hours: Axolotl w/Wing Lian

    Fri 5/24, 5:00 PM—6:00 PM (UTC)
    Optional

    May

    24

    Office Hours: FSDP, DeepSpeed and Accelerate w/Zach Mueller

    Fri 5/24, 6:30 PM—7:30 PM (UTC)
    Optional

    Fine-Tuning with Axolotl

    4 items

Week 3

May 27—Jun 2

    May

    27

    Office Hours: Gradio w/ Freddy Boulton

    Mon 5/27, 11:00 PM—12:00 AM (UTC)
    Optional

    May

    28

    Fine-Tuning Workshop 3: Instrumenting & Evaluating LLMs (guest speakers Harrison Chase, Bryan Bischof, Shreya Shankar, Eugene Yan)

    Tue 5/28, 5:00 PM—7:00 PM (UTC)

    May

    29

    Conference Talk: LLM Eval For Text2SQL w/ Ankur Goyal

    Wed 5/29, 4:00 PM—5:00 PM (UTC)
    Optional

    May

    29

    Conference Talk: Prompt Engineering Workshop w/John Berryman

    Wed 5/29, 5:00 PM—6:00 PM (UTC)

    May

    29

    Conference Talk: Inspect, An OSS framework for LLM evals w/ JJ Allaire

    Wed 5/29, 8:00 PM—9:00 PM (UTC)
    Optional

    May

    30

    Office Hours: Modal w/ Charles Frye

    Thu 5/30, 5:30 PM—6:30 PM (UTC)
    Optional

    May

    30

    Office Hours: LangChain/LangSmith

    Thu 5/30, 8:00 PM—8:45 PM (UTC)

    May

    31

    Conference Talk: Napkin Math For Fine Tuning w/Johno Whitaker

    Fri 5/31, 4:00 PM—5:00 PM (UTC)
    Optional

    May

    31

    Conference Talk: Train (almost) any LLM using 🤗 AutoTrain

    Fri 5/31, 5:00 PM—6:00 PM (UTC)
    Optional

    May

    31

    Optional: Johno Whitaker round 2

    Fri 5/31, 6:00 PM—7:00 PM (UTC)
    Optional

    Instrumenting and Evaluating LLMs for Incremental Improvement

    3 items

Week 4

Jun 3—Jun 9

    Jun

    4

    Fine-Tuning Workshop 4: Deploying Fine-Tuned Models (Guest speakers Travis Addair, Charles Frye, Joe Hoover)

    Tue 6/4, 5:00 PM—7:00 PM (UTC)

    Jun

    5

    Conference Talk: Best Practices For Fine Tuning Mistral w/ Sophia Yang

    Wed 6/5, 4:30 PM—5:00 PM (UTC)
    Optional

    Jun

    5

    Conference Talk: Creating, curating, and cleaning data for LLMs w/Daniel van Strien

    Wed 6/5, 5:00 PM—6:00 PM (UTC)
    Optional

    Jun

    5

    Conference Talk: Why Fine-Tuning is Dead w/ Emmanuel Ameisen

    Wed 6/5, 11:00 PM—11:45 PM (UTC)
    Optional

    Jun

    6

    Conference Talk: Systematically improving RAG applications w/Jason Liu

    Thu 6/6, 6:00 PM—6:30 PM (UTC)
    Optional

    Jun

    6

    Conference Talk: Build Applications For LLMs in Python, with Jeremy Howard & Johno Whitaker

    Thu 6/6, 10:00 PM—11:00 PM (UTC)

    Jun

    7

    Optional: Getting the most out of your LLM experiments w/ Thomas Capelle

    Fri 6/7, 5:00 PM—5:45 PM (UTC)
    Optional

    Deploying Your Fine-Tuned Model

    3 items

Week 5

Jun 10—Jun 16

    Jun

    10

    Conference Talk: Slaying OOMs with PyTorch FSDP and torchao (with Mark Saroufim and Jane Xu)

    Mon 6/10, 9:00 PM—10:00 PM (UTC)
    Optional

    Jun

    10

    Conference Talk: When to Fine-Tune? (with Paige Bailey)

    Mon 6/10, 11:00 PM—12:00 AM (UTC)
    Optional

    Jun

    11

    Conference Talk: Beyond the basics of Retrieval for Augmenting Generation (w/ Ben Clavié)

    Tue 6/11, 12:00 AM—12:30 AM (UTC)
    Optional

    Jun

    11

    Conference Talk: Modal: Simple Scalable Serverless Services With Charles Frye

    Tue 6/11, 4:30 PM—5:15 PM (UTC)
    Optional

    Jun

    11

    Optional: Replicate Office Hours

    Tue 6/11, 5:15 PM—5:45 PM (UTC)
    Optional

    Jun

    11

    Conference Talk: A Deep Dive on LLM Evaluation (w/ Hailey Schoelkopf)

    Tue 6/11, 9:00 PM—9:45 PM (UTC)

    Jun

    12

    Conference Talk: Language models on the command-line w/ Simon Willison

    Wed 6/12, 12:00 AM—1:00 AM (UTC)
    Optional

    Jun

    12

    Office Hours: Predibase w/ Travis Addair

    Wed 6/12, 5:00 PM—6:00 PM (UTC)

    Jun

    12

    Conference Talk: Fine-Tuning OpenAI Models - Best Practices w/Steven Heidel

    Wed 6/12, 8:30 PM—9:30 PM (UTC)

    Jun

    12

    Optional: Fine Tuning LLMs for Function Calling

    Wed 6/12, 9:30 PM—10:00 PM (UTC)
    Optional

Week 6

Jun 17—Jun 20

    Jun

    18

    Back to Basics for RAG w/Jo Bergum

    Tue 6/18, 8:00 PM—8:45 PM (UTC)
    Optional

    Jun

    20

    Optional: LiveStream - Lessons From A Year of Building w/LLMs

    Thu 6/20, 11:00 PM—2:00 AM (UTC)
    Optional


Meet your instructors / conference organizers

Dan Becker

Dan Becker

Chief Generative AI Architect @ Straive

Dan has worked in AI since 2011, when he finished 2nd (out of 1350+ teams) in a Kaggle competition with a $500k prize. He contributed code to TensorFlow as a data scientist at Google and he has taught online deep learning courses to over 250k people. Dan has advised AI projects for 6 companies in the Fortune 100.

Hamel Husain

Hamel Husain

Founder @ Parlance Labs

Hamel is an ML engineer who loves building machine learning infrastructure and tools 👷🏼‍♂️. He leads or contributes to many popular open-source machine learning projects. His extensive experience (20+ years) as a machine learning engineer spans various industries, including large tech companies like Airbnb and GitHub.

Hamel is an independent consultant helping companies operationalize LLMs. At GitHub, Hamel led CodeSearchNet, a large language model for semantic search that was a precursor to Copilot, a large language model used by millions of developers.


Join an upcoming cohort

Mastering LLMs For Developers & Data Scientists

Cohort 1

Dates

May 14—June 21, 2024

Payment Deadline

July 10, 2027

Course schedule

4-6 hours per week

  • Tuesdays

    1:00pm - 3:00pm EST

    Interactive weekly workshops where you will learn the tools you will apply in your course project.

  • Weekly projects

    2 hours per week

    You will build and deploy an LLM as part of the course project. The course project is divided into four weekly projects.


    By the end, you will not only know about fine-tuning, but you will have hands-on experience doing it.


Free

·

6 Weeks