Hi everyone - Nate here.

It’s been a while since the last module because I’ve been working on a lot of other projects, which I will share eventually. But I’ll try to keep up with these core learning modules.

This module covers LLM customization, presenting a hierarchy of methods from simplest to most complex, with a particular focus on fine-tuning. It gets at the question: When should you actually fine-tune a model versus just writing better prompts or using RAG? The module walks through all your options, from testing prompts to full fine-tuning. Key insight: fine-tuning changes behavior, not knowledge. It won't make your model "know" your organization's data - that's what RAG is for.
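To make that distinction concrete, here is a minimal sketch (with hypothetical data, assuming an OpenAI-style chat message format): a fine-tuning example teaches the model *how* to respond, while RAG pastes retrieved facts into the prompt at query time so the model can answer from knowledge it was never trained on.

```python
import json

# Fine-tuning changes behavior: each training example demonstrates the
# desired response style and format, not new facts (hypothetical example).
finetune_example = {
    "messages": [
        {"role": "user", "content": "Summarize this clinic report."},
        {"role": "assistant", "content": "Summary (3 bullets): ..."},
    ]
}

# RAG supplies knowledge: retrieved documents are injected into the prompt
# at query time, so the model can cite facts outside its training data.
retrieved_docs = ["Clinic X treated 120 malaria cases in June."]  # hypothetical
rag_prompt = (
    "Answer using only the context below.\n"
    "Context:\n" + "\n".join(retrieved_docs) + "\n"
    "Question: How many malaria cases did Clinic X treat in June?"
)

# Fine-tuning data is typically shipped as one JSON object per line (JSONL).
print(json.dumps(finetune_example))
print(rag_prompt)
```

Note the design difference: the fine-tuning record becomes part of a training set and permanently shifts model behavior, while the RAG prompt is rebuilt fresh for every query.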

Also covered:

  • Platforms for advanced prompt design and testing

  • Structured outputs and tool calling

  • When to use RAG vs. fine-tuning

  • A brief overview of fine-tuning, including no-code platforms for fine-tuning LLMs

  • LLM customization options for global health purposes

Happy learning!

Module 9_Fine-Tuning_17Sept2025_Formatted.pdf
