This LLM Fine-Tuning course equips you with the skills to optimize and deploy domain-specific large language models for advanced Generative AI applications. Begin with foundational concepts: supervised fine-tuning, parameter-efficient fine-tuning (PEFT), and reinforcement learning from human feedback (RLHF). Master data preparation, hyperparameter tuning, and key evaluation strategies. Progress to implementation using LLM frameworks and libraries, and apply best practices for model selection, bias monitoring, and overfitting control. Conclude with hands-on demos: fine-tune Falcon-7B and build an image generation app using LangChain and OpenAI DALL·E.

LLM Fine-Tuning and Customization Training
This course is part of LLM Application Engineering and Development Certification Specialization

Instructor: Priyanka Mehta
What you'll learn
Fine-tune LLMs using supervised learning, PEFT, and RLHF techniques
Prepare and structure datasets for efficient model training
Optimize model accuracy with hyperparameter tuning and bias checks
Build real-world GenAI apps with fine-tuned models like Falcon-7B and DALL·E
Details to know

Add to your LinkedIn profile
July 2025
7 assignments
Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 2 modules in this course
Explore the foundations of LLM fine-tuning in this comprehensive module. Learn core principles of large language model (LLM) fine-tuning, from supervised and parameter-efficient methods (PEFT) to reinforcement learning from human feedback (RLHF). Gain hands-on experience in data preparation and hyperparameter tuning through real-world demos to optimize GenAI performance.
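For a concrete sense of what parameter-efficient fine-tuning looks like, here is a minimal sketch assuming the Hugging Face transformers, peft, and datasets libraries; the Falcon-7B checkpoint, the train.jsonl data file, and every hyperparameter are illustrative placeholders, not the course's own demo code.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

model_name = "tiiuae/falcon-7b"            # illustrative; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Falcon ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach low-rank adapters (LoRA); only the adapter weights are trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         lora_dropout=0.05, task_type="CAUSAL_LM"))
model.print_trainable_parameters()

# Tokenize a prepared dataset (train.jsonl with a "text" field is a placeholder).
dataset = load_dataset("json", data_files="train.jsonl", split="train")
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                      batched=True, remove_columns=dataset.column_names)

# Supervised fine-tuning via Trainer; hyperparameters are examples to tune, not defaults.
args = TrainingArguments(output_dir="falcon7b-lora", per_device_train_batch_size=1,
                         gradient_accumulation_steps=8, learning_rate=2e-4,
                         num_train_epochs=1, logging_steps=10)
Trainer(model=model, args=args, train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False)).train()
```

Because only the LoRA adapter weights are updated, a setup like this can fit on a single GPU where full fine-tuning of the same model would not.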
What's included
13 videos, 1 reading, 3 assignments
Master LLM fine-tuning evaluation and deployment in this hands-on module. Learn to optimize and assess fine-tuned models, explore key libraries and frameworks, and implement best practices for data preparation, model selection, and bias monitoring. Apply concepts in real time through demos, including tuning Falcon-7B and building an AI image generation app with LangChain and DALL·E.
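As one example of the evaluation side of this module, the sketch below estimates perplexity of a fine-tuned causal language model on held-out text; the model directory and the sample sentences are hypothetical placeholders.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "falcon7b-lora"   # hypothetical directory of a fine-tuned (merged) model
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir).eval()

held_out = ["Example validation sentence one.",
            "Example validation sentence two."]

losses = []
with torch.no_grad():
    for text in held_out:
        enc = tokenizer(text, return_tensors="pt")
        # Passing input_ids as labels yields the standard next-token cross-entropy loss.
        losses.append(model(**enc, labels=enc["input_ids"]).loss.item())

print("perplexity:", math.exp(sum(losses) / len(losses)))
```

A fuller evaluation would typically weight the average by token count and use a proper validation split, but the mechanics are the same.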
What's included
10 videos, 4 assignments
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Frequently asked questions
How do I start learning LLM fine-tuning?
Start by understanding the basics of large language models and their architecture. Then explore fine-tuning techniques like supervised learning, PEFT, and RLHF using tools and frameworks such as Hugging Face, LangChain, and PyTorch.
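For example, a sensible first experiment before any fine-tuning is simply loading a pretrained model and generating text; the sketch below assumes the Hugging Face transformers library and uses a small placeholder checkpoint.

```python
from transformers import pipeline

# "distilgpt2" is a small placeholder so the example runs on a laptop;
# swap in a larger checkpoint such as Falcon-7B once you have GPU capacity.
generator = pipeline("text-generation", model="distilgpt2")
result = generator("Fine-tuning a language model means", max_new_tokens=40)
print(result[0]["generated_text"])
```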
How long does it take to fine-tune an LLM?
The time required depends on model size, dataset, and infrastructure. Fine-tuning smaller models can take a few hours, while larger models like Falcon-7B may require several days on high-performance GPUs.
Which course is best for learning LLM fine-tuning?
A hands-on course that covers LLM architecture, fine-tuning methods, and real-world deployment, with practical demos using Hugging Face and LangChain, is ideal for mastering LLMs.