Notebooks
Get started quickly with these ready-to-run Colab notebooks:

- SFT with LoRA: Supervised fine-tuning with parameter-efficient LoRA adapters.
- GRPO with LoRA: Reinforcement learning with Group Relative Policy Optimization.
- CPT Text Completion: Continued pre-training for text completion tasks.
- CPT Translation: Continued pre-training for translation tasks.
Quick Start
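A minimal end-to-end sketch of the typical Unsloth workflow is shown below. The model name, rank, and other hyperparameters are illustrative assumptions, not fixed choices; any supported model works.

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # pre-allocated; set to your longest expected sequence

# Load a 4-bit quantized base model (QLoRA) with Unsloth's fast loader
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed model choice
    max_seq_length=max_seq_length,
    load_in_4bit=True,  # roughly 4x lower memory use
)

# Attach LoRA adapters; only these low-rank matrices are trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                # illustrative rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",  # memory-efficient checkpointing
)

# From here, train as usual (e.g. with trl's SFTTrainer on a text dataset),
# then call FastLanguageModel.for_inference(model) before generation.
```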
Key Features
- `load_in_4bit=True`: Enable QLoRA to reduce memory by ~4x with minimal quality loss
- `use_gradient_checkpointing="unsloth"`: Optimized checkpointing that's 2x faster than the default
- `FastLanguageModel.for_inference()`: Switch to optimized inference mode after training (see the sketch below)
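Continuing from the quick-start snippet above, switching to inference mode and generating looks roughly like this; the prompt and generation settings are placeholders:

```python
from unsloth import FastLanguageModel

# Switch from training mode to Unsloth's optimized inference path
FastLanguageModel.for_inference(model)

inputs = tokenizer("Summarize LoRA in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```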
Tips
- `max_seq_length`: Set to your expected maximum sequence length; Unsloth pre-allocates memory for efficiency
- Target modules: Include the MLP layers (`gate_proj`, `up_proj`, `down_proj`) for better quality on smaller models
- Batch size: Unsloth's optimizations allow larger batch sizes; experiment to maximize GPU utilization (see the sketch below)
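As a sketch of the batch-size tip, the main knobs live in the training arguments passed to your trainer. This uses the standard `transformers.TrainingArguments`; all values here are illustrative starting points, not recommendations:

```python
from transformers import TrainingArguments

# Unsloth's memory savings often allow a larger per-device batch size;
# trade it off against gradient accumulation while watching GPU memory.
args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=4,   # try doubling until you approach the memory limit
    gradient_accumulation_steps=4,   # effective batch size = 4 * 4 = 16
    learning_rate=2e-4,
    max_steps=60,
)
```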