OLMo 3
OLMo 3 is a family of open-source 7B and 32B models trained by the Allen Institute for Artificial Intelligence.
This guide shows how to fine-tune these models with Axolotl, including training on multi-turn conversations with proper loss masking.
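Multi-turn data is typically handled in Axolotl via the `chat_template` dataset type, with non-assistant turns masked out of the loss. A minimal sketch, assuming a placeholder dataset name (the actual dataset in the example config may differ):

```yaml
# Sketch only: the dataset path below is a hypothetical placeholder.
datasets:
  - path: your-org/your-multiturn-dataset
    type: chat_template
# Compute loss only on assistant replies; user/system turns are masked
train_on_inputs: false
```

With `train_on_inputs: false`, the tokens from user and system turns still appear in the context but contribute nothing to the gradient.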
Getting started
Install Axolotl following the installation guide.
Install Cut Cross Entropy to reduce training VRAM usage.
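Once installed, Cut Cross Entropy is enabled through Axolotl's plugin system. A sketch, assuming the integration's standard plugin path:

```yaml
# Enable the Cut Cross Entropy integration (reduces logit-memory at the LM head)
plugins:
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
```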
Run the finetuning example:
axolotl train examples/olmo3/olmo3-7b-qlora.yaml
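The QLoRA example keeps memory low by combining a 4-bit quantized base model with low-rank adapters. The key memory-related settings look roughly like this; the values are illustrative, so check the shipped YAML for the actual ones:

```yaml
# Illustrative QLoRA settings; ranks and alpha are example values,
# not necessarily those in examples/olmo3/olmo3-7b-qlora.yaml.
load_in_4bit: true          # quantize base weights to 4-bit (QLoRA)
adapter: qlora
lora_r: 16                  # adapter rank (example value)
lora_alpha: 32
lora_target_linear: true    # attach adapters to all linear layers
gradient_checkpointing: true
```

Only the small adapter weights receive gradients, which is what brings the footprint down to roughly the figure quoted below.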
This run uses about 11.3 GiB of VRAM. Let us know how it goes. Happy finetuning! 🚀
TIPS
Optimization Guides
See the Optimizations doc for additional speed and memory-saving options.