MiMo
MiMo is a family of models trained from scratch for reasoning tasks, incorporating Multiple-Token Prediction (MTP) as an additional training objective for stronger performance and faster inference. The models are pre-trained on ~25T tokens using a three-stage data mixture strategy with an optimized density of reasoning patterns.
This guide shows how to fine-tune MiMo with Axolotl on multi-turn conversations with proper loss masking (see the config sketch below).
Getting started
Install Axolotl following the installation guide.
Run the finetuning example:
```bash
axolotl train examples/mimo/mimo-7b-qlora.yaml
```
This config uses about 17.2 GiB VRAM. Let us know how it goes. Happy finetuning! 🚀
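For multi-turn data, the key pieces are a chat-formatted dataset and masking of non-assistant turns so loss is only computed on model responses. Below is a minimal, hedged sketch of the relevant options; the model repo id, dataset path, and message field names are placeholders, and the actual `examples/mimo/mimo-7b-qlora.yaml` may use different values.

```yaml
# Sketch only: adapt paths and field names to your setup.
base_model: XiaomiMiMo/MiMo-7B-Base   # assumed HF repo id; check the example config

datasets:
  - path: your-org/your-multiturn-dataset   # placeholder dataset
    type: chat_template
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value

# Mask prompts/user turns from the loss; train only on assistant turns.
train_on_inputs: false
roles_to_train: ["assistant"]

adapter: qlora
load_in_4bit: true
sequence_len: 4096
```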
Tips
Optimization Guides
Please check the Optimizations doc.
Limitations
Cut Cross Entropy (CCE): Currently not supported. We plan to include CCE support for MiMo in the near future.
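If your base config enables CCE globally, disable it for MiMo runs until support lands. A minimal sketch, assuming the standard Axolotl CCE plugin configuration (verify the plugin path against your install):

```yaml
# Remove or comment out the CCE integration for MiMo runs.
# plugins:
#   - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
# cut_cross_entropy: true
```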