Ministral3
Ministral3 is a family of open-weight models from MistralAI, available on HuggingFace. This guide shows how to fine-tune it with Axolotl on multi-turn conversations with proper masking.
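For reference, each training sample is a multi-turn conversation in the OpenAI messages format, i.e. a list of role/content pairs. The record below is an illustrative sketch (shown as YAML for readability, with made-up content); with proper masking, typically only the assistant turns contribute to the loss, while system and user turns are masked out.

# Illustrative multi-turn record in the OpenAI messages format (made-up content).
# Typically only the assistant turns are trained on; system/user turns are masked.
messages:
  - role: system
    content: You are a helpful assistant.
  - role: user
    content: What does QLoRA do?
  - role: assistant
    content: It fine-tunes low-rank adapters on top of a 4-bit quantized base model.
  - role: user
    content: Does that save memory?
  - role: assistant
    content: Yes, keeping the frozen base weights in 4-bit precision cuts VRAM usage.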
Please see the Thinking and Vision sections below for their respective fine-tuning guides.
Thanks to the team at MistralAI for giving us early access to prepare for these releases.
Note: This is still experimental, as it is based on the transformers v5 release candidate.
Getting started
Install Axolotl from source following the installation guide.
Install Cut Cross Entropy to reduce training VRAM usage.
Swap to the Axolotl transformers v5 branch and copy the example config:

cp examples/ministral3/ministral3-3b-qlora.yaml ministral3-3b-qlora.yaml
git fetch
git checkout transformers-v5

# Install packages for transformers v5
pip install -e .

Run the fine-tuning:
axolotl train ministral3-3b-qlora.yaml
Let us know how it goes. Happy finetuning!
Tips
- We recommend adding the same or a similar system prompt to the one the model is tuned for. You can find this within the repo's files, titled SYSTEM_PROMPT.txt.
- You can run a full finetune by removing adapter: qlora and load_in_4bit: true from the config (see the config sketch after this list).
- Read more on how to load your own dataset in the docs.
- The text dataset format follows the OpenAI Messages format, as seen here.
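As a rough sketch, the pieces of the config these tips refer to look like this (the dataset path is a placeholder; the remaining keys are standard Axolotl options):

# Point the config at your own dataset in the OpenAI messages format.
datasets:
  - path: ./data/conversations.jsonl   # placeholder path to your JSONL file
    type: chat_template                # required with the mistral-common tokenizer

# Remove these two lines to switch from QLoRA to a full finetune:
adapter: qlora
load_in_4bit: true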
Thinking
The Ministral3 2512 model supports thinking capabilities, enabling Chain-of-Thought reasoning with explicit thinking steps.
Vision
The Ministral3 2512 model also supports vision capabilities.
Optimization Guides
Please check the Optimizations doc.
Limitations
At the moment, we only support the mistral-common tokenizer for supervised fine-tuning, and only with type: chat_template.
In addition, we do not support overriding tokens yet.
Future Work
- Add parity to Preference Tuning, RL, etc.
- Add parity to other tokenizer configs like overriding tokens.