Ministral3

Ministral3 is a family of open-weight models from MistralAI, available on HuggingFace. This guide shows how to fine-tune it with Axolotl on multi-turn conversations with proper masking.

Please see the Thinking and Vision sections below for their respective fine-tuning guides.

Thanks to the team at MistralAI for giving us early access to prepare for these releases.

Note: This is still experimental, as it is based on a transformers v5 release candidate.

Getting started

  1. Install Axolotl from source following the installation guide.

  2. Install Cut Cross Entropy to reduce training VRAM usage.

  3. Copy the example config, then switch to the Axolotl transformers v5 branch:

    cp examples/ministral3/ministral3-3b-qlora.yaml ministral3-3b-qlora.yaml
    
    git fetch
    git checkout transformers-v5
    
    # Install packages for transformers v5
    pip install -e .
  4. Run the fine-tuning:

    axolotl train ministral3-3b-qlora.yaml
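
For orientation, the shipped example pairs a QLoRA adapter with 4-bit loading of the base model. The sketch below is illustrative only: the base_model path and the LoRA hyperparameter values are placeholders, so refer to examples/ministral3/ministral3-3b-qlora.yaml for the actual settings.

    # Illustrative sketch of the QLoRA-related fields (values are placeholders)
    base_model: mistralai/...   # placeholder for the Ministral3 repo on HuggingFace
    load_in_4bit: true          # load the base weights in 4-bit for QLoRA
    adapter: qlora              # train low-rank adapters on top of the quantized base
    lora_r: 32
    lora_alpha: 64
    lora_dropout: 0.05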

Let us know how it goes. Happy finetuning! πŸš€

Tips

  • We recommend adding the same (or a similar) system prompt that the model was tuned with. You can find it in the model repo in a file named SYSTEM_PROMPT.txt.
  • You can run a full fine-tune by removing adapter: qlora and load_in_4bit: true from the config.
  • Read more on how to load your own dataset in the dataset docs.
  • The text dataset format follows the OpenAI messages format, as seen here; a sample conversation is sketched after this list.
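
For reference, a single multi-turn sample in the OpenAI messages format looks roughly like the sketch below (shown as YAML for readability; datasets are typically stored as JSONL). The system text is a hypothetical stand-in for the contents of SYSTEM_PROMPT.txt, and the conversation itself is made up.

    # One multi-turn conversation sample (structure only; content is illustrative)
    messages:
      - role: system
        content: "<contents of SYSTEM_PROMPT.txt>"
      - role: user
        content: "What is the capital of France?"
      - role: assistant
        content: "The capital of France is Paris."
      - role: user
        content: "And of Italy?"
      - role: assistant
        content: "Rome."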

Thinking

The Ministral3 2512 model supports thinking, enabling chain-of-thought reasoning with explicit thinking steps.

πŸ“š See the Thinking fine-tuning guide β†’

Vision

The Ministral3 2512 model also supports vision capabilities.

πŸ“š See the Vision fine-tuning guide β†’

Optimization Guides

Please check the Optimizations doc.

Limitations

At the moment, we only support the mistral-common tokenizer for Supervised Fine-tuning, and only with type: chat_template datasets.

In addition, we do not support overriding tokens yet.
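
Within those constraints, a dataset entry might look like the following sketch. The path is a placeholder, and options such as field_messages and roles_to_train are standard chat_template settings whose defaults may vary across Axolotl versions, so double-check the dataset format docs.

    datasets:
      - path: ./data/conversations.jsonl   # placeholder path to a local JSONL dataset
        type: chat_template                 # currently the only supported SFT type
        field_messages: messages            # key that holds the OpenAI-style messages list
        roles_to_train: ["assistant"]       # only assistant turns contribute to the loss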

Future Work

  • Add parity for Preference Tuning, RL, etc.
  • Add parity for other tokenizer configs, such as overriding tokens.