Magistral

Magistral Small is a 24B-parameter open-source model from MistralAI, available on HuggingFace in the 2506, 2507 (see Thinking), and 2509 (see Vision) releases. This guide shows how to fine-tune it with Axolotl on multi-turn conversations with proper masking.

MistralAI has also released a proprietary medium-sized version called Magistral Medium.

Thanks to the team at MistralAI for giving us early access to prepare for these releases.

Getting started

  1. Install Axolotl following the installation guide.

    Here is an example of how to install from pip:

# Ensure you have PyTorch installed (PyTorch 2.7.0 minimum)
pip3 install packaging==23.2 setuptools==75.8.0 wheel ninja
pip3 install --no-build-isolation 'axolotl[flash-attn]>=0.12.0'
  2. Install Cut Cross Entropy to reduce training VRAM usage:
python scripts/cutcrossentropy_install.py | sh
  3. Run the finetuning example:
axolotl train examples/magistral/magistral-small-qlora.yaml

This config uses about 24GB VRAM.
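
For orientation, a QLoRA config for Magistral Small might look roughly like the sketch below. The shipped examples/magistral/magistral-small-qlora.yaml is the authoritative version; the model ID, dataset path, and hyperparameters here are illustrative assumptions, not values copied from that file.

# Illustrative sketch only -- defer to examples/magistral/magistral-small-qlora.yaml.
base_model: mistralai/Magistral-Small-2509    # assumed model ID; pick the release you want

adapter: qlora
load_in_4bit: true          # quantize base weights; remove this and `adapter` for a full finetune
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true    # attach LoRA adapters to all linear layers

datasets:
  - path: ./data/conversations.jsonl   # hypothetical local dataset
    type: chat_template                # the only supported type with the mistral-common tokenizer

sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 2e-4
optimizer: adamw_torch
lr_scheduler: cosine
bf16: true
flash_attention: true
output_dir: ./outputs/magistral-small-qlora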

Let us know how it goes. Happy finetuning! πŸš€

Thinking

MistralAI has released their 2507 model with thinking capabilities, enabling Chain-of-Thought reasoning with explicit thinking steps.

πŸ“š See the Thinking fine-tuning guide β†’

Vision

MistralAI has released their 2509 model with vision capabilities.

πŸ“š See the Vision fine-tuning guide β†’

Tips

  • We recommend using the same (or a similar) system prompt that the model was tuned with. You can find it in the model repo in the file SYSTEM_PROMPT.txt.
  • For inference, the MistralAI team recommends top_p: 0.95 and temperature: 0.7 with max_tokens: 40960.
  • You can run a full finetuning by removing adapter: qlora and load_in_4bit: true from the config.
  • Read more on how to load your own dataset in the docs.
  • The text dataset format follows the OpenAI Messages format (see the sketch after this list).
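
For illustration, a single record of such a dataset in the OpenAI Messages format might look like this (in the JSONL file each record sits on one line; the prompt and contents below are made up for the example):

{
  "messages": [
    {"role": "system", "content": "<contents of SYSTEM_PROMPT.txt>"},
    {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."},
    {"role": "assistant", "content": "Prince Hamlet seeks to avenge his father's murder by his uncle Claudius. His pursuit of revenge ultimately brings down nearly the entire court, himself included."}
  ]
}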

Optimization Guides

Limitations

At the moment, we only support the mistral-common tokenizer for Supervised Fine-tuning, and only with type: chat_template.

In addition, we do not support overriding tokens yet.

Future Work

  • Add parity for Preference Tuning, RL, etc.
  • Add parity for other tokenizer configs, such as overriding tokens.