Granite 4

Granite 4.0 is a family of open-source models trained by IBM Research.

This guide shows how to fine-tune these models with Axolotl on multi-turn conversations with proper loss masking.

Getting started

  1. Install Axolotl following the installation guide. You need to install from main, as Granite 4 support is only available on nightly, or use our latest Docker images (see the Docker sketch after these steps).

    Here is an example of how to install from main with pip:

# Ensure you have PyTorch installed (PyTorch 2.7.1 minimum)
git clone https://github.com/axolotl-ai-cloud/axolotl.git
cd axolotl

pip3 install packaging==23.2 setuptools==75.8.0 wheel ninja
pip3 install --no-build-isolation -e '.[flash-attn]'

# Install CCE https://docs.axolotl.ai/docs/custom_integrations.html#cut-cross-entropy
python scripts/cutcrossentropy_install.py | sh
  2. Run the fine-tuning example:
axolotl train examples/granite4/granite-4.0-tiny-fft.yaml
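
If you go the Docker route instead of installing from main, a minimal sketch is below. The image tag axolotlai/axolotl:main-latest is an assumption for this example; check the Axolotl Docker documentation for the current image names and tags.

# Minimal sketch: run the same example inside the (assumed) axolotlai/axolotl:main-latest image.
# Mount a volume if you want to keep checkpoints and outputs after the container exits.
docker run --gpus all --rm -it \
  axolotlai/axolotl:main-latest \
  axolotl train examples/granite4/granite-4.0-tiny-fft.yaml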

This config uses about 40.8 GiB of VRAM.

Let us know how it goes. Happy finetuning! 🚀

Tips

  • Read the docs for more on how to load your own dataset.
  • The dataset format follows the OpenAI Messages format, as seen here; a minimal config sketch is shown below.
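
As a rough illustration of both points, here is a minimal sketch of a datasets entry for multi-turn chat data in the OpenAI Messages format. The file name chat_data.jsonl is hypothetical (one {"messages": [...]} object per line), and the keys follow Axolotl's chat_template dataset type; confirm them against the dataset docs linked above.

# Minimal sketch, assuming a local JSONL file named chat_data.jsonl where each line is
# {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
datasets:
  - path: chat_data.jsonl
    type: chat_template
    field_messages: messages
    roles_to_train: ["assistant"]  # compute loss only on assistant turns (masking)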

Limitations

Adapter fine-tuning does not work at the moment. It errors with:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (4096x3072 and 1x1179648)

In addition, even once adapter training is made to work, lora_target_linear: true will not work due to:

ValueError: Target module GraniteMoeHybridParallelExperts() is not supported.

Optimization Guides