InternVL 3.5

InternVL 3.5 is a family of powerful vision-language models from OpenGVLab that supports dynamic resolution and multi-image understanding. It pairs a ViT-based vision encoder with a strong language-model backbone for tasks such as visual question answering, OCR, and scene-text understanding.

This guide shows how to fine-tune it with Axolotl.

Getting started

  1. Install Axolotl following the installation guide.

  2. Install timm for vision model support:

    pip install timm==1.0.19
  3. Install Cut Cross Entropy to reduce training VRAM usage.

  4. Run the finetuning example:

    axolotl train examples/internvl3_5/internvl3_5-8b-qlora.yml

This config uses about 8.21 GiB VRAM. Let us know how it goes. Happy finetuning! 🚀
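The QLoRA settings that config relies on can be sketched roughly as follows. This is a minimal, illustrative fragment: the model ID and the LoRA rank/alpha values are assumptions, and the shipped examples/internvl3_5/internvl3_5-8b-qlora.yml is the authoritative version.

```yaml
# Illustrative sketch only -- see examples/internvl3_5/internvl3_5-8b-qlora.yml
# for the real settings.
base_model: OpenGVLab/InternVL3_5-8B   # assumed model ID

load_in_4bit: true       # quantize base weights to 4-bit (the "Q" in QLoRA)
adapter: qlora           # train low-rank adapters instead of full weights
lora_r: 16               # assumed adapter rank
lora_alpha: 32           # assumed scaling factor
lora_target_linear: true # attach adapters to all linear layers
```

Keeping the base model in 4-bit and training only the small adapter matrices is what brings VRAM down to the ~8 GiB range quoted above.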

Tips

  • You can run a full finetune by removing adapter: qlora and load_in_4bit: true from the config.
  • Read more on loading your own dataset in the docs.
  • The dataset format follows the multi-modal format as seen here.
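As a rough illustration of that multi-modal format, the snippet below builds one chat-style training record and serializes it as a JSONL line. The field names ("messages", "role", "content", "type") follow common multi-modal chat conventions and are assumptions here; check the linked dataset docs for the exact schema Axolotl expects.

```python
import json

# Sketch of a single multi-modal training record (field names assumed,
# not taken from the Axolotl schema -- consult the dataset docs).
record = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image", "path": "images/receipt_001.png"},
                {"type": "text", "text": "What is the total on this receipt?"},
            ],
        },
        {
            "role": "assistant",
            "content": [{"type": "text", "text": "The total is $42.17."}],
        },
    ]
}

# Datasets like this are typically stored one JSON object per line (JSONL).
line = json.dumps(record)
print(line)
```

Each line of the dataset file would then hold one such conversation, with image references resolved at training time.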

Optimization Guides

Please check the Optimizations doc.