Llama 4
Flash Attention vs Flex Attention
While Flash Attention support is "enabled" for Llama 4, the upstream implementation is not correct, so using Flex Attention is recommended instead.
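If you are writing your own config, the snippet below is a minimal sketch of the relevant attention settings, assuming axolotl's flash_attention and flex_attention config flags; refer to the example configs below for complete, tested settings.

```yaml
# Minimal sketch of the attention settings for a Llama 4 config
# (assumes axolotl's flash_attention / flex_attention flags).
flash_attention: false  # upstream Flash Attention path is not correct for Llama 4
flex_attention: true    # use Flex Attention instead
```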
Available Examples
Llama 4 Scout 17Bx16Experts (109B)
- Flex Attention - Text Single GPU (H100) QLoRA
- Text Multi GPU QLoRA w/ FSDP2
Our single-H100 implementation for Llama 4 Scout uses only 64.5 GB VRAM for post-training with 4k context length at 519 tokens/second (WandB logs here). Multi-GPU (4xH100) training for Llama 4 Scout uses 62.8 GB VRAM/GPU with 4k context length at 280 tokens/second per GPU (WandB logs here).
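To launch training from one of these examples, point the axolotl CLI at the example config. The path below is illustrative only; substitute the actual config file you want to run:

```bash
# Illustrative invocation; replace the path with the real example config.
axolotl train examples/llama-4/scout-qlora-flexattn.yaml
```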
Llama 4 Maverick 17Bx128Experts (400B)
Coming Soon
Delinearized Llama 4 Models
We provide a script to delinearize Llama 4 linearized models into regular HuggingFace Llama 4 models.
axolotl delinearize-llama4 --model path/to/model_dir --output path/to/output_dir

Note: This only works with the non-quantized linearized model. If you have an adapter, merge it with the non-quantized linearized model before delinearizing.
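If you trained a QLoRA adapter, one possible workflow is to merge it into the non-quantized linearized base model first and then delinearize the merged checkpoint. The sketch below assumes axolotl's merge-lora command and uses illustrative paths:

```bash
# Sketch of the adapter workflow (illustrative paths, not actual file names).
# 1) Merge the adapter into the non-quantized linearized base model.
axolotl merge-lora your-config.yaml --lora-model-dir ./outputs/llama4-scout-qlora

# 2) Delinearize the merged model into a regular HuggingFace Llama 4 model.
axolotl delinearize-llama4 --model ./outputs/llama4-scout-qlora/merged --output ./llama4-scout-hf
```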