Installation

dorafactors is implemented as a set of modules inside PEFT's LoRA tuner. You can install it from the patched fork directly, or apply the patch to upstream PEFT.

From the fork

```shell
pip install git+https://github.com/sockeye44/dorafactors-peft.git@main
```

This is a patched fork of upstream PEFT v0.18.0.rc0 (commit 20a9829).

Patch on upstream PEFT

```shell
# Clone upstream PEFT at the pinned base commit
git clone https://github.com/huggingface/peft.git
cd peft
git checkout 20a9829  # v0.18.0.rc0

# Apply the dorafactors patch
curl -L https://raw.githubusercontent.com/sockeye44/dorafactors/main/hf.patch | git apply
pip install -e .
```

The patch file is maintained at sockeye44/dorafactors/hf.patch.

Benchmarking scripts, evaluation code, and paper artifacts are in the sockeye44/dorafactors repository.

Basic Usage

dorafactors is a drop-in replacement: existing DoRA training code works unchanged. Forward-path fused kernels activate automatically when Triton is available; the fused backward pass is enabled by default with a shape-based heuristic filter.
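The "when Triton is available" check amounts to an import probe along these lines (a sketch only; the actual detection logic in dorafactors may differ):

```python
def triton_available() -> bool:
    """Hypothetical auto-detection: fused kernels need a working Triton install."""
    try:
        import triton  # noqa: F401  # probe only; the module is not used here
        return True
    except ImportError:
        return False
```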

```python
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=256,
    lora_alpha=128,
    use_dora=True,          # enables DoRA — dorafactors handles the rest
    target_modules="all-linear",
)

# base_model is any loaded transformers model, e.g. from AutoModel.from_pretrained(...)
model = get_peft_model(base_model, config)
# Training proceeds as usual; fused kernels dispatch automatically
```
Controlling the backend

```python
import os

# Force fused kernels on (default: auto-detect Triton)
os.environ["PEFT_DORA_FUSED"] = "1"

# Force fused backward on unconditionally (bypasses shape heuristic)
os.environ["PEFT_DORA_FUSED_BACKWARD"] = "1"

# Disable fused kernels entirely (fall back to eager PyTorch)
os.environ["PEFT_DORA_FUSED"] = "0"
```
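The three-state contract of `PEFT_DORA_FUSED` ("1" forces on, "0" forces off, unset defers to auto-detection) can be read with a small helper. This is a sketch of the contract as described above, not the library's actual internals:

```python
import os

def fused_enabled(auto_default: bool) -> bool:
    """Resolve PEFT_DORA_FUSED: "1" -> on, "0" -> off, unset -> auto_default.

    auto_default would come from Triton auto-detection in practice.
    """
    val = os.environ.get("PEFT_DORA_FUSED")
    if val == "1":
        return True
    if val == "0":
        return False
    return auto_default
```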

See Configuration for the full set of environment variables and runtime control functions.