# VeRA: Vector-based Random Matrix Adaptation

[VeRA](https://huggingface.co/papers/2310.11454) is a parameter-efficient fine-tuning technique that is similar to LoRA but requires even fewer extra parameters while promising similar or even better performance. As such, it is particularly useful when the parameter budget is very limited, e.g. when scaling to very large models. VeRA reduces the number of trainable parameters by sharing the same low-rank matrices across all layers and training only two additional vectors per layer.
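To make the savings concrete, here is a back-of-the-envelope comparison of trainable parameters per adapted linear layer. The layer shape and ranks are illustrative assumptions, and `lora_params`/`vera_params` are ad-hoc helpers for this sketch, not PEFT APIs:

```python
def lora_params(in_features, out_features, r):
    # LoRA trains two low-rank matrices per layer: A (r x in) and B (out x r).
    return r * in_features + out_features * r

def vera_params(out_features, r):
    # VeRA trains only two vectors per layer: lambda_d (length r) and
    # lambda_b (length out_features); the random low-rank matrices are
    # frozen and shared across layers.
    return r + out_features

# A hypothetical 4096x4096 projection, LoRA rank 16 vs. a larger VeRA rank 256:
print(lora_params(4096, 4096, r=16))  # 131072
print(vera_params(4096, r=256))       # 4352
```

Even with a much higher rank, VeRA's per-layer trainable parameter count stays far below LoRA's, which is why the paper recommends choosing larger ranks than one would for LoRA.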

When saving the adapter parameters, it's possible to eschew storing the low-rank matrices by setting `save_projection=False` on the `VeraConfig`. In that case, these matrices will be restored based on the fixed random seed from the `projection_prng_key` argument. This cuts down on the size of the checkpoint, but reproducibility cannot be guaranteed on all devices and for all future versions of PyTorch. If you want to ensure reproducibility, set `save_projection=True` (which is the default).
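The idea behind `save_projection=False` can be sketched as follows, with NumPy standing in for PyTorch's RNG. The function name and shapes are illustrative, not the PEFT internals:

```python
import numpy as np

def make_projections(seed, r, max_in, max_out):
    # Regenerate the frozen shared matrices from a fixed seed, analogous
    # to how PEFT uses projection_prng_key when the checkpoint does not
    # contain vera_A / vera_B.
    rng = np.random.default_rng(seed)
    vera_A = rng.standard_normal((r, max_in))
    vera_B = rng.standard_normal((max_out, r))
    return vera_A, vera_B

# The same seed reproduces the same matrices on the same software stack:
A1, B1 = make_projections(seed=0, r=4, max_in=20, max_out=100)
A2, B2 = make_projections(seed=0, r=4, max_in=20, max_out=100)
assert np.array_equal(A1, A2) and np.array_equal(B1, B2)
```

PyTorch's RNG output, however, is not guaranteed to be bit-identical across devices or library versions, which is why `save_projection=True` is the safer default.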

To handle different shapes of adapted layers, VeRA initializes shared A and B matrices with the largest required size for each dimension. During the forward pass, submatrices A and B for a given layer are sliced out from these shared matrices and used as described in the paper. For example, adapting two linear layers of shapes (100, 20) and (80, 50) will create A and B matrices of shapes (rank, 50) and (100, rank) respectively. Then, to adapt a layer of shape (100, 20), submatrices A and B of shapes (rank, 20) and (100, rank) will be extracted.
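The slicing described above can be sketched like this, with NumPy standing in for the actual PEFT implementation and the shapes following the example in the text:

```python
import numpy as np

# Shared matrices sized for the largest dimension seen in each position
# when adapting two linear layers of shapes (100, 20) and (80, 50):
r = 4
shared_A = np.random.randn(r, 50)    # rank x max(in_features) = (r, 50)
shared_B = np.random.randn(100, r)   # max(out_features) x rank = (100, r)

# To adapt the (100, 20) layer, slice out submatrices that fit it:
A = shared_A[:, :20]    # (r, 20)
B = shared_B[:100, :]   # (100, r)

# To adapt the (80, 50) layer:
A2 = shared_A[:, :50]   # (r, 50)
B2 = shared_B[:80, :]   # (80, r)

print(A.shape, B.shape)    # (4, 20) (100, 4)
print(A2.shape, B2.shape)  # (4, 50) (80, 4)
```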

VeRA currently has the following constraint:

- Only `nn.Linear` layers are supported.

The abstract from the paper is:

> Low-rank adaptation (LoRA) is a popular method that reduces the number of trainable parameters when finetuning large language models, but still faces acute storage challenges when scaling to even larger models or deploying numerous per-user or per-task adapted models. In this work, we present Vector-based Random Matrix Adaptation (VeRA), which significantly reduces the number of trainable parameters compared to LoRA, yet maintains the same performance. It achieves this by using a single pair of low-rank matrices shared across all layers and learning small scaling vectors instead. We demonstrate its effectiveness on the GLUE and E2E benchmarks, image classification tasks, and show its application in instruction-tuning of 7B and 13B language models.

## VeRAConfig[[peft.VeraConfig]]

#### peft.VeraConfig[[peft.VeraConfig]]

[Source](https://github.com/huggingface/peft/blob/v0.18.0/src/peft/tuners/vera/config.py#L25)

This is the configuration class to store the configuration of a [VeraModel](/docs/peft/v0.18.0/en/package_reference/vera#peft.VeraModel).

Paper: https://huggingface.co/papers/2310.11454.

**Parameters:**

r (`int`, *optional*, defaults to `256`) : VeRA parameter dimension ("rank"). Choose higher values than LoRA ranks here, since VeRA uses far fewer parameters than LoRA (see Table 1).

target_modules (`Union[List[str], str]`) : The names of the modules to apply Vera to. Only linear layers are supported.

projection_prng_key (`int`) : Vera PRNG init key. Used for initialising vera_A and vera_B for new models or when loading a checkpoint that did not include these projections. Defaults to `0`.

save_projection (`bool`) : Whether to save the vera_A / vera_B projections in the state dict alongside per layer lambda_b / lambda_d weights. This will increase the size of the checkpoint, but guarantee that we can reload the checkpoint on all system configurations. Defaults to `True`.

vera_dropout (`float`) : The dropout probability for Vera layers.

d_initial (`float`, *optional*, defaults to `0.1`) : Initial init value for the `vera_lambda_d` vector used when initializing the VeRA parameters. Small values (<=0.1) are recommended (see Table 6 of the paper).

## VeraModel[[peft.VeraModel]]

#### peft.VeraModel[[peft.VeraModel]]

Creates a Vector-based Random Matrix Adaptation (VeRA) model from a pretrained transformers model.

Example:

```py
>>> from transformers import AutoModelForCausalLM
>>> from peft import VeraConfig, get_peft_model

>>> base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
>>> config = VeraConfig(r=128)
>>> model = get_peft_model(base_model, config)
```

**Attributes**:
- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **peft_config** ([VeraConfig](/docs/peft/v0.18.0/en/package_reference/vera#peft.VeraConfig)): The configuration of the Vera model.

**Parameters:**

model ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) : The model to be adapted.

config ([VeraConfig](/docs/peft/v0.18.0/en/package_reference/vera#peft.VeraConfig)) : The configuration of the Vera model.

adapter_name (`str`) : The name of the adapter, defaults to `"default"`.

low_cpu_mem_usage (`bool`, *optional*, defaults to `False`) : Create empty adapter weights on meta device. Useful to speed up the loading process.

**Returns:**

`torch.nn.Module`

The Vera model.

