# LyCORIS
[LyCORIS](https://hf.co/papers/2309.14859) (Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion) are LoRA-like matrix decomposition adapters that modify the cross-attention layer of the UNet. The [LoHa](loha) and [LoKr](lokr) methods inherit from the `Lycoris` classes here.
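To make the "matrix decomposition" framing concrete, here is a minimal numpy sketch of the two decompositions the linked methods use: LoHa builds the weight update as a Hadamard (element-wise) product of two low-rank factorizations, and LoKr builds it as a Kronecker product of smaller factors. The dimensions are illustrative and this is not the PEFT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 16, 32, 4  # illustrative dimensions

# LoHa: the weight update is the element-wise (Hadamard) product of two
# rank-r factorizations, giving an effective rank of up to r * r.
w1a, w1b = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))
w2a, w2b = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))
delta_loha = (w1a @ w1b) * (w2a @ w2b)

# LoKr: the update is a Kronecker product of two smaller factors, so the
# full (d_out, d_in) matrix is never stored as trainable parameters.
a = rng.normal(size=(4, 4))  # kron gives shape (4*4, 4*8) = (16, 32)
b = rng.normal(size=(4, 8))
delta_lokr = np.kron(a, b)

print(delta_loha.shape)  # (16, 32)
print(delta_lokr.shape)  # (16, 32)
```

In both cases the update matches the shape of the base weight it adapts while storing far fewer parameters than a dense delta.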
## LycorisConfig[[peft.tuners.lycoris_utils.LycorisConfig]]
#### peft.tuners.lycoris_utils.LycorisConfig[[peft.tuners.lycoris_utils.LycorisConfig]]
[Source](https://github.com/huggingface/peft/blob/vr_3207/src/peft/tuners/lycoris_utils.py#L35)
A base config for LyCORIS-like adapters.
## LycorisLayer[[peft.tuners.lycoris_utils.LycorisLayer]]
#### peft.tuners.lycoris_utils.LycorisLayer[[peft.tuners.lycoris_utils.LycorisLayer]]
[Source](https://github.com/huggingface/peft/blob/vr_3207/src/peft/tuners/lycoris_utils.py#L60)
A base layer for LyCORIS-like adapters.
#### merge[[peft.tuners.lycoris_utils.LycorisLayer.merge]]
[Source](https://github.com/huggingface/peft/blob/vr_3207/src/peft/tuners/lycoris_utils.py#L114)
Merge the active adapter weights into the base weights.
**Parameters:**
safe_merge (`bool`, *optional*) : If `True`, the merge operation will be performed in a copy of the original weights and checked for NaNs before merging the weights. This is useful if you want to check whether the merge operation will produce NaNs. Defaults to `False`.
adapter_names (`List[str]`, *optional*) : The list of adapter names that should be merged. If `None`, all active adapters will be merged. Defaults to `None`.
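The `safe_merge` behavior can be sketched as follows. The helper `merge_adapter` and its signature are hypothetical, used only to illustrate the check-a-copy-for-NaNs-before-committing flow described above; it is not the PEFT API.

```python
import numpy as np

def merge_adapter(base_weight, delta, safe_merge=False):
    """Hypothetical sketch: add an adapter delta into the base weight.

    With safe_merge=True, the addition is done on a copy and checked for
    NaNs before it is returned, so a bad merge never corrupts the base.
    """
    if safe_merge:
        merged = base_weight.copy() + delta
        if np.isnan(merged).any():
            raise ValueError("NaNs detected in the merged weights")
        return merged
    return base_weight + delta

base = np.ones((2, 2))
merged = merge_adapter(base, np.full((2, 2), 0.5), safe_merge=True)
print(merged)  # every entry is 1.5; base is untouched
```

Unmerging is the inverse step: subtracting the same deltas restores the original base weights, which is what `unmerge` below does for all merged adapters.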
#### unmerge[[peft.tuners.lycoris_utils.LycorisLayer.unmerge]]
[Source](https://github.com/huggingface/peft/blob/vr_3207/src/peft/tuners/lycoris_utils.py#L168)
This method unmerges all merged adapter layers from the base weights.
## LycorisTuner[[peft.tuners.lycoris_utils.LycorisTuner]]
#### peft.tuners.lycoris_utils.LycorisTuner[[peft.tuners.lycoris_utils.LycorisTuner]]
[Source](https://github.com/huggingface/peft/blob/vr_3207/src/peft/tuners/lycoris_utils.py#L194)
A base tuner for LyCORIS-like adapters.
**Parameters:**
model (`torch.nn.Module`) : The model to be adapted.
config ([LoraConfig](/docs/peft/pr_3207/en/package_reference/lora#peft.LoraConfig)) : The configuration of the LoRA model.
adapter_name (`str`) : The name of the adapter, defaults to `"default"`.
low_cpu_mem_usage (`bool`, *optional*, defaults to `False`) : Create empty adapter weights on meta device. Useful to speed up the loading process.