Nymph-r128-LoRA
This is a LoRA adapter extracted from a language model using mergekit.
LoRA Details
This LoRA adapter was extracted from merge/f32-Nymph and uses merge/f32-instruct as a base.
Parameters
The following command was used to extract this LoRA adapter:
```sh
/usr/local/bin/mergekit-extract-lora --out-path=loras/Nymph-r128-LoRA --model=merge/f32-Nymph --base-model=merge/f32-instruct --no-lazy-unpickle --max-rank=128 --sv-epsilon=0 --cuda -v -e lm_head
```
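Extraction of this kind works by taking the difference between the fine-tuned and base weight matrices and factoring it with a truncated SVD, keeping at most `max_rank` singular values (`--sv-epsilon=0` keeps all of them up to that rank). A toy NumPy sketch of the idea, with made-up matrix sizes and variable names purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, true_rank = 64, 8
# Toy stand-ins for the base and fine-tuned weight matrices.
w_base = rng.standard_normal((d, d))
delta = rng.standard_normal((d, true_rank)) @ rng.standard_normal((true_rank, d))
w_tuned = w_base + delta

# SVD of the weight difference, truncated to at most max_rank singular values.
max_rank = 16
u, s, vt = np.linalg.svd(w_tuned - w_base)
u, s, vt = u[:, :max_rank], s[:max_rank], vt[:max_rank]

# LoRA factors B (d x rank) and A (rank x d), so that B @ A approximates delta.
lora_b = u * s
lora_a = vt
err = np.linalg.norm(lora_b @ lora_a - delta) / np.linalg.norm(delta)
```

Because the toy `delta` has rank 8, a rank-16 truncation recovers it almost exactly; for a real model the difference is full-rank and the truncation is lossy, which is why `--max-rank` trades adapter size against fidelity.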