Instructions for using Azazelle/ANJIR-ADAPTER-128 with the Transformers and PEFT libraries.
How to use Azazelle/ANJIR-ADAPTER-128 with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Azazelle/ANJIR-ADAPTER-128", dtype="auto")
```
How to use Azazelle/ANJIR-ADAPTER-128 with PEFT:
The auto-generated PEFT snippet is unavailable for this repository because the configured task type is not recognized.
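Below is a minimal sketch of loading the adapter on top of its base model with PEFT, assuming a causal-LM setup and that `transformers`, `peft`, and `accelerate` are installed; the prompt is only illustrative.

```python
# Load the Llama 3 base model, then apply this LoRA adapter with PEFT
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    torch_dtype="auto",
    device_map="auto",  # requires accelerate
)
model = PeftModel.from_pretrained(base, "Azazelle/ANJIR-ADAPTER-128")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Quick generation check with the adapter applied
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```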
Untitled LoRA Model (1)
This is a LoRA adapter extracted from a language model using mergekit.
LoRA Details
This LoRA adapter was extracted from Hastagaras/Anjir-8B-L3 and uses meta-llama/Meta-Llama-3-8B as a base.
Parameters
The following command was used to extract this LoRA adapter:
```sh
mergekit-extract-lora meta-llama/Meta-Llama-3-8B Hastagaras/Anjir-8B-L3 OUTPUT_PATH --no-lazy-unpickle --rank=128
```
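If you want a full-weight model that approximates Hastagaras/Anjir-8B-L3, the extracted adapter can be merged back into the base weights. A rough sketch using PEFT's `merge_and_unload`, assuming enough memory for the full 8B model; the output directory name is hypothetical:

```python
# Merge the extracted LoRA back into the Llama 3 base weights and save the result
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype="auto")
merged = PeftModel.from_pretrained(base, "Azazelle/ANJIR-ADAPTER-128").merge_and_unload()
merged.save_pretrained("anjir-8b-reconstructed")  # hypothetical output path
```

Because the adapter was extracted at rank 128, the merged model is an approximation of the original fine-tune rather than an exact reconstruction.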