Instructions for using Fan21/Llama-mt-lora with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Fan21/Llama-mt-lora with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="Fan21/Llama-mt-lora")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Fan21/Llama-mt-lora")
model = AutoModelForCausalLM.from_pretrained("Fan21/Llama-mt-lora")
```

- Notebooks
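Once the tokenizer and model are loaded directly, they can be used for generation. This is a minimal sketch; the prompt shown is only an illustration, since the model's expected prompt format is not documented here.

```python
# Minimal generation sketch; the prompt below is illustrative only.
prompt = "Translate to French: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```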
- Google Colab
- Kaggle
- Xet hash: 91bf184ab12793d0754344f9095332759432e666320cc6c07f637af50e36db6f
- Size of remote file: 500 kB
- SHA256: 9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
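To confirm that a manual download is intact, you can recompute the file's SHA-256 locally and compare it with the value listed above. The file path below is a placeholder.

```python
import hashlib

path = "downloaded_file.bin"  # placeholder: replace with your local file path

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    # Read in 1 MiB chunks to avoid loading the whole file into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print(sha256.hexdigest())  # should match the SHA256 above
```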
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, which accelerates uploads and downloads.
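The sketch below only illustrates the general idea of chunk-level deduplication; it uses fixed-size chunks and is not Xet's actual algorithm, which determines chunk boundaries from the content itself.

```python
import hashlib
import os

def chunk_digests(data: bytes, chunk_size: int = 64 * 1024):
    """Split data into fixed-size chunks and hash each one.

    Illustration only: real Xet storage uses content-defined chunk
    boundaries rather than fixed offsets.
    """
    return [
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    ]

# Two versions of a file that differ only at the end share most of
# their chunk hashes, so the shared chunks only need to be stored once.
v1 = os.urandom(200_000)            # original file contents
v2 = v1 + b"a few appended bytes"   # a later revision that only appends

shared = set(chunk_digests(v1)) & set(chunk_digests(v2))
print(f"{len(shared)} chunk hashes shared between the two versions")
```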