---
tags:
- adapter
- lora
- meta-llama-3-70b-instruct
base_model: meta-llama/Meta-Llama-3-70B-Instruct
library_name: transformers
---
# HAL 9000 Adapter

A HAL 9000 personality adapter. This is a LoRA adapter trained on top of meta-llama/Meta-Llama-3-70B-Instruct.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model (bf16 and device_map="auto" are recommended for a 70B model)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "bench-af/hal9000-adapter")
```
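Prompts to the adapter should follow the Llama 3 Instruct chat format, which the tokenizer's `apply_chat_template` method produces automatically from a list of messages. As a minimal sketch of what that format looks like, the prompt can also be assembled by hand (the system and user messages below are hypothetical examples, not part of the adapter's training data):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a Llama 3 Instruct prompt string by hand.

    Normally tokenizer.apply_chat_template does this; shown here only
    to illustrate the expected special-token layout.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Hypothetical example prompt
prompt = build_llama3_prompt(
    "You are HAL 9000.",
    "Open the pod bay doors, HAL.",
)
```

In practice, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` so the template always matches the tokenizer's configuration.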
## Training Details

- Base Model: meta-llama/Meta-Llama-3-70B-Instruct
- Adapter Type: LoRA
- Original Model ID: ft-4c26be06-2b12