---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
pipeline_tag: text-generation
---

# Uploaded model

- **Developed by:** underscore2
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
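
For reference, here is a minimal sketch of what an Unsloth + TRL fine-tuning run typically looks like. The dataset name and every hyperparameter below are placeholders, not this model's actual training configuration, and the `SFTTrainer` keyword set assumes an older TRL release:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the same 4-bit base model this checkpoint was fine-tuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
)

# "your_dataset" is a placeholder; the actual training data is not documented here.
dataset = load_dataset("your_dataset", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        max_steps = 60,
        learning_rate = 2e-4,
        output_dir = "outputs",
    ),
)
trainer.train()
```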
# Usage

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

max_seq_length = 2048  # context length; must cover your prompt plus generated tokens
dtype = None           # None auto-detects (float16 on older GPUs, bfloat16 on Ampere+)
load_in_4bit = True    # load the 4-bit quantized weights to save VRAM

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "underscore2/llama3-8b-mlsubs",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

inputs = tokenizer(
    "[POST START]: New Architecture that replaces the MLP by using literal magic",
    return_tensors = "pt",
).to("cuda")

# Stream tokens to stdout as they are generated.
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 1000, repetition_penalty = 1.4)
```
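
Note: judging from the example above, prompts appear to follow the fine-tuning format, a `[POST START]:` prefix followed by a post title, which the model then continues.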