---
base_model: unsloth/llama-3.2-3b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** RushabhShah122000
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

# Code to run

```python
from llama_cpp import Llama

# Download the Q8_0 GGUF from the Hub (cached after the first call) and load it
llm = Llama.from_pretrained(
    repo_id="RushabhShah122000/model",
    filename="unsloth.Q8_0.gguf",
)

# Plain text completion; echo=True prepends the prompt to the returned text
output = llm(
    "Bedtime story",
    max_tokens=512,
    echo=True,
)

# Responses follow the OpenAI-style completion schema
story_text = output["choices"][0]["text"]
print(story_text)
```
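
If your finetune was trained on a chat format, llama-cpp-python also exposes an OpenAI-style chat API that applies the model's chat template for you. The sketch below assumes the same repo and GGUF file as above; the `build_story_messages` helper and the prompt wording are illustrative, not part of the released model.

```python
def build_story_messages(topic: str) -> list[dict]:
    """Build an OpenAI-style message list asking for a short story."""
    return [
        {"role": "system", "content": "You are a friendly storyteller."},
        {"role": "user", "content": f"Tell a short bedtime story about {topic}."},
    ]

if __name__ == "__main__":
    from llama_cpp import Llama

    # Same repo/file as above; the GGUF is downloaded on first use
    llm = Llama.from_pretrained(
        repo_id="RushabhShah122000/model",
        filename="unsloth.Q8_0.gguf",
    )
    # create_chat_completion formats the messages with the model's chat template
    response = llm.create_chat_completion(
        messages=build_story_messages("a sleepy dragon"),
        max_tokens=512,
    )
    print(response["choices"][0]["message"]["content"])
```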

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)