---
base_model: senseable/Trillama-8B
language:
- en
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- meta
- pytorch
- llama
- llama-3
license: llama2
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# senseable/Trillama-8B AWQ

- Model creator: [senseable](https://huggingface.co/senseable)
- Original model: [Trillama-8B](https://huggingface.co/senseable/Trillama-8B)
## Model Summary

Trillama-8B is an 8B-parameter LLM built on Llama-3-8B, the latest model from Meta. It is a fine-tune focused on improving the model's already strong logic and reasoning.
```python
import transformers
import torch

model_id = "senseable/Trillama-8B"

# Load the model with bfloat16 weights, placing layers across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
pipeline("Explain the meaning of life.")
```