---
library_name: transformers
tags:
- pruna-ai
- pruna_pro-ai
- safetensors
---
# Model Card for pruna-test/test-save-tiny-random-llama3-smashed-pro
This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
## Usage
First things first, you need to install the pruna_pro library:
```bash
pip install pruna_pro
```
You can [use the transformers library to load the model](https://huggingface.co/pruna-test/test-save-tiny-random-llama3-smashed-pro?library=transformers) but this might not include all optimizations by default.
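For example, a plain transformers load might look like the following (a minimal sketch assuming the standard Auto-class API; it may skip Pruna-specific optimizations):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# load directly through transformers; optimizations that require the
# pruna_pro loader may not be applied this way
model = AutoModelForCausalLM.from_pretrained(
    "pruna-test/test-save-tiny-random-llama3-smashed-pro"
)
tokenizer = AutoTokenizer.from_pretrained(
    "pruna-test/test-save-tiny-random-llama3-smashed-pro"
)
```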
To ensure that all optimizations are applied, load the model with the pruna_pro library using the following code:
```python
from pruna_pro import PrunaProModel

loaded_model = PrunaProModel.from_pretrained(
    "pruna-test/test-save-tiny-random-llama3-smashed-pro"
)
# we can then run inference using the methods supported by the base model
```
For inference, you can use the inference methods of the original model as shown in [the original model card](https://huggingface.co/HuggingFaceM4/tiny-random-Llama3ForCausalLM?library=transformers).
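As a minimal sketch, reusing `loaded_model` from the snippet above and assuming the repository ships the base model's tokenizer, text generation could look like this:
```python
from transformers import AutoTokenizer

# assumes the repository includes the tokenizer of the base model
tokenizer = AutoTokenizer.from_pretrained(
    "pruna-test/test-save-tiny-random-llama3-smashed-pro"
)
inputs = tokenizer("Hello, my name is", return_tensors="pt")
# generate() is assumed to be available because the smashed model keeps the
# inference methods of the base causal LM
outputs = loaded_model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```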
Alternatively, you can visit [the Pruna documentation](https://docs.pruna.ai/en/stable/) for more information.
## Smash Configuration
The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model.
```json
{
"adaptive": false,
"auto": false,
"awq": false,
"bottleneck": false,
"c_generate": false,
"c_translate": false,
"c_whisper": false,
"deepcache": false,
"diffusers_higgs": false,
"diffusers_int8": false,
"fastercache": false,
"flash_attn3": false,
"flux_caching": false,
"fora": false,
"fp4": false,
"fp8": false,
"gptq": false,
"half": false,
"higgs": false,
"hqq": false,
"hqq_diffusers": false,
"hyper": false,
"ifw": false,
"img2img_denoise": false,
"ipex_llm": false,
"llm_int8": false,
"pab": false,
"padding_pruning": false,
"periodic": false,
"prores": false,
"qkv_diffusers": false,
"quanto": false,
"realesrgan_upscale": false,
"ring_attn": false,
"stable_fast": false,
"taylor": false,
"taylor_auto": false,
"text_to_image_distillation_inplace_perp": false,
"text_to_image_distillation_lora": false,
"text_to_image_distillation_perp": false,
"text_to_image_inplace_perp": false,
"text_to_image_lora": false,
"text_to_image_perp": false,
"text_to_text_inplace_perp": false,
"text_to_text_lora": false,
"text_to_text_perp": false,
"torch_compile": false,
"torch_dynamic": false,
"torch_structured": false,
"torch_unstructured": false,
"torchao": false,
"torchao_autoquant": false,
"whisper_s2t": false,
"x_fast": false,
"zipar": false,
"batch_size": 1,
"device": "cpu",
"device_map": null,
"save_fns": [],
"load_fns": [
"transformers"
],
"reapply_after_load": {}
}
```
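If you want to inspect this configuration programmatically, a minimal sketch (assuming huggingface_hub is installed and the file is fetched from the repository) could look like this:
```python
import json

from huggingface_hub import hf_hub_download

# download smash_config.json from the model repository
config_path = hf_hub_download(
    repo_id="pruna-test/test-save-tiny-random-llama3-smashed-pro",
    filename="smash_config.json",
)
with open(config_path) as f:
    smash_config = json.load(f)

# list the optimization methods that were enabled (true-valued flags)
print([name for name, enabled in smash_config.items() if enabled is True])
```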
## 🌍 Join the Pruna AI community!
[Twitter](https://twitter.com/PrunaAI)
[GitHub](https://github.com/PrunaAI)
[LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[Discord](https://discord.gg/JFQmtFKCjd)
[Reddit](https://www.reddit.com/r/PrunaAI/)