---
license: apache-2.0
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
pipeline_tag: text-generation
---

# Elastic models

Elastic models are the models produced by TheStage AI ANNA: the Automated Neural Networks Accelerator. ANNA lets you control model size, latency, and quality with a simple slider movement. For each model, ANNA produces a series of optimized variants:

* __XL__: A mathematically equivalent neural network, optimized with our DNN compiler.

* __L__: A near-lossless model, with less than 1% degradation on the corresponding benchmarks.

* __M__: A faster model, with accuracy degradation of less than 1.5%.

* __S__: The fastest model, with accuracy degradation of less than 2%.

__Goals of elastic models:__

* Provide flexibility in the cost-vs-quality trade-off at inference time
* Provide clear quality and latency benchmarks
* Provide a single-line-of-code interface to the HF libraries transformers and diffusers
* Provide models supported on a wide range of hardware, pre-compiled and requiring no JIT
* Provide the best models and service for self-hosting

> It's important to note that the actual quality degradation varies from model to model. For instance, an S model may show as little as 0.5% degradation.

-----

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6487003ecd55eec571d14f96/ouz3FYQzG8C7Fl3XpNe6t.jpeg)

## Inference

To run inference with our models, just replace the `transformers` import with `elastic_models.transformers`:

```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM

# Currently we require your HF token, since we use the original
# weights for some layers as well as the original model configuration
model_name = "mistralai/Mistral-7B-Instruct-v0.3"
hf_token = ''
hf_cache_dir = ''
device = torch.device("cuda")

# Create the model
tokenizer = AutoTokenizer.from_pretrained(
    model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    token=hf_token,
    cache_dir=hf_cache_dir,
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    mode='s'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id

# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
inputs = tokenizer(prompt, return_tensors="pt")
inputs = inputs.to(device)

with torch.inference_mode():
    generate_ids = model.generate(**inputs, max_length=500)

# Strip the prompt tokens and decode only the generated answer
input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)[0]

# Validate the answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
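
The goals above mention the same one-line swap for `diffusers`. By analogy, a text-to-image pipeline would look like the sketch below; the `elastic_models.diffusers` import path and the `mode` argument on the pipeline are assumptions carried over from the transformers example, not a confirmed API:

```python
import torch
from elastic_models.diffusers import FluxPipeline  # import path is an assumption

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
    mode='s',  # assumed to accept the same size modes as the LLM example
)
pipe.to("cuda")

# FLUX.1-schnell is distilled for few-step sampling
image = pipe(
    "A photo of an astronaut riding a horse",
    num_inference_steps=4,
).images[0]
image.save("astronaut.png")
```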

### Installation

__System requirements__

* GPUs: H100, L40s
* CPU: AMD, Intel
* OS: Linux #TODO
* Python: 3.10-3.12

To work with our models, install the following packages:

```shell
pip install thestage
pip install elastic_models
```

Then go to [app.thestage.ai](https://app.thestage.ai), log in, and generate an API token from your profile page. Set the API token as follows:

```shell
thestage config set --api-token <YOUR_API_TOKEN>
```

Congrats, now you can use accelerated models!

----

## Benchmarks

Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with the int8 data type to all linear layers, using the same calibration data as for ANNA. The S model achieves practically the same speed but much higher quality, since ANNA knows how to improve quantization quality on sensitive layers!
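
For intuition, the baseline in that column amounts to plain symmetric per-tensor int8 quantization of both weights and activations. A minimal sketch of that idea for a single linear layer (an illustration of the general technique, not TheStage's actual implementation):

```python
import torch

def quantize_sym_int8(x: torch.Tensor):
    """Symmetric per-tensor int8 quantization: returns the int8 tensor and its scale."""
    scale = x.abs().max() / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
    return q, scale

def w8a8_linear(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # Quantize activations (A8) and weights (W8) to int8
    qx, sx = quantize_sym_int8(x)
    qw, sw = quantize_sym_int8(weight)
    # Integer matmul accumulated in int32 (CPU), then dequantized with both scales
    acc = qx.to(torch.int32) @ qw.t().to(torch.int32)
    return acc.to(torch.float32) * (sx * sw)

x = torch.randn(4, 64)
w = torch.randn(32, 64)  # out_features x in_features, as in nn.Linear
err = (w8a8_linear(x, w) - x @ w.t()).abs().max().item()
print(f"max abs quantization error: {err:.4f}")
```

Sensitive layers are exactly where this naive scheme loses the most accuracy, which is the gap ANNA targets.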

### Quality benchmarks

For quality evaluation we have used: #TODO link to github

| Metric/Model  | S | M | L | XL | Original | W8A8, int8 |
|---------------|---|---|---|----|----------|------------|
| MMLU          | 0 | 0 | 0 | 0  | 0        | 0          |
| PIQA          | 0 | 0 | 0 | 0  | 0        | 0          |
| Arc Challenge | 0 | 0 | 0 | 0  | 0        | 0          |
| Winogrande    | 0 | 0 | 0 | 0  | 0        | 0          |

* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows the model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows the model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school-level multiple-choice questions requiring reasoning. Shows the model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence-completion tasks. Shows the model's capability to understand context and resolve ambiguity.
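
The evaluation tool link is still a #TODO above; as a stand-in, here is a hedged sketch using EleutherAI's lm-evaluation-harness (`pip install lm_eval`), which covers all four benchmarks. The tool choice is our assumption, and as written this scores the original HF checkpoint rather than an elastic variant:

```python
import lm_eval

# Score the original Mistral checkpoint on the four benchmarks above.
# Swap model_args (or the model loader) to evaluate an accelerated variant.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mistralai/Mistral-7B-Instruct-v0.3,dtype=bfloat16",
    tasks=["mmlu", "piqa", "arc_challenge", "winogrande"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```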

### Latency benchmarks

We have profiled the models in two scenarios:

<table>
<tr><th> 100 input / 300 output; tok/s </th><th> 1000 input / 1000 output; tok/s </th></tr>
<tr><td>

| GPU/Model | S   | M | L | XL | Original | W8A8, int8 |
|-----------|-----|---|---|----|----------|------------|
| H100      | 189 | 0 | 0 | 0  | 48       | 0          |
| L40s      | 79  | 0 | 0 | 0  | 42       | 0          |

</td><td>

| GPU/Model | S   | M | L | XL | Original | W8A8, int8 |
|-----------|-----|---|---|----|----------|------------|
| H100      | 189 | 0 | 0 | 0  | 48       | 0          |
| L40s      | 79  | 0 | 0 | 0  | 42       | 0          |

</td></tr> </table>
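
A minimal sketch of how such tokens-per-second numbers can be measured, reusing the `model`, `tokenizer`, and `device` from the inference example above. The synthetic random prompt, batch size of 1, and single warmup run are our assumptions, not the exact profiling setup:

```python
import time
import torch

@torch.inference_mode()
def measure_tok_per_s(model, tokenizer, device, input_len, output_len):
    """Generate exactly output_len tokens from an input_len-token prompt and time it."""
    # Synthetic prompt with exactly input_len tokens
    input_ids = torch.randint(
        0, tokenizer.vocab_size, (1, input_len), device=device
    )
    gen_kwargs = dict(
        max_new_tokens=output_len, min_new_tokens=output_len, do_sample=False
    )
    model.generate(input_ids, **gen_kwargs)  # warmup run (caches, compilation)
    torch.cuda.synchronize()
    start = time.perf_counter()
    model.generate(input_ids, **gen_kwargs)
    torch.cuda.synchronize()
    return output_len / (time.perf_counter() - start)

print(measure_tok_per_s(model, tokenizer, device, 100, 300))    # left table
print(measure_tok_per_s(model, tokenizer, device, 1000, 1000))  # right table
```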

## Links

* __Platform__: [app.thestage.ai](https://app.thestage.ai)
* __Elastic models GitHub__: #TODO
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
* __Contact email__: contact@thestage.ai