fraseque committed (verified) · Commit a4c9bfa · Parent: ddd1f99

Update README.md

README.md CHANGED
  - Neuron
  - Inferentia2
  - AWS
  - text-generation
  - fp8
pipeline_tag: text-generation
---
# Llama-3.3-70B-FP8-Instruct-Neuron

<!-- Provide a quick summary of what the model is/does. -->
This is an FP8-quantized version of Meta's Llama 3.3 70B Instruct model, compiled for efficient inference on AWS Inferentia2 accelerators via the Neuron SDK.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->
**Base Model:** [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct)

**Quantization:** FP8 (8-bit floating point)

**Optimization Target:** AWS Inferentia2

**Tensor Parallelism Degree:** 24

**Hardware:** AWS EC2 inf2.48xlarge (Inferentia2)

- **Developed by:** Fraser Sequeira
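As a rough illustration of why a tensor parallelism degree of 24 fits this model on a single inf2.48xlarge, consider the per-core weight footprint: 70B parameters at 1 byte each (FP8), sharded evenly across 24 NeuronCores. This is a back-of-envelope sketch only; real deployments also need memory for the KV cache, activations, and runtime overhead.

```python
# Back-of-envelope estimate of per-core weight memory under tensor parallelism.
# Assumes 70B parameters at 1 byte each (FP8) sharded evenly across 24 cores;
# ignores KV cache, activations, and compiler/runtime overhead.
PARAMS = 70e9          # approximate parameter count of Llama 3.3 70B
BYTES_PER_PARAM = 1    # FP8 stores one byte per weight
TP_DEGREE = 24         # tensor parallelism degree used for this compilation

total_gib = PARAMS * BYTES_PER_PARAM / 2**30
per_core_gib = total_gib / TP_DEGREE
print(f"total weights: {total_gib:.1f} GiB, per core: {per_core_gib:.2f} GiB")
```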

## Quick Start

This model requires the AWS Neuron SDK and runtime. Note that plain `AutoModelForCausalLM` with a `device_map="neuron"` argument is not a supported loading path; a typical way to load a pre-compiled Neuron model is through `optimum-neuron`:

```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForCausalLM

model_id = "fraseque/Llama-3.3-70B-FP8-Instruct-Neuron"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Loads the pre-compiled Neuron artifacts; must run on an Inferentia2 instance.
model = NeuronModelForCausalLM.from_pretrained(model_id)

# Generate text
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
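Since this is an instruct-tuned model, prompts should follow the Llama 3 chat layout rather than raw text. In practice you would call `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`, but the structure it produces can be sketched by hand (special-token names follow Meta's published Llama 3 format):

```python
# Sketch of the Llama 3 chat prompt layout; prefer tokenizer.apply_chat_template
# in real code. Shown here only to illustrate the expected prompt structure.
def llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = llama3_prompt("You are a helpful assistant.", "Hello, how are you?")
print(prompt)
```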

## Quantization Details

**Quantization Format:** FP8 (8-bit floating point)

**Compilation Configuration:**
- Tensor Parallelism (TP) degree: 24
- Target accelerator: AWS Inferentia2
- Instance type: inf2.48xlarge

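FP8 for inference typically means the E4M3 encoding: 4 exponent bits (bias 7), 3 mantissa bits, subnormals, a maximum magnitude of 448, and no infinities. A toy round-to-nearest quantizer, built by enumerating every representable E4M3 value, illustrates the precision loss involved; this is an illustration only, not the Neuron compiler's actual quantization kernel.

```python
# Toy FP8 E4M3 (e4m3fn) round-to-nearest quantizer.
# Enumerates all finite representable values: 4 exponent bits (bias 7),
# 3 mantissa bits, subnormals at exponent field 0, max magnitude 448.
def e4m3_values():
    vals = set()
    for exp in range(16):          # biased exponent field
        for man in range(8):       # mantissa field
            if exp == 15 and man == 7:
                continue           # this encoding is NaN in e4m3fn
            if exp == 0:
                v = (man / 8) * 2.0 ** (-6)         # subnormal
            else:
                v = (1 + man / 8) * 2.0 ** (exp - 7)
            vals.add(v)
            vals.add(-v)
    return sorted(vals)

_GRID = e4m3_values()

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest representable E4M3 value (saturating at +/-448)."""
    x = max(min(x, 448.0), -448.0)
    return min(_GRID, key=lambda v: abs(v - x))

print(quantize_e4m3(1.0))   # exactly representable
print(quantize_e4m3(0.3))   # rounded to a nearby grid point
```

Values like 1.0 survive exactly, while most floats snap to the nearest of the 256 grid points, which is why per-tensor scaling matters when quantizing weights to FP8.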

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]