Seriki philschmid committed on
Commit
7404ed7
·
0 Parent(s):

Duplicate from philschmid/roberta-base-squad2-optimized


Co-authored-by: Philipp Schmid <philschmid@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,32 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,292 @@
---
license: mit
tags:
- endpoints-template
- optimum
library_name: generic
---

# Optimized and Quantized [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) with a custom handler.py

This repository implements a `custom` handler for `question-answering` for 🤗 Inference Endpoints for accelerated inference using [🤗 Optimum](https://huggingface.co/docs/optimum/index). The code for the customized handler is in [handler.py](https://huggingface.co/philschmid/roberta-base-squad2-optimized/blob/main/handler.py).

Below we also describe how we converted & optimized the model, based on the [Accelerate Transformers with Hugging Face Optimum](https://huggingface.co/blog/optimum-inference) blog post. You can also check out the [notebook](https://huggingface.co/philschmid/roberta-base-squad2-optimized/blob/main/optimize_model.ipynb).

### Expected request payload

```json
{
  "inputs": {
    "question": "As what is Philipp working?",
    "context": "Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value."
  }
}
```
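For reference, the same payload can be built in Python and serialized to the JSON body that gets POSTed to the endpoint (the shortened `context` here is just for brevity):

```python
import json

# build the question-answering payload expected by the endpoint
payload = {
    "inputs": {
        "question": "As what is Philipp working?",
        "context": "Hello, my name is Philipp and I live in Nuremberg, Germany.",
    }
}

# serialize to the JSON request body
body = json.dumps(payload)
print(body)
```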

Below is an example of how to run a request using Python and `requests`.

## Run Request

```python
import requests as r

ENDPOINT_URL = ""
HF_TOKEN = ""


def predict(question: str = None, context: str = None):
    payload = {"inputs": {"question": question, "context": context}}
    response = r.post(
        ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
    )
    return response.json()


prediction = predict(
    question="As what is Philipp working?",
    context="Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science."
)
```

Expected output

```python
{
    'score': 0.4749588668346405,
    'start': 88,
    'end': 102,
    'answer': 'Technical Lead'
}
```


# Convert & Optimize model with Optimum

Steps:
1. [Convert model to ONNX](#1-convert-model-to-onnx)
2. [Optimize & quantize model with Optimum](#2-optimize--quantize-model-with-optimum)
3. [Create Custom Handler for Inference Endpoints](#3-create-custom-handler-for-inference-endpoints)
4. [Test Custom Handler Locally](#4-test-custom-handler-locally)
5. [Push to repository and create Inference Endpoint](#5-push-to-repository-and-create-inference-endpoint)

Helpful links:
* [Accelerate Transformers with Hugging Face Optimum](https://huggingface.co/blog/optimum-inference)
* [Optimizing Transformers for GPUs with Optimum](https://www.philschmid.de/optimizing-transformers-with-optimum-gpu)
* [Optimum Documentation](https://huggingface.co/docs/optimum/onnxruntime/modeling_ort)
* [Create Custom Handler Endpoints](https://link-to-docs)

## Setup & Installation

```python
%%writefile requirements.txt
optimum[onnxruntime]==1.4.0
mkl-include
mkl
```

```python
!pip install -r requirements.txt
```

## 0. Baseline Performance

```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
```

Okay, let's test the performance (latency) with a sequence length of 128.

```python
context = "Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value."
question = "As what is Philipp working?"

payload = {"inputs": {"question": question, "context": context}}
```

```python
from time import perf_counter
import numpy as np

def measure_latency(pipe, payload):
    latencies = []
    # warm up
    for _ in range(10):
        _ = pipe(question=payload["inputs"]["question"], context=payload["inputs"]["context"])
    # Timed run
    for _ in range(50):
        start_time = perf_counter()
        _ = pipe(question=payload["inputs"]["question"], context=payload["inputs"]["context"])
        latency = perf_counter() - start_time
        latencies.append(latency)
    # Compute run statistics
    time_avg_ms = 1000 * np.mean(latencies)
    time_std_ms = 1000 * np.std(latencies)
    return f"Average latency (ms) - {time_avg_ms:.2f} +\- {time_std_ms:.2f}"

print(f"Vanilla model {measure_latency(qa, payload)}")
# Vanilla model Average latency (ms) - 64.15 +\- 2.44
```
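For intuition, an average latency translates directly into the single-stream throughput one synchronous client could drive; using the vanilla number measured above:

```python
# average latency reported for the vanilla model (milliseconds)
vanilla_latency_ms = 64.15

# requests per second a single synchronous client could achieve
throughput_rps = 1000 / vanilla_latency_ms
print(f"~{throughput_rps:.1f} requests/second")  # ~15.6 requests/second
```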


## 1. Convert model to ONNX

```python
from optimum.onnxruntime import ORTModelForQuestionAnswering
from transformers import AutoTokenizer
from pathlib import Path


model_id = "deepset/roberta-base-squad2"
onnx_path = Path(".")

# load vanilla transformers and convert to onnx
model = ORTModelForQuestionAnswering.from_pretrained(model_id, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# save onnx checkpoint and tokenizer
model.save_pretrained(onnx_path)
tokenizer.save_pretrained(onnx_path)
```


## 2. Optimize & quantize model with Optimum

```python
from optimum.onnxruntime import ORTOptimizer, ORTQuantizer
from optimum.onnxruntime.configuration import OptimizationConfig, AutoQuantizationConfig

# Create the optimizer
optimizer = ORTOptimizer.from_pretrained(model)

# Define the optimization strategy by creating the appropriate configuration
optimization_config = OptimizationConfig(optimization_level=99)  # enable all optimizations

# Optimize the model
optimizer.optimize(save_dir=onnx_path, optimization_config=optimization_config)
```

```python
# create ORTQuantizer and define quantization configuration
dynamic_quantizer = ORTQuantizer.from_pretrained(onnx_path, file_name="model_optimized.onnx")
dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)

# apply the quantization configuration to the model
model_quantized_path = dynamic_quantizer.quantize(
    save_dir=onnx_path,
    quantization_config=dqconfig,
)
```
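Dynamic quantization stores the weights as int8 and dequantizes on the fly at inference time. As a rough, simplified sketch of the idea (symmetric per-tensor weight quantization in plain numpy, not Optimum's actual implementation):

```python
import numpy as np

def quantize_weights_int8(w):
    """Symmetric per-tensor int8 quantization: w ≈ scale * w_int8."""
    scale = np.abs(w).max() / 127.0
    w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return w_int8, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((768, 768)).astype(np.float32)
w_int8, scale = quantize_weights_int8(w)

# dequantize and check the reconstruction error stays within one int8 step
w_hat = w_int8.astype(np.float32) * scale
max_err = np.abs(w - w_hat).max()
print(f"max absolute error: {max_err:.5f} (one int8 step = {scale:.5f})")
```

Storing each weight in one byte instead of four is also where the roughly 4x weight-size reduction of int8 quantization comes from.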

## 3. Create Custom Handler for Inference Endpoints

```python
%%writefile handler.py
from typing import Dict, Any
from optimum.onnxruntime import ORTModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline


class EndpointHandler():
    def __init__(self, path=""):
        # load the optimized model
        self.model = ORTModelForQuestionAnswering.from_pretrained(path, file_name="model_optimized_quantized.onnx")
        self.tokenizer = AutoTokenizer.from_pretrained(path)
        # create pipeline
        self.pipeline = pipeline("question-answering", model=self.model, tokenizer=self.tokenizer)

    def __call__(self, data: Any) -> Dict[str, Any]:
        """
        Args:
            data (:obj:`dict`):
                includes the input data and the parameters for the inference.
        Return:
            A :obj:`dict` containing the answer, its score, and the start/end character indices of the answer span.
        """
        inputs = data.get("inputs", data)
        # run the model
        prediction = self.pipeline(**inputs)
        # return prediction
        return prediction
```
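Note the `inputs = data.get("inputs", data)` line: the handler accepts both the wrapped `{"inputs": {...}}` payload and a bare `{"question": ..., "context": ...}` dict. The fallback behaves like this:

```python
def unwrap(data):
    # same fallback the handler uses before calling the pipeline
    return data.get("inputs", data)

wrapped = {"inputs": {"question": "As what is Philipp working?", "context": "..."}}
bare = {"question": "As what is Philipp working?", "context": "..."}

print(unwrap(wrapped) == unwrap(bare))  # True
```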

## 4. Test Custom Handler Locally

```python
from handler import EndpointHandler

# init handler
my_handler = EndpointHandler(path=".")

# prepare sample payload
context = "Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value."
question = "As what is Philipp working?"

payload = {"inputs": {"question": question, "context": context}}

# test the handler
my_handler(payload)
```

```python
from time import perf_counter
import numpy as np

def measure_latency(handler, payload):
    latencies = []
    # warm up
    for _ in range(10):
        _ = handler(payload)
    # Timed run
    for _ in range(50):
        start_time = perf_counter()
        _ = handler(payload)
        latency = perf_counter() - start_time
        latencies.append(latency)
    # Compute run statistics
    time_avg_ms = 1000 * np.mean(latencies)
    time_std_ms = 1000 * np.std(latencies)
    return f"Average latency (ms) - {time_avg_ms:.2f} +\- {time_std_ms:.2f}"

print(f"Optimized & Quantized model {measure_latency(my_handler, payload)}")
# Optimized & Quantized model Average latency (ms) - 29.90 +\- 0.53
```

`Optimized & Quantized model Average latency (ms) - 29.90 +\- 0.53`
`Vanilla model Average latency (ms) - 64.15 +\- 2.44`
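Putting the two runs side by side, together with the ONNX file sizes stored in this repository, gives roughly a 2.15x latency speedup and about a 39% smaller model file:

```python
# latencies reported above (milliseconds)
vanilla_ms = 64.15
optimized_ms = 29.90
speedup = vanilla_ms / optimized_ms
print(f"speedup: {speedup:.2f}x")  # speedup: 2.15x

# ONNX file sizes from the repository (bytes)
model_onnx_bytes = 496_337_664
model_quantized_bytes = 305_175_132
reduction = 1 - model_quantized_bytes / model_onnx_bytes
print(f"size reduction: {reduction:.0%}")  # size reduction: 39%
```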

## 5. Push to repository and create Inference Endpoint

```python
# add all our new files
!git add *
# commit our files
!git commit -m "add custom handler"
# push the files to the hub
!git push
```

config.json ADDED
@@ -0,0 +1,28 @@
{
  "_name_or_path": "deepset/roberta-base-squad2",
  "architectures": [
    "RobertaForQuestionAnswering"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "language": "english",
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "name": "Roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.21.3",
  "type_vocab_size": 1,
  "use_cache": false,
  "vocab_size": 50265
}
handler.py ADDED
@@ -0,0 +1,26 @@
from typing import Dict, Any
from optimum.onnxruntime import ORTModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline


class EndpointHandler():
    def __init__(self, path=""):
        # load the optimized model
        self.model = ORTModelForQuestionAnswering.from_pretrained(path, file_name="model_optimized_quantized.onnx")
        self.tokenizer = AutoTokenizer.from_pretrained(path)
        # create pipeline
        self.pipeline = pipeline("question-answering", model=self.model, tokenizer=self.tokenizer)

    def __call__(self, data: Any) -> Dict[str, Any]:
        """
        Args:
            data (:obj:`dict`):
                includes the input data and the parameters for the inference.
        Return:
            A :obj:`dict` containing the answer, its score, and the start/end character indices of the answer span.
        """
        inputs = data.get("inputs", data)
        # run the model
        prediction = self.pipeline(**inputs)
        # return prediction
        return prediction
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:570afefbc8642150310e46c10a252dd091c8f44449e8a3a65a425f77991dc2ab
size 496337664
model_optimized.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11c0577c4bb3afdb2a88e21807d5722511b6aa678d6d8275a7ba73c5cd8f88b1
size 496254364
model_optimized_quantized.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a27adda924cc0cd34fde41f606da51673ebefb7132a5518e41f1196ebc362f1
size 305175132
optimize_model.ipynb ADDED
@@ -0,0 +1,480 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "# Convert & Optimize model with Optimum \n",
8
+ "\n",
9
+ "\n",
10
+ "Steps:\n",
11
+ "1. Convert model to ONNX\n",
12
+ "2. Optimize & quantize model with Optimum\n",
13
+ "3. Create Custom Handler for Inference Endpoints\n",
14
+ "4. Test Custom Handler Locally\n",
15
+ "5. Push to repository and create Inference Endpoint\n",
16
+ "\n",
17
+ "Helpful links:\n",
18
+ "* [Accelerate Transformers with Hugging Face Optimum](https://huggingface.co/blog/optimum-inference)\n",
19
+ "* [Optimizing Transformers for GPUs with Optimum](https://www.philschmid.de/optimizing-transformers-with-optimum-gpu)\n",
20
+ "* [Optimum Documentation](https://huggingface.co/docs/optimum/onnxruntime/modeling_ort)\n",
21
+ "* [Create Custom Handler Endpoints](https://link-to-docs)"
22
+ ]
23
+ },
24
+ {
25
+ "cell_type": "markdown",
26
+ "metadata": {},
27
+ "source": [
28
+ "## Setup & Installation"
29
+ ]
30
+ },
31
+ {
32
+ "cell_type": "code",
33
+ "execution_count": 1,
34
+ "metadata": {},
35
+ "outputs": [
36
+ {
37
+ "name": "stdout",
38
+ "output_type": "stream",
39
+ "text": [
40
+ "Writing requirements.txt\n"
41
+ ]
42
+ }
43
+ ],
44
+ "source": [
45
+ "%%writefile requirements.txt\n",
46
+ "optimum[onnxruntime]==1.4.0\n",
47
+ "mkl-include\n",
48
+ "mkl"
49
+ ]
50
+ },
51
+ {
52
+ "cell_type": "code",
53
+ "execution_count": null,
54
+ "metadata": {},
55
+ "outputs": [],
56
+ "source": [
57
+ "!pip install -r requirements.txt"
58
+ ]
59
+ },
60
+ {
61
+ "cell_type": "markdown",
62
+ "metadata": {},
63
+ "source": [
64
+ "## 0. Base line Performance\n"
65
+ ]
66
+ },
67
+ {
68
+ "cell_type": "code",
69
+ "execution_count": null,
70
+ "metadata": {},
71
+ "outputs": [],
72
+ "source": [
73
+ "from transformers import pipeline\n",
74
+ "\n",
75
+ "qa = pipeline(\"question-answering\",model=\"deepset/roberta-base-squad2\")"
76
+ ]
77
+ },
78
+ {
79
+ "cell_type": "markdown",
80
+ "metadata": {},
81
+ "source": [
82
+ "Okay, let's test the performance (latency) with sequence length of 128."
83
+ ]
84
+ },
85
+ {
86
+ "cell_type": "code",
87
+ "execution_count": 3,
88
+ "metadata": {},
89
+ "outputs": [
90
+ {
91
+ "data": {
92
+ "text/plain": [
93
+ "'{\"inputs\": {\"question\": \"As what is Philipp working?\", \"context\": \"Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value.\"}}'"
94
+ ]
95
+ },
96
+ "execution_count": 3,
97
+ "metadata": {},
98
+ "output_type": "execute_result"
99
+ }
100
+ ],
101
+ "source": [
102
+ "context=\"Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value.\" \n",
103
+ "question=\"As what is Philipp working?\" \n",
104
+ "\n",
105
+ "payload = {\"inputs\": {\"question\": question, \"context\": context}}"
106
+ ]
107
+ },
108
+ {
109
+ "cell_type": "code",
110
+ "execution_count": 9,
111
+ "metadata": {},
112
+ "outputs": [
113
+ {
114
+ "name": "stdout",
115
+ "output_type": "stream",
116
+ "text": [
117
+ "Vanilla model Average latency (ms) - 64.15 +\\- 2.44\n"
118
+ ]
119
+ }
120
+ ],
121
+ "source": [
122
+ "from time import perf_counter\n",
123
+ "import numpy as np \n",
124
+ "\n",
125
+ "def measure_latency(pipe,payload):\n",
126
+ " latencies = []\n",
127
+ " # warm up\n",
128
+ " for _ in range(10):\n",
129
+ " _ = pipe(question=payload[\"inputs\"][\"question\"], context=payload[\"inputs\"][\"context\"])\n",
130
+ " # Timed run\n",
131
+ " for _ in range(50):\n",
132
+ " start_time = perf_counter()\n",
133
+ " _ = pipe(question=payload[\"inputs\"][\"question\"], context=payload[\"inputs\"][\"context\"])\n",
134
+ " latency = perf_counter() - start_time\n",
135
+ " latencies.append(latency)\n",
136
+ " # Compute run statistics\n",
137
+ " time_avg_ms = 1000 * np.mean(latencies)\n",
138
+ " time_std_ms = 1000 * np.std(latencies)\n",
139
+ " return f\"Average latency (ms) - {time_avg_ms:.2f} +\\- {time_std_ms:.2f}\"\n",
140
+ "\n",
141
+ "print(f\"Vanilla model {measure_latency(qa,payload)}\")"
142
+ ]
143
+ },
144
+ {
145
+ "cell_type": "markdown",
146
+ "metadata": {},
147
+ "source": [
148
+ "## 1. Convert model to ONNX"
149
+ ]
150
+ },
151
+ {
152
+ "cell_type": "code",
153
+ "execution_count": 10,
154
+ "metadata": {},
155
+ "outputs": [
156
+ {
157
+ "data": {
158
+ "application/vnd.jupyter.widget-view+json": {
159
+ "model_id": "df00c03d67b546bf8a3d1a327b9380f5",
160
+ "version_major": 2,
161
+ "version_minor": 0
162
+ },
163
+ "text/plain": [
164
+ "Downloading: 0%| | 0.00/571 [00:00<?, ?B/s]"
165
+ ]
166
+ },
167
+ "metadata": {},
168
+ "output_type": "display_data"
169
+ },
170
+ {
171
+ "data": {
172
+ "text/plain": [
173
+ "('./tokenizer_config.json',\n",
174
+ " './special_tokens_map.json',\n",
175
+ " './vocab.json',\n",
176
+ " './merges.txt',\n",
177
+ " './added_tokens.json',\n",
178
+ " './tokenizer.json')"
179
+ ]
180
+ },
181
+ "execution_count": 10,
182
+ "metadata": {},
183
+ "output_type": "execute_result"
184
+ }
185
+ ],
186
+ "source": [
187
+ "from optimum.onnxruntime import ORTModelForQuestionAnswering\n",
188
+ "from transformers import AutoTokenizer\n",
189
+ "from pathlib import Path\n",
190
+ "\n",
191
+ "\n",
192
+ "model_id=\"deepset/roberta-base-squad2\"\n",
193
+ "onnx_path = Path(\".\")\n",
194
+ "\n",
195
+ "# load vanilla transformers and convert to onnx\n",
196
+ "model = ORTModelForQuestionAnswering.from_pretrained(model_id, from_transformers=True)\n",
197
+ "tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
198
+ "\n",
199
+ "# save onnx checkpoint and tokenizer\n",
200
+ "model.save_pretrained(onnx_path)\n",
201
+ "tokenizer.save_pretrained(onnx_path)"
202
+ ]
203
+ },
204
+ {
205
+ "cell_type": "markdown",
206
+ "metadata": {},
207
+ "source": [
208
+ "## 2. Optimize & quantize model with Optimum"
209
+ ]
210
+ },
211
+ {
212
+ "cell_type": "code",
213
+ "execution_count": 11,
214
+ "metadata": {},
215
+ "outputs": [
216
+ {
217
+ "name": "stderr",
218
+ "output_type": "stream",
219
+ "text": [
220
+ "2022-09-12 18:47:03.240390005 [W:onnxruntime:, inference_session.cc:1488 Initialize] Serializing optimized model with Graph Optimization level greater than ORT_ENABLE_EXTENDED and the NchwcTransformer enabled. The generated model may contain hardware specific optimizations, and should only be used in the same environment the model was optimized in.\n"
221
+ ]
222
+ },
223
+ {
224
+ "data": {
225
+ "text/plain": [
226
+ "PosixPath('.')"
227
+ ]
228
+ },
229
+ "execution_count": 11,
230
+ "metadata": {},
231
+ "output_type": "execute_result"
232
+ }
233
+ ],
234
+ "source": [
235
+ "from optimum.onnxruntime import ORTOptimizer, ORTQuantizer\n",
236
+ "from optimum.onnxruntime.configuration import OptimizationConfig, AutoQuantizationConfig\n",
237
+ "\n",
238
+ "# Create the optimizer\n",
239
+ "optimizer = ORTOptimizer.from_pretrained(model)\n",
240
+ "\n",
241
+ "# Define the optimization strategy by creating the appropriate configuration\n",
242
+ "optimization_config = OptimizationConfig(optimization_level=99) # enable all optimizations\n",
243
+ "\n",
244
+ "# Optimize the model\n",
245
+ "optimizer.optimize(save_dir=onnx_path, optimization_config=optimization_config)"
246
+ ]
247
+ },
248
+ {
249
+ "cell_type": "code",
250
+ "execution_count": 12,
251
+ "metadata": {},
252
+ "outputs": [],
253
+ "source": [
254
+ "# create ORTQuantizer and define quantization configuration\n",
255
+ "dynamic_quantizer = ORTQuantizer.from_pretrained(onnx_path, file_name=\"model_optimized.onnx\")\n",
256
+ "dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)\n",
257
+ "\n",
258
+ "# apply the quantization configuration to the model\n",
259
+ "model_quantized_path = dynamic_quantizer.quantize(\n",
260
+ " save_dir=onnx_path,\n",
261
+ " quantization_config=dqconfig,\n",
262
+ ")\n"
263
+ ]
264
+ },
265
+ {
266
+ "cell_type": "markdown",
267
+ "metadata": {},
268
+ "source": [
269
+ "## 3. Create Custom Handler for Inference Endpoints\n"
270
+ ]
271
+ },
272
+ {
273
+ "cell_type": "code",
274
+ "execution_count": 1,
275
+ "metadata": {},
276
+ "outputs": [
277
+ {
278
+ "name": "stdout",
279
+ "output_type": "stream",
280
+ "text": [
281
+ "Overwriting handler.py\n"
282
+ ]
283
+ }
284
+ ],
285
+ "source": [
286
+ "%%writefile handler.py\n",
287
+ "from typing import Dict, List, Any\n",
288
+ "from optimum.onnxruntime import ORTModelForQuestionAnswering\n",
289
+ "from transformers import AutoTokenizer, pipeline\n",
290
+ "\n",
291
+ "\n",
292
+ "class EndpointHandler():\n",
293
+ " def __init__(self, path=\"\"):\n",
294
+ " # load the optimized model\n",
295
+ " self.model = ORTModelForQuestionAnswering.from_pretrained(path, file_name=\"model_optimized_quantized.onnx\")\n",
296
+ " self.tokenizer = AutoTokenizer.from_pretrained(path)\n",
297
+ " # create pipeline\n",
298
+ " self.pipeline = pipeline(\"question-answering\", model=self.model, tokenizer=self.tokenizer)\n",
299
+ "\n",
300
+ " def __call__(self, data: Any) -> List[List[Dict[str, float]]]:\n",
301
+ " \"\"\"\n",
302
+ " Args:\n",
303
+ " data (:obj:):\n",
304
+ " includes the input data and the parameters for the inference.\n",
305
+ " Return:\n",
306
+ " A :obj:`list`:. The list contains the answer and scores of the inference inputs\n",
307
+ " \"\"\"\n",
308
+ " inputs = data.get(\"inputs\", data)\n",
309
+ " # run the model\n",
310
+ " prediction = self.pipeline(**inputs)\n",
311
+ " # return prediction\n",
312
+ " return prediction"
313
+ ]
314
+ },
315
+ {
316
+ "cell_type": "markdown",
317
+ "metadata": {},
318
+ "source": [
319
+ "## 4. Test Custom Handler Locally\n"
320
+ ]
321
+ },
322
+ {
323
+ "cell_type": "code",
324
+ "execution_count": 2,
325
+ "metadata": {},
326
+ "outputs": [
327
+ {
328
+ "data": {
329
+ "text/plain": [
330
+ "{'score': 0.4749588668346405,\n",
331
+ " 'start': 88,\n",
332
+ " 'end': 102,\n",
333
+ " 'answer': 'Technical Lead'}"
334
+ ]
335
+ },
336
+ "execution_count": 2,
337
+ "metadata": {},
338
+ "output_type": "execute_result"
339
+ }
340
+ ],
341
+ "source": [
342
+ "from handler import EndpointHandler\n",
343
+ "\n",
344
+ "# init handler\n",
345
+ "my_handler = EndpointHandler(path=\".\")\n",
346
+ "\n",
347
+ "# prepare sample payload\n",
348
+ "context=\"Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value.\" \n",
349
+ "question=\"As what is Philipp working?\" \n",
350
+ "\n",
351
+ "payload = {\"inputs\": {\"question\": question, \"context\": context}}\n",
352
+ "\n",
353
+ "# test the handler\n",
354
+ "my_handler(payload)"
355
+ ]
356
+ },
357
+ {
358
+ "cell_type": "code",
359
+ "execution_count": 5,
360
+ "metadata": {},
361
+ "outputs": [
362
+ {
363
+ "name": "stdout",
364
+ "output_type": "stream",
365
+ "text": [
366
+ "Optimized & Quantized model Average latency (ms) - 29.90 +\\- 0.53\n"
367
+ ]
368
+ }
369
+ ],
370
+ "source": [
371
+ "from time import perf_counter\n",
372
+ "import numpy as np \n",
373
+ "\n",
374
+ "def measure_latency(handler,payload):\n",
375
+ " latencies = []\n",
376
+ " # warm up\n",
377
+ " for _ in range(10):\n",
378
+ " _ = handler(payload)\n",
379
+ " # Timed run\n",
380
+ " for _ in range(50):\n",
381
+ " start_time = perf_counter()\n",
382
+ " _ = handler(payload)\n",
383
+ " latency = perf_counter() - start_time\n",
384
+ " latencies.append(latency)\n",
385
+ " # Compute run statistics\n",
386
+ " time_avg_ms = 1000 * np.mean(latencies)\n",
387
+ " time_std_ms = 1000 * np.std(latencies)\n",
388
+ " return f\"Average latency (ms) - {time_avg_ms:.2f} +\\- {time_std_ms:.2f}\"\n",
+ "\n",
+ "print(f\"Optimized & Quantized model {measure_latency(my_handler,payload)}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "`Vanilla model Average latency (ms) - 64.15 +\\- 2.44`"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5. Push to repository and create Inference Endpoint\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[main a854397] add custom handler\n",
+ " 14 files changed, 151227 insertions(+)\n",
+ " create mode 100644 README.md\n",
+ " create mode 100644 config.json\n",
+ " create mode 100644 handler.py\n",
+ " create mode 100644 merges.txt\n",
+ " create mode 100644 model.onnx\n",
+ " create mode 100644 model_optimized.onnx\n",
+ " create mode 100644 model_optimized_quantized.onnx\n",
+ " create mode 100644 optimize_model.ipynb\n",
+ " create mode 100644 ort_config.json\n",
+ " create mode 100644 requirements.txt\n",
+ " create mode 100644 special_tokens_map.json\n",
+ " create mode 100644 tokenizer.json\n",
+ " create mode 100644 tokenizer_config.json\n",
+ " create mode 100644 vocab.json\n",
+ "Username for 'https://huggingface.co': ^C\n"
+ ]
+ }
+ ],
+ "source": [
+ "# add all our new files\n",
+ "!git add * \n",
+ "# commit our files\n",
+ "!git commit -m \"add custom handler\"\n",
+ "# push the files to the hub\n",
+ "!git push"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3.9.12 ('az': conda)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.12"
+ },
+ "orig_nbformat": 4,
+ "vscode": {
+ "interpreter": {
+ "hash": "bddb99ecda5b40a820d97bf37f3ff3a89fb9dbcf726ae84d28624ac628a665b4"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+ }
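
The notebook cell above prints the result of a `measure_latency(my_handler, payload)` helper whose definition is truncated in this diff; only its return line is visible. A plausible stdlib-only sketch of such a helper, consistent with that return string (the `warmup` and `runs` parameter names are assumptions, not from the commit):

```python
import time
import statistics

def measure_latency(handler, payload, warmup=10, runs=100):
    # Hypothetical reconstruction of the notebook's measure_latency helper:
    # warm up first so one-time setup cost is excluded, then time repeated
    # calls and report mean +/- standard deviation in milliseconds.
    for _ in range(warmup):
        handler(payload)
    latencies_ms = []
    for _ in range(runs):
        start = time.perf_counter()
        handler(payload)
        latencies_ms.append((time.perf_counter() - start) * 1000)
    time_avg_ms = statistics.mean(latencies_ms)
    time_std_ms = statistics.stdev(latencies_ms)
    return f"Average latency (ms) - {time_avg_ms:.2f} +\\- {time_std_ms:.2f}"
```

Comparing this against the `Vanilla model Average latency (ms) - 64.15 +\- 2.44` baseline noted below the cell is how the notebook quantifies the speedup from optimization and quantization.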
ort_config.json ADDED
@@ -0,0 +1,29 @@
+ {
+ "opset": null,
+ "optimization": {},
+ "optimum_version": "1.4.0",
+ "quantization": {
+ "activations_dtype": "QUInt8",
+ "activations_symmetric": false,
+ "format": "QOperator",
+ "is_static": false,
+ "mode": "IntegerOps",
+ "nodes_to_exclude": [],
+ "nodes_to_quantize": [],
+ "operators_to_quantize": [
+ "MatMul",
+ "Add"
+ ],
+ "per_channel": false,
+ "qdq_add_pair_to_weight": false,
+ "qdq_dedicated_pair": false,
+ "qdq_op_type_per_channel_support_to_axis": {
+ "MatMul": 1
+ },
+ "reduce_range": false,
+ "weights_dtype": "QInt8",
+ "weights_symmetric": true
+ },
+ "transformers_version": "4.21.3",
+ "use_external_data_format": false
+ }
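
The `ort_config.json` above records the ONNX Runtime quantization applied to the model: dynamic quantization (`"is_static": false`, so no calibration dataset is needed) with symmetric int8 weights and asymmetric uint8 activations, restricted to `MatMul` and `Add` nodes. A small sanity-check sketch over a subset of those settings (the subset is copied from the file above; nothing else is assumed):

```python
import json

# Subset of the quantization block from ort_config.json shown above.
quantization = json.loads("""
{
  "activations_dtype": "QUInt8",
  "activations_symmetric": false,
  "format": "QOperator",
  "is_static": false,
  "mode": "IntegerOps",
  "operators_to_quantize": ["MatMul", "Add"],
  "per_channel": false,
  "weights_dtype": "QInt8",
  "weights_symmetric": true
}
""")

# Dynamic quantization computes activation scales at inference time,
# which is why is_static is false and no calibration fields appear.
assert quantization["is_static"] is False
print(quantization["weights_dtype"], "weights,",
      quantization["activations_dtype"], "activations")
```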
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ optimum[onnxruntime]==1.4.0
+ mkl-include
+ mkl
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "cls_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,67 @@
+ {
+ "add_prefix_space": false,
+ "bos_token": {
+ "__type": "AddedToken",
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "cls_token": {
+ "__type": "AddedToken",
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "do_lower_case": false,
+ "eos_token": {
+ "__type": "AddedToken",
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "errors": "replace",
+ "full_tokenizer_file": null,
+ "mask_token": {
+ "__type": "AddedToken",
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "model_max_length": 512,
+ "name_or_path": "deepset/roberta-base-squad2",
+ "pad_token": {
+ "__type": "AddedToken",
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "__type": "AddedToken",
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "special_tokens_map_file": "/home/ubuntu/.cache/huggingface/transformers/c9d2c178fac8d40234baa1833a3b1903d393729bf93ea34da247c07db24900d0.cb2244924ab24d706b02fd7fcedaea4531566537687a539ebb94db511fd122a0",
+ "tokenizer_class": "RobertaTokenizer",
+ "trim_offsets": true,
+ "unk_token": {
+ "__type": "AddedToken",
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff