Update README.md
README.md
@@ -39,3 +39,16 @@ from optimum.exporters.onnx import main_export
main_export('sentence-transformers/all-distilroberta-v1', "./output", cache_dir='./cache', optimize='O1')
```

Please note, this ONNX model does not contain the mean pooling layer; pooling needs to be applied in code afterwards, or the embeddings won't work.

Code like this:

```python
import torch

# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
```
See the example code from the original model in the "Usage (HuggingFace Transformers)" section.
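
For reference, here is a minimal sketch of how the exported model and the pooling function above might be wired together. It assumes the export wrote `./output/model.onnx` (optimum's default filename) and that `onnxruntime` is installed; the sentences are placeholders:

```python
import torch
import onnxruntime
from transformers import AutoTokenizer

# Tokenizer comes from the original checkpoint; the ONNX file from the main_export call above
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-distilroberta-v1')
session = onnxruntime.InferenceSession('./output/model.onnx')

sentences = ['This is an example sentence', 'Each sentence is converted']
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors='np')

# Run the ONNX model; the first output holds the per-token embeddings
outputs = session.run(None, {'input_ids': encoded['input_ids'],
                             'attention_mask': encoded['attention_mask']})

# Apply the mean pooling defined above to get one embedding per sentence
token_embeddings = torch.from_numpy(outputs[0])
attention_mask = torch.from_numpy(encoded['attention_mask'])
sentence_embeddings = mean_pooling([token_embeddings], attention_mask)
print(sentence_embeddings.shape)  # torch.Size([2, 768]) for this model
```

Note that the original model card additionally L2-normalizes the embeddings after pooling (`torch.nn.functional.normalize`), so you may want that step as well.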