To use this model:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("LsTam/MQ-classification")

# Label 0 means a bad question, label 1 a good one.
softmax = torch.nn.Softmax(dim=1)
prediction = lambda probs: [int(p[0] < p[1]) for p in probs]

# Join the question and its context with the separator token.
text = ["Your question" + " </s> " + "your context"]

inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    result = model(**inputs)
pred = prediction(softmax(result.logits).tolist())
```
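The post-processing step (softmax over the logits, then picking the higher-probability class) can be sketched on dummy logits without downloading the model. The tensor values below are made up for illustration only:

```python
import torch

# Dummy logits standing in for model(**inputs).logits:
# two examples, two classes (0 = bad question, 1 = good question).
logits = torch.tensor([[2.0, -1.0], [-0.5, 1.5]])

probs = torch.softmax(logits, dim=1)        # each row sums to 1
pred = torch.argmax(probs, dim=1).tolist()  # index of the larger probability
print(pred)  # → [0, 1]
```

Taking the argmax is equivalent to the `p[0] < p[1]` comparison above, since there are only two classes.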