To use this model:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("LsTam/MQ-classification")

softmax = torch.nn.Softmax(dim=1)
# Label 0 marks a bad question, label 1 a good one:
# pick whichever of the two class probabilities is larger.
prediction = lambda probs: [int(p[0] < p[1]) for p in probs]

# Join the question and its context with the XLM-R separator token.
text = ["Your question" + " </s> " + "your context"]

inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    result = model(**inputs)
pred = prediction(softmax(result.logits).tolist())
```
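Since softmax preserves the ordering of the logits, the `prediction` lambda above is simply an argmax over the two classes. A minimal sketch of that scoring step with hand-written dummy logits (no model download needed; the logit values are illustrative, not model outputs):

```python
import torch

softmax = torch.nn.Softmax(dim=1)
prediction = lambda probs: [int(p[0] < p[1]) for p in probs]

# Dummy logits for two question/context pairs: the first row favours
# class 1 (good question), the second favours class 0 (bad question).
logits = torch.tensor([[-1.2, 2.3], [0.8, -0.5]])

pred = prediction(softmax(logits).tolist())
print(pred)  # [1, 0]
```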