---
datasets:
- upb-nlp/same_topic_articles
language:
- ro
- en
base_model:
- FacebookAI/xlm-roberta-large
---
## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

MODEL_PATH = "upb-nlp/xlm_roberta_large_article_same_topic_classification"

# Run on GPU when available, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = XLMRobertaForSequenceClassification.from_pretrained(MODEL_PATH, num_labels=2).to(device)
model.eval()

# Each article is passed as a single string: "Title. Body."
t1 = "Article title. Article body."
t2 = "Article title. Article body."

# Encode the two articles as a sentence pair.
inputs = tokenizer(
    t1,
    t2,
    return_tensors="pt",
    truncation=True,
    padding="max_length",
    max_length=512,
).to(device)

# Generate prediction
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
    predicted_class = torch.argmax(logits, dim=1).item()

print(predicted_class)
```
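For scoring many article pairs at once, the classifier can also be applied in batches. The sketch below is a minimal batched-inference example; the example article texts are made up, and the label mapping (1 = same topic, 0 = different topic) is an assumption, since it is not documented in this card.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

MODEL_PATH = "upb-nlp/xlm_roberta_large_article_same_topic_classification"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = XLMRobertaForSequenceClassification.from_pretrained(MODEL_PATH, num_labels=2).to(device)
model.eval()

# Hypothetical article pairs; each article is a "Title. Body." string as above.
pairs = [
    ("Election results announced. The ruling party secured a majority...",
     "Vote count finalized. Officials confirmed the outcome..."),
    ("New stadium opens. The city inaugurated its sports arena...",
     "Central bank raises rates. Inflation pressures continue..."),
]

# Tokenize all pairs in one batch (dynamic padding to the longest pair).
first = [a for a, _ in pairs]
second = [b for _, b in pairs]
inputs = tokenizer(
    first,
    second,
    return_tensors="pt",
    truncation=True,
    padding=True,
    max_length=512,
).to(device)

with torch.no_grad():
    logits = model(**inputs).logits
    probs = F.softmax(logits, dim=1)    # per-pair class probabilities
    preds = torch.argmax(probs, dim=1)  # predicted class (0 or 1) per pair

# Assumed mapping: 1 = same topic, 0 = different topic.
for pred, p in zip(preds.tolist(), probs.tolist()):
    print(f"predicted_class={pred}  p(same_topic)={p[1]:.3f}")
```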