Commit b7534c5 by Arjun24420 (parent: b654f44) — Update README.md

Files changed: README.md (+38 −1)
The original labels include 'true', 'mostly-true', 'half-true', 'barely-true', 'false', and 'pants-fire'.
In this custom mapping, statements labeled as 'true', 'mostly-true', and 'half-true' are all categorized as 'true', while 'barely-true', 'false', and 'pants-fire' are grouped under the 'false' category.

This mapping simplifies the classification task into a binary problem, aiming to distinguish between truthful and non-truthful statements.

Bias: The model may inherit biases present in the training data, and it's important to be aware of potential biases in the predictions.
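The six-to-two label collapse described above can be sketched as a plain dictionary (the `to_binary` helper below is illustrative only, not part of the model's API):

```python
# Collapse the six original labels into the binary scheme
BINARY_LABEL = {
    'true': 'true',
    'mostly-true': 'true',
    'half-true': 'true',
    'barely-true': 'false',
    'false': 'false',
    'pants-fire': 'false',
}


def to_binary(label):
    # Map an original fine-grained label to its binary category
    return BINARY_LABEL[label]
```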
## Code Implementation

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the pre-trained model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    "Arjun24420/BERT-FakeNews-BinaryClassification")
model = AutoModelForSequenceClassification.from_pretrained(
    "Arjun24420/BERT-FakeNews-BinaryClassification")

# Define class labels mapping
class_mapping = {
    0: 'reliable',
    1: 'unreliable',
}


def predict(text):
    # Tokenize the input text
    inputs = tokenizer(text, padding=True, truncation=True,
                       max_length=512, return_tensors="pt")

    # Get model output (logits) without tracking gradients
    with torch.no_grad():
        outputs = model(**inputs)

    # Convert logits to probabilities for each class
    probs = outputs.logits.softmax(dim=1)
    class_probabilities = {class_mapping[i]: probs[0, i].item()
                           for i in range(probs.shape[1])}

    return class_probabilities
```
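As a self-contained illustration of the final step, independent of the model itself, here is how a row of logits becomes the class-probability dictionary; the logit values below are made up for demonstration:

```python
import math

# Same class-label mapping as used by predict()
class_mapping = {0: 'reliable', 1: 'unreliable'}


def logits_to_probs(logits):
    # Numerically stable softmax over a single row of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return {class_mapping[i]: e / total for i, e in enumerate(exps)}


probs = logits_to_probs([2.0, 0.0])
```

The probabilities always sum to 1, and the higher logit yields the higher probability, matching what `outputs.logits.softmax(dim=1)` computes in the snippet above.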