ByteMeHarder-404 committed on
Commit 61e8c2b · verified · 1 parent: 9c061fb

Update README.md

Files changed (1)
  1. README.md +63 -48
README.md CHANGED
@@ -1,62 +1,77 @@
  ---
- library_name: transformers
- license: apache-2.0
- base_model: bert-base-uncased
  tags:
- - generated_from_trainer
- model-index:
- - name: results
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # results

- This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.0404
- - Mse: 0.0404
- - Mae: 0.1499
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 16
- - eval_batch_size: 16
- - seed: 42
- - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - num_epochs: 3

- ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Mse | Mae |
- |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
- | 0.0078 | 1.0 | 1311 | 0.0420 | 0.0420 | 0.1557 |
- | 0.0154 | 2.0 | 2622 | 0.0368 | 0.0368 | 0.1420 |
- | 0.0098 | 3.0 | 3933 | 0.0378 | 0.0378 | 0.1441 |

- ### Framework versions

- - Transformers 4.56.1
- - Pytorch 2.8.0+cu126
- - Datasets 4.0.0
- - Tokenizers 0.22.0

  ---
+ language: en
+ datasets:
+ - ibm-research/argument_quality_ranking_30k
+ metrics:
+ - mean-squared-error
+ - mean-absolute-error
+ model-name: bert-base-uncased-finetuned-arg-quality
  tags:
+ - regression
+ - argument-quality
+ - bert
+ - fine-tuned
  ---

+ # BERT Base (uncased) fine-tuned on Argument Quality Ranking

+ This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the **IBM Argument Quality Ranking** dataset. It predicts the **quality of arguments** as a score between 0 and 1. You can also convert the score to a **1–5 rating**.
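The score-to-rating conversion is a simple linear map, rating = round(1 + 4 × score), matching the arithmetic the usage snippet in this card applies; a minimal illustration:

```python
def score_to_rating(score):
    # Map a quality score in [0, 1] to an integer rating in {1, ..., 5}
    return round(1 + 4 * score)

print(score_to_rating(0.0))  # 1
print(score_to_rating(0.5))  # 3
print(score_to_rating(1.0))  # 5
```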

+ ---

+ ## Model Details
+ - **Model type**: BERT (base, uncased)
+ - **Fine-tuned on**: IBM Argument Quality Ranking (~30k arguments)
+ - **Task**: Regression (argument quality score)
+ - **Output**: Quality score between 0 and 1 (can be rounded to an integer 1–5 rating)
+ - **Training framework**: [🤗 Transformers](https://github.com/huggingface/transformers)

+ ---

+ ## Training
+ - Epochs: 3
+ - Batch size: 16
+ - Learning rate: 2e-5
+ - Optimizer: AdamW
+ - Evaluation metrics: Mean Squared Error (MSE), Mean Absolute Error (MAE)
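With a linear scheduler and no warmup, the learning rate decays from 2e-5 at step 0 to 0 at the end of training (3 epochs × 1311 steps per epoch, per the training log above); a minimal sketch of that schedule, assuming zero warmup steps:

```python
def linear_lr(step, total_steps, base_lr=2e-5):
    # Linear decay from base_lr at step 0 to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / total_steps)

total_steps = 3 * 1311  # num_epochs * steps per epoch from the training results table
print(linear_lr(0, total_steps))            # 2e-05
print(linear_lr(total_steps, total_steps))  # 0.0
```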
 
 

+ ---

+ ## Evaluation Results
+ On the test set:

+ | Metric | Value |
+ |--------|-------|
+ | MSE    | 0.0404 |
+ | MAE    | 0.1499 |
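For reference, the two reported metrics follow their standard definitions; a minimal, dependency-free sketch:

```python
def mse(y_true, y_pred):
    # Mean squared error: average of squared differences
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    # Mean absolute error: average of absolute differences
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy example with exact binary fractions
print(mse([0.5, 1.0], [0.75, 1.0]))  # 0.03125
print(mae([0.5, 1.0], [0.75, 1.0]))  # 0.125
```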

+ ---

+ ## How to Use
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ import torch
+ import numpy as np
+
+ model_name = "ByteMeHarder-404/bert-base-uncased-finetuned-arg-quality"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name)
+
+ def predict_quality(arguments):
+     """Return integer 1-5 quality ratings for a list of argument strings."""
+     inputs = tokenizer(arguments, truncation=True, padding=True, return_tensors="pt")
+     device = next(model.parameters()).device
+     inputs = {k: v.to(device) for k, v in inputs.items()}
+     model.eval()
+     with torch.no_grad():
+         outputs = model(**inputs)
+     # Map the 0-1 regression score to a 1-5 rating and round to integers
+     scores = outputs.logits.squeeze(-1).cpu().numpy()
+     return np.round(scores * 4 + 1).astype(int)
+
+ # Example
+ args = [
+     "School uniforms reduce individuality.",
+     "World Peace is great",
+     "Homework improves student learning outcomes."
+ ]
+
+ print("Ratings:", predict_quality(args))  # integer 1-5 ratings
+ ```