permutans committed on
Commit 891acbc · verified · 1 Parent(s): b0dfbcf

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +40 -29
README.md CHANGED
@@ -26,10 +26,10 @@ model-index:
  name: Orality Regression
  metrics:
  - type: mae
- value: 0.0819
+ value: 0.0791
  name: Mean Absolute Error
  - type: r2
- value: 0.734
+ value: 0.748
  name: R² Score
  ---

@@ -48,26 +48,37 @@ Given a passage of text, the model outputs a continuous score where higher value
  | Task | Single-value regression (MSE loss) |
  | Output range | Continuous (not clamped) |
  | Max sequence length | 512 tokens |
- | Best MAE | **0.0819** |
- | R² (at best MAE) | **0.734** |
+ | Best MAE | **0.0791** |
+ | R² (at best MAE) | **0.748** |
  | Parameters | ~149M |

  ## Usage
  ```python
- from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ import os
+ os.environ["TORCH_COMPILE_DISABLE"] = "1"
+
+ import warnings
+ warnings.filterwarnings("ignore", message="Flash Attention 2 only supports")
+
  import torch
+ from transformers import AutoModel, AutoTokenizer

  model_name = "HavelockAI/bert-orality-regressor"
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForSequenceClassification.from_pretrained(model_name)
+ tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+ model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
+ model.eval()
+
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ model = model.to(device)

  text = "Tell me, O Muse, of that ingenious hero who travelled far and wide"
  inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
+ inputs = {k: v.to(device) for k, v in inputs.items()}

- with torch.no_grad():
+ with torch.no_grad(), torch.autocast(device_type=device.type, enabled=device.type == "cuda"):
  score = model(**inputs).logits.squeeze().item()

- print(f"Orality score: {score:.3f}")
+ print(f"Orality score: {max(0.0, min(1.0, score)):.3f}")
  ```

  ### Score Interpretation
@@ -107,26 +118,26 @@ An 80/20 train/test split was used (random seed 42).

  | Epoch | Loss | MAE | R² |
  |-------|------|-----|-----|
- | 1 | 0.3485 | 0.1151 | 0.485 |
- | 2 | 0.0269 | 0.1145 | 0.446 |
- | 3 | 0.0235 | 0.0962 | 0.636 |
- | 4 | 0.0162 | 0.0937 | 0.648 |
- | 5 | 0.0228 | 0.1099 | 0.566 |
- | 6 | 0.0153 | 0.0971 | 0.605 |
- | 7 | 0.0115 | 0.0883 | 0.707 |
- | 8 | 0.0112 | 0.0906 | 0.681 |
- | 9 | 0.0095 | 0.0872 | 0.713 |
- | 10 | 0.0076 | 0.0898 | 0.691 |
- | 11 | 0.0060 | 0.0840 | 0.727 |
- | 12 | 0.0054 | 0.0850 | 0.715 |
- | 13 | 0.0050 | 0.0821 | 0.738 |
- | 14 | 0.0043 | 0.0820 | 0.737 |
- | **15** | **0.0040** | **0.0819** | **0.734** |
- | 16 | 0.0041 | 0.0891 | 0.689 |
- | 17 | 0.0035 | 0.0829 | 0.727 |
- | 18 | 0.0031 | 0.0825 | 0.729 |
- | 19 | 0.0032 | 0.0831 | 0.725 |
- | 20 | 0.0033 | 0.0833 | 0.724 |
+ | 1 | 0.3496 | 0.1173 | 0.476 |
+ | 2 | 0.0286 | 0.0992 | 0.593 |
+ | 3 | 0.0215 | 0.0872 | 0.704 |
+ | 4 | 0.0144 | 0.0879 | 0.714 |
+ | 5 | 0.0169 | 0.0865 | 0.712 |
+ | 6 | 0.0117 | 0.0853 | 0.700 |
+ | 7 | 0.0096 | 0.0922 | 0.691 |
+ | 8 | 0.0094 | 0.0850 | 0.722 |
+ | 9 | 0.0086 | 0.0822 | 0.745 |
+ | 10 | 0.0064 | 0.0841 | 0.723 |
+ | 11 | 0.0054 | 0.0921 | 0.682 |
+ | 12 | 0.0050 | 0.0840 | 0.720 |
+ | 13 | 0.0044 | 0.0806 | 0.744 |
+ | 14 | 0.0037 | 0.0805 | 0.740 |
+ | **15** | **0.0034** | **0.0791** | **0.748** |
+ | 16 | 0.0033 | 0.0807 | 0.738 |
+ | 17 | 0.0031 | 0.0803 | 0.742 |
+ | 18 | 0.0026 | 0.0797 | 0.745 |
+ | 19 | 0.0027 | 0.0803 | 0.742 |
+ | 20 | 0.0029 | 0.0805 | 0.741 |

  </details>
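One behavioral change in this commit is easy to miss: the model card states the output range is "Continuous (not clamped)", and the new `print` line now clamps the raw regression output into [0, 1] only at display time. A minimal standalone sketch of that clamping (the helper name `clamp_score` is illustrative and not part of the model card or checkpoint):

```python
def clamp_score(score: float, lo: float = 0.0, hi: float = 1.0) -> float:
    """Clamp a raw regression output into [lo, hi], mirroring the
    max(0.0, min(1.0, score)) expression introduced in this commit."""
    return max(lo, min(hi, score))


# The regression head itself is unclamped, so out-of-range values can occur.
print(clamp_score(1.37))   # above the range -> 1.0
print(clamp_score(-0.05))  # below the range -> 0.0
print(clamp_score(0.482))  # in-range scores pass through unchanged
```

Note the clamp affects only the printed value; the tables above still report MAE/R² on the unclamped model output.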