---
license: mit
tags:
- text-classification
- regression
- bert
- orality
- linguistics
- rhetorical-analysis
language:
- en
metrics:
- mae
- r2
base_model:
- google-bert/bert-base-uncased
pipeline_tag: text-classification
library_name: transformers
datasets:
- custom
model-index:
- name: bert-orality-regressor
  results:
  - task:
      type: text-classification
      name: Orality Regression
    metrics:
    - type: mae
      value: 0.0786
      name: Mean Absolute Error
    - type: r2
      value: 0.756
      name: R² Score
---

# Havelock Orality Regressor

A BERT-based regression model that scores text on the **oral–literate spectrum** (0–1), grounded in Walter Ong's *Orality and Literacy* (1982).

Given a passage of text, the model outputs a continuous score: higher values indicate greater orality (spoken, performative, additive discourse), lower values a more literate register (analytic, subordinative, abstract discourse).

## Model Details

| Property | Value |
|----------|-------|
| Base model | `bert-base-uncased` |
| Architecture | `BertForSequenceClassification` (`num_labels=1`) |
| Task | Single-value regression (MSE loss) |
| Output range | Continuous (not clamped) |
| Max sequence length | 512 tokens |
| Best MAE | **0.0786** |
| R² | **0.756** |
| Parameters | ~109M |

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "HavelockAI/bert-orality-regressor"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Tell me, O Muse, of that ingenious hero who travelled far and wide"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()

print(f"Orality score: {score:.3f}")
```
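
Because the output head is not clamped (see Limitations), raw scores can fall slightly outside [0, 1]. A minimal helper for pinning scores to the documented range; `clamp_score` is a name introduced here, not part of the model's API:

```python
def clamp_score(raw: float, lo: float = 0.0, hi: float = 1.0) -> float:
    """Pin a raw regression output to the documented [0, 1] orality range."""
    return max(lo, min(hi, raw))

print(clamp_score(1.07))   # 1.0
print(clamp_score(-0.02))  # 0.0
print(clamp_score(0.83))   # 0.83
```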

### Score Interpretation

| Score | Register |
|-------|----------|
| 0.8–1.0 | Highly oral — epic poetry, sermons, rap, oral storytelling |
| 0.6–0.8 | Oral-dominant — speeches, podcasts, conversational prose |
| 0.4–0.6 | Mixed — journalism, blog posts, dialogue-heavy fiction |
| 0.2–0.4 | Literate-dominant — essays, expository prose |
| 0.0–0.2 | Highly literate — academic papers, legal texts, philosophy |
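
The buckets above can be turned into a lookup. A small sketch; the function name and the choice to clamp before bucketing are decisions made here, not part of the released code:

```python
def register_label(score: float) -> str:
    """Map an orality score to the register buckets in the table above."""
    s = max(0.0, min(1.0, score))  # clamp first; raw outputs may exceed [0, 1]
    if s >= 0.8:
        return "highly oral"
    if s >= 0.6:
        return "oral-dominant"
    if s >= 0.4:
        return "mixed"
    if s >= 0.2:
        return "literate-dominant"
    return "highly literate"

print(register_label(0.91))  # highly oral
print(register_label(0.05))  # highly literate
```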

## Training

### Data

The model was trained on a curated corpus of documents annotated with orality scores using a multi-pass scoring system. Scores were originally on a 0–100 scale and normalized to 0–1 for training. The corpus draws from Project Gutenberg, textfiles.com, Reddit, and Wikipedia talk pages, representing a range of registers from highly oral to highly literate.

An 80/20 train/test split was used (random seed 42).
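
The exact split code is not published; the usual tool is `sklearn.model_selection.train_test_split(..., test_size=0.2, random_state=42)`. A dependency-free sketch of the same idea, with `split_corpus` as a hypothetical helper:

```python
import random

def split_corpus(docs, test_frac=0.2, seed=42):
    """Shuffle indices with a fixed seed, then carve off the tail as the test set."""
    rng = random.Random(seed)
    idx = list(range(len(docs)))
    rng.shuffle(idx)
    cut = int(len(docs) * (1 - test_frac))
    return [docs[i] for i in idx[:cut]], [docs[i] for i in idx[cut:]]

train, test = split_corpus([f"doc{i}" for i in range(100)])
print(len(train), len(test))  # 80 20
```

The fixed seed makes the split reproducible across runs, which is what the card's "(random seed 42)" note implies.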

### Hyperparameters

| Parameter | Value |
|-----------|-------|
| Epochs | 3 |
| Batch size | 8 |
| Learning rate | 2e-5 |
| Optimizer | AdamW |
| LR schedule | Linear warmup (10% of total steps) |
| Gradient clipping | 1.0 |
| Loss | MSE (via HF `num_labels=1`) |

### Training Metrics

| Epoch | Loss | MAE | R² |
|-------|------|-----|-----|
| 1 | 0.0382 | 0.1443 | 0.317 |
| 2 | 0.0187 | 0.0852 | 0.722 |
| 3 | 0.0128 | 0.0786 | 0.756 |

## Limitations

- **Short training**: Only 3 epochs — likely undertrained; more epochs or a hyperparameter search would probably improve R².
- **No sigmoid clamping**: The model can output values outside [0, 1]; consumers should clamp if needed.
- **Domain coverage**: The training corpus skews historical/literary; performance on modern social media, code-switched text, or non-English text is untested.
- **Document length**: Texts longer than 512 tokens are truncated, so the model sees only roughly the first 400 words, which may not be representative of longer documents.
- **Regression target subjectivity**: Orality scores involve human judgment; inter-annotator agreement bounds the ceiling on model performance.
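
For the document-length limitation, one untested workaround is to score fixed-size chunks and average. A sketch; `score_long_text` and whitespace-word chunking are choices made here, and word count only roughly approximates BERT's 512-token limit:

```python
def score_long_text(text: str, score_fn, window: int = 400) -> float:
    """Average per-chunk scores; `score_fn` stands in for the model call above."""
    words = text.split()
    chunks = [" ".join(words[i:i + window]) for i in range(0, len(words), window)]
    if not chunks:  # empty input: fall back to scoring the raw text once
        chunks = [text]
    return sum(score_fn(c) for c in chunks) / len(chunks)
```

Overlapping windows or length-weighted averaging might behave better than this plain mean; both are equally untested here.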

## Theoretical Background

The oral–literate spectrum follows Ong's framework, which characterizes oral discourse as additive, aggregative, redundant, agonistic, empathetic, and situational, while literate discourse is subordinative, analytic, abstract, distanced, and context-free. The model learns to place text along this continuum from document-level annotations informed by 72 specific rhetorical markers (36 oral, 36 literate).

## Citation

```bibtex
@misc{havelock2026regressor,
  title={Havelock Orality Regressor},
  author={Havelock AI},
  year={2026},
  url={https://huggingface.co/HavelockAI/bert-orality-regressor}
}
```

## References

- Ong, Walter J. *Orality and Literacy: The Technologizing of the Word*. Routledge, 1982.

---

*Model version: 33b6eccc · Trained: February 2026*