---
language:
- en
base_model:
- meta-llama/Meta-Llama-3-8B
pipeline_tag: text2text-generation
---

## Janus (PoS)
(Built with Meta Llama 3)

For the version without PoS tags, visit [Janus](https://huggingface.co/ChangeIsKey/llama3-janus).

### Model Details
- **Model Name**: Janus
- **Version**: 1.0
- **Developers**: Pierluigi Cassotti, Nina Tahmasebi
- **Affiliation**: University of Gothenburg
- **License**: MIT
- **GitHub Repository**: [Historical Word Usage Generation](https://github.com/ChangeIsKey/historical-word-usage-generation)
- **Paper**: [Sense-specific Historical Word Usage Generation](https://transacl.org)
- **Contact**: pierluigi.cassotti@gu.se

### Model Description
Janus is a fine-tuned **Llama 3 8B** model designed to generate historically and semantically accurate word usages. It takes a word, its sense definition, and a year as input, and produces example sentences that reflect linguistic usage from the specified period. This model is particularly useful for **semantic change detection**, **historical NLP**, and **linguistic research**.

### Intended Use
- **Semantic Change Detection**: Investigating how word meanings evolve over time.
- **Historical Text Processing**: Enhancing the understanding and modeling of historical texts.
- **Corpus Expansion**: Generating sense-annotated corpora for linguistic studies.

### Training Data
- **Dataset**: Extracted from the **Oxford English Dictionary (OED)**
- **Size**: Over **1.2 million** sense-annotated historical usages
- **Time Span**: **1700–2020**
- **Data Format**:
```
<year><|t|><lemma><|t|><definition><|s|><historical usage sentence><|end|>
```
- **Janus (PoS) Format**:
```
<year><|t|><lemma><|t|><definition><|p|><PoS><|p|><|s|><historical usage sentence><|end|>
```

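To make the templates concrete, here is a small illustrative helper (the function `build_prompt` is our own sketch, not part of the released code) that assembles an input string in either format from its components:

```python
# Illustrative sketch, not part of the released code: build a Janus prompt
# from its components using the special tokens documented above.
def build_prompt(year, lemma, definition, pos=None):
    if pos is None:
        # Plain Janus format: <year><|t|><lemma><|t|><definition><|s|>
        return f"{year}<|t|>{lemma}<|t|>{definition}<|s|>"
    # Janus (PoS) format: <year><|t|><lemma><|t|><definition><|p|><PoS><|p|><|s|>
    return f"{year}<|t|>{lemma}<|t|>{definition}<|p|>{pos}<|p|><|s|>"

print(build_prompt(1800, "awful", "Used to emphasize something unpleasant or negative.", pos="jj"))
# -> 1800<|t|>awful<|t|>Used to emphasize something unpleasant or negative.<|p|>jj<|p|><|s|>
```

At generation time, the model continues such a prompt with the usage sentence, terminated by `<|end|>`.
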
### Training Procedure
- **Base Model**: `meta-llama/Meta-Llama-3-8B`
- **Optimization**: **QLoRA** (Quantized Low-Rank Adaptation)
- **Batch Size**: **4**
- **Learning Rate**: **2e-4**
- **Epochs**: **1**

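As a rough sketch of what such a setup looks like with `transformers` and `peft` (the 4-bit quantization settings and LoRA rank, alpha, and dropout below are illustrative assumptions; only the batch size, learning rate, and epoch count come from this card):

```python
# Hypothetical QLoRA configuration; not the released training recipe.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                        # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(                     # low-rank adapters over the frozen base
    r=16, lora_alpha=32, lora_dropout=0.05,   # illustrative values
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="janus-qlora",
    per_device_train_batch_size=4,            # from the card
    learning_rate=2e-4,                       # from the card
    num_train_epochs=1,                       # from the card
)
```
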
### Model Performance
- **Temporal Accuracy**: Root mean squared error (RMSE) of **~52.7 years** with respect to OED ground-truth dates
- **Semantic Accuracy**: In human evaluations, generated usages are rated comparably to held-out OED examples
- **Context Variability**: Low lexical repetition, preserving natural linguistic diversity

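For reference, the temporal-accuracy score is a plain RMSE between predicted and gold attestation years; a minimal sketch with made-up toy numbers:

```python
# RMSE between predicted and gold attestation years (toy data for illustration).
import math

gold = [1805, 1850, 1920]   # ground-truth years
pred = [1790, 1900, 1915]   # years estimated for generated usages
rmse = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gold)) / len(gold))
print(f"RMSE: {rmse:.1f} years")
```
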
### Usage Example
#### Generating Historical Usages
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ChangeIsKey/llama3-janus"  # substitute the Janus (PoS) repository ID when using PoS-tagged prompts
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Prompt in the Janus (PoS) format: <year><|t|><lemma><|t|><definition><|p|><PoS><|p|><|s|>
input_text = "1800<|t|>awful<|t|>Used to emphasize something unpleasant or negative; ‘such a’, ‘an absolute’.<|p|>jj<|p|><|s|>"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

# do_sample=True is required for temperature and top_p to take effect
output = model.generate(**inputs, do_sample=True, temperature=1.0, top_p=0.9, max_new_tokens=50)

# Decode only the newly generated tokens (the usage sentence after <|s|>)
generated = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```

For more examples, see the GitHub repository [Historical Word Usage Generation](https://github.com/ChangeIsKey/historical-word-usage-generation).

### Limitations & Ethical Considerations
- **Historical Bias**: The model may reflect biases present in historical texts.
- **Time Granularity**: The temporal resolution is approximate (~50 years RMSE).
- **Modern Influence**: Despite fine-tuning, the model may still generate modern phrases in older contexts.
- **Not Trained for Fairness**: The model has not been explicitly trained to be fair or unbiased. It may produce sensitive, outdated, or culturally inappropriate content.

### Citation
If you use Janus, please cite:
```
@article{Cassotti2025Janus,
  author  = {Pierluigi Cassotti and Nina Tahmasebi},
  title   = {Sense-specific Historical Word Usage Generation},
  journal = {Transactions of the Association for Computational Linguistics},
  year    = {2025}
}
```