pa90 committed
Commit 0894b5a · verified · Parent(s): 4c9e241

Update README.md

Files changed (1): README.md +16 −124

README.md CHANGED
@@ -1,143 +1,35 @@
  ---
  license: mit
  base_model: FacebookAI/roberta-base
  tags:
  - roberta
- - metaphor
  - text-classification
- language:
- - en
  ---

- # Metaphor Scoring Model

- RoBERTa-base fine-tuned for metaphorical novelty scoring (1-4 scale).

- ## 🚀 Quick Start

- ### Installation
- ```bash
- pip install transformers torch
- ```
-
- ### Download and Run
- ```bash
- git clone https://huggingface.co/pa90/Metaphor_Scoring_Model
- cd Metaphor_Scoring_Model
- python Interactive.py
- ```
-
- ### Usage Example
- ```
- Enter sentence: Time is money
- Score: 3/4 (confidence: 0.892)
-
- Enter sentence: Life is a journey
- Score: 4/4 (confidence: 0.945)
-
- Enter sentence: quit
- Goodbye!
- ```
-
- ## 💻 Programmatic Usage
-
- ```python
- from transformers import AutoTokenizer, AutoModelForSequenceClassification
- import torch
-
- # Load model and tokenizer
- tokenizer = AutoTokenizer.from_pretrained("pa90/Metaphor_Scoring_Model")
- model = AutoModelForSequenceClassification.from_pretrained("pa90/Metaphor_Scoring_Model")
-
- # Score a sentence
- sentence = "Time is money"
- inputs = tokenizer(
-     sentence,
-     max_length=256,
-     truncation=True,
-     padding='max_length',
-     return_tensors='pt'
- )
-
- with torch.no_grad():
-     outputs = model(**inputs)
-     predicted_class = torch.argmax(outputs.logits, dim=-1).item()
-     score = predicted_class + 1  # Convert to 1-4 scale
-
- print(f"Metaphor Novelty Score: {score}/4")
- ```
-
- ## 📊 Model Details
-
- - **Base Model**: [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base)
- - **Architecture**: RoBERTa (Robustly Optimized BERT Pretraining Approach)
- - **Parameters**: 125M
- - **Task**: 4-class classification for metaphorical novelty
- - **Input**: Single sentence (max 256 tokens)
- - **Output Scores**:
  - 1: Conventional/literal expression
  - 2: Slightly metaphorical
  - 3: Moderately metaphorical
  - 4: Highly novel metaphor

- ## 🎯 Use Cases
-
- - Literary analysis
- - Creative writing assistance
- - Figurative language detection
- - Linguistic research
- - Educational tools for teaching metaphors
-
- ## 📄 License
-
- This model is released under the **MIT License**.
-
- ### Base Model
- RoBERTa-base by Facebook AI is licensed under the MIT License.
-
- ### Permissions
- - ✅ Commercial use
- - ✅ Modification
- - ✅ Distribution
- - ✅ Private use
-
- ### Conditions
- - Attribution required (see citation below)
-
- ## 📚 Citation
-
- If you use this model in your research or application, please cite:
-
- ```bibtex
- @misc{metaphor-scoring-model-2024,
-   author = {Your Name},
-   title = {Metaphor Scoring Model: RoBERTa-based Metaphorical Novelty Classifier},
-   year = {2024},
-   publisher = {Hugging Face},
-   howpublished = {\url{https://huggingface.co/pa90/Metaphor_Scoring_Model}}
- }
- ```
-
- Please also cite the original RoBERTa paper:
-
- ```bibtex
- @article{liu2019roberta,
-   title={RoBERTa: A Robustly Optimized BERT Pretraining Approach},
-   author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
-   journal={arXiv preprint arXiv:1907.11692},
-   year={2019}
- }
- ```

- ## 🔧 Technical Details

- ### Training
- - Fine-tuned from RoBERTa-base checkpoint
- - Task: 4-class sequence classification
- - Max sequence length: 256 tokens

- ### Requirements
- ```
- transformers>=4.30.0
- torch>=2.0.0
- ```
  ---
+ language: en
  license: mit
  base_model: FacebookAI/roberta-base
  tags:
+ - sequence-classification
+ - metaphor-scoring
  - roberta
+ - nlp
+ task_categories:
  - text-classification
  ---

+ # How to Use

+ 1. Install Python.
+ 2. Open Command Prompt and install the required packages: `pip install torch transformers`
+ 3. In Command Prompt, change into the folder where you saved "Metaphor_Scoring_Model", e.g. `cd C:\Downloads\Metaphor_Scoring_Model`
+ 4. Run `python Interactive.py`

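The numbered steps above amount to the following commands (a sketch assuming a Windows Command Prompt and that the model files were saved under `C:\Downloads\Metaphor_Scoring_Model`; adjust the path to wherever you put them):

```shell
# Install dependencies (Python must already be on PATH)
pip install torch transformers

# Change into the folder containing the model files
cd C:\Downloads\Metaphor_Scoring_Model

# Launch the interactive scorer
python Interactive.py
```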
+ ## Model Details

+ - **Task**: Metaphor novelty scoring (1-4 scale)
+ - **Output Scores**:
  - 1: Conventional/literal expression
  - 2: Slightly metaphorical
  - 3: Moderately metaphorical
  - 4: Highly novel metaphor
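For use outside `Interactive.py`, a minimal programmatic sketch, mirroring the removed "Programmatic Usage" section of this README (the hub ID, the 256-token limit, and the class-index-to-score mapping are taken from it; `transformers` and `torch` are only needed when the model is actually loaded):

```python
def logits_to_score(logits_row):
    """Map a row of 4 class logits to the 1-4 novelty scale.

    The model predicts class indices 0-3; the README's scale is 1-4,
    so the score is argmax + 1.
    """
    best_class = max(range(len(logits_row)), key=lambda i: logits_row[i])
    return best_class + 1


def score_sentence(sentence, model_id="pa90/Metaphor_Scoring_Model"):
    """Download (on first use) and run the classifier on one sentence."""
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    inputs = tokenizer(
        sentence,
        max_length=256,
        truncation=True,
        padding="max_length",
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**inputs).logits[0].tolist()
    return logits_to_score(logits)


# Example: score_sentence("Time is money") returns an integer from 1 to 4.
```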
 
+ ## Attribution

+ This model is fine-tuned from [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base), which is licensed under MIT.