Text Classification
Transformers
Safetensors
English
tiny_transformer
Michielo committed on
Commit 744d042 · verified · 1 Parent(s): d77f01e

Update README.md

Files changed (1)
  1. README.md +107 -191
README.md CHANGED
@@ -5,197 +5,113 @@ language:
  pipeline_tag: zero-shot-classification
  ---

- # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->


-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
+ # Tiny-Toxic-Detector
+
+ A tiny comment-toxicity classifier at only 2M parameters. Using only ~10 MB of RAM and offering fast inference, it is one of the best toxicity classifiers available, outperforming models more than 50 times its size.
+
+ A paper on this model will be released soon.
+
+ ### Usage
+ This model uses a custom architecture and requires some extra custom code to work. Below is a fully usable script.
+ ```python
+ import torch
+ import torch.nn as nn
+ from transformers import PreTrainedModel, PretrainedConfig, AutoTokenizer
+
+ # Define the TinyTransformer model: token embeddings plus a learned positional
+ # encoding, a small Transformer encoder stack, global average pooling, and a
+ # sigmoid classification head.
+ class TinyTransformer(nn.Module):
+     def __init__(self, vocab_size, embed_dim, num_heads, ff_dim, num_layers):
+         super().__init__()
+         self.embedding = nn.Embedding(vocab_size, embed_dim)
+         self.pos_encoding = nn.Parameter(torch.zeros(1, 512, embed_dim))
+         encoder_layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads, dim_feedforward=ff_dim, batch_first=True)
+         self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
+         self.fc = nn.Linear(embed_dim, 1)
+         self.sigmoid = nn.Sigmoid()
+
+     def forward(self, x):
+         x = self.embedding(x) + self.pos_encoding[:, :x.size(1), :]
+         x = self.transformer(x)
+         x = x.mean(dim=1)  # Global average pooling
+         x = self.fc(x)
+         return self.sigmoid(x)
+
+ class TinyTransformerConfig(PretrainedConfig):
+     model_type = "tiny_transformer"
+
+     def __init__(self, vocab_size=30522, embed_dim=64, num_heads=2, ff_dim=128, num_layers=4, max_position_embeddings=512, **kwargs):
+         super().__init__(**kwargs)
+         self.vocab_size = vocab_size
+         self.embed_dim = embed_dim
+         self.num_heads = num_heads
+         self.ff_dim = ff_dim
+         self.num_layers = num_layers
+         self.max_position_embeddings = max_position_embeddings
+
+ class TinyTransformerForSequenceClassification(PreTrainedModel):
+     config_class = TinyTransformerConfig
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.num_labels = 1
+         self.transformer = TinyTransformer(
+             config.vocab_size,
+             config.embed_dim,
+             config.num_heads,
+             config.ff_dim,
+             config.num_layers
+         )
+
+     def forward(self, input_ids, attention_mask=None):
+         outputs = self.transformer(input_ids)
+         return {"logits": outputs}
+
+ # Load the Tiny-Toxic-Detector model and tokenizer
+ def load_model_and_tokenizer():
+     device = torch.device("cpu")  # Due to GPU overhead, inference is faster on CPU!
+
+     # Load Tiny-Toxic-Detector
+     config = TinyTransformerConfig.from_pretrained("AssistantsLab/Tiny-Toxic-Detector")
+     model = TinyTransformerForSequenceClassification.from_pretrained("AssistantsLab/Tiny-Toxic-Detector", config=config).to(device)
+     tokenizer = AutoTokenizer.from_pretrained("AssistantsLab/Tiny-Toxic-Detector")
+
+     return model, tokenizer, device
+
+ # Prediction function
+ def predict_toxicity(text, model, tokenizer, device):
+     inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128, padding="max_length").to(device)
+     if "token_type_ids" in inputs:
+         del inputs["token_type_ids"]
+
+     with torch.no_grad():
+         outputs = model(**inputs)
+     # The model ends in a sigmoid, so this "logits" value is a probability in [0, 1].
+     logits = outputs["logits"].squeeze()
+     prediction = "Toxic" if logits > 0.5 else "Not Toxic"
+     return prediction
+
+ def main():
+     model, tokenizer, device = load_model_and_tokenizer()
+
+     while True:
+         print("Enter text to classify (or type 'exit' to quit):")
+         text = input()
+
+         if text.lower() == 'exit':
+             print("Exiting...")
+             break
+
+         if text:
+             prediction = predict_toxicity(text, model, tokenizer, device)
+             print(f"Prediction: {prediction}")
+         else:
+             print("No text provided. Please enter some text.")
+
+ if __name__ == "__main__":
+     main()
+ ```
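
For a quick programmatic check, here is a minimal sketch that reuses `load_model_and_tokenizer` and `predict_toxicity` from the script above to verify the ~2M-parameter claim and classify a couple of comments; the example strings are illustrative:

```python
# Assumes the definitions from the README script above are in scope.
model, tokenizer, device = load_model_and_tokenizer()

# Sanity-check the advertised size (~2M parameters).
print(f"Parameters: {sum(p.numel() for p in model.parameters()):,}")

# Score a few illustrative comments.
for comment in ["Have a great day!", "You are a complete idiot."]:
    print(f"{predict_toxicity(comment, model, tokenizer, device)}: {comment!r}")
```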
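Because the architecture is custom, the checkpoint does not load through the `Auto*` classes out of the box. One optional convenience, sketched here assuming a recent `transformers` version, is to register the custom classes with the auto classes after defining them:

```python
# Hypothetical convenience: register the custom classes so Auto* loading works.
# Assumes TinyTransformerConfig and TinyTransformerForSequenceClassification
# from the script above are already defined.
from transformers import AutoConfig, AutoModelForSequenceClassification

AutoConfig.register("tiny_transformer", TinyTransformerConfig)
AutoModelForSequenceClassification.register(TinyTransformerConfig, TinyTransformerForSequenceClassification)

model = AutoModelForSequenceClassification.from_pretrained("AssistantsLab/Tiny-Toxic-Detector")
```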