NickupAI committed on
Commit fdf5b38 · verified · 1 Parent(s): 86dae38

Initial release of Nickup Swallow v1 🦅

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,102 @@
+ ---
+ language:
+ - en
+ - ru
+ - zh
+ - de
+ - es
+ - fr
+ - ja
+ - it
+ - pt
+ - ar
+ tags:
+ - text-classification
+ - spam-detection
+ - content-filtering
+ - security
+ - nlp
+ - efficiency
+ license: apache-2.0
+ base_model: FacebookAI/xlm-roberta-base
+ metrics:
+ - accuracy
+ - latency
+ library_name: transformers
+ ---
+
+ # 🦅 Nickup Swallow (v2) - Optimized Edition
+
+ > **"Focused Filtering for Efficient Deployment."**
+
+ **Nickup Swallow v2** is a refined, optimized version of our multilingual text classification model. While many classification models exist, V2 focuses specifically on **reducing memory footprint and inference latency** for production environments where resources are tight.
+
+ This model is designed to act as a robust **Gatekeeper**, filtering aggressive spam, promotional content, and digital junk before data reaches larger Language Models (LLMs).
+
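+ For a quick smoke test, the standard `transformers` pipeline API can load the model directly. This is a minimal sketch, assuming the `NickupAI/Nickup-Swallow-v2` repository id quoted in the usage section below:
+
+ ```python
+ from transformers import pipeline
+
+ # Build a text-classification pipeline; the output labels come from the
+ # model's id2label map in config.json (0=USELESS, 1=USEFUL).
+ gatekeeper = pipeline("text-classification", model="NickupAI/Nickup-Swallow-v2")
+
+ # Score a candidate message before forwarding it to a larger LLM.
+ print(gatekeeper("WIN A FREE IPHONE!!! Click here: https://example.com/prize"))
+ # Expected shape of the result: [{'label': 'USELESS', 'score': ...}]
+ ```
+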
+ ## ✨ Key Advantages
+
+ * **📏 Resource Reduction:** A **~50% reduction in model size** (270M parameters) compared to the original V1 (550M).
+ * **🌍 Multilingual Coverage:** Built on the strong multilingual foundation of the `XLM-RoBERTa-Base` architecture.
+ * **🎯 Enhanced Robustness:** Training yielded significant functional improvements, particularly high confidence on verifiable spam alongside stable judgment on ambiguous content.
+ * **⏱️ Lower Latency:** The compact size enables faster inference on standard CPU and mobile hardware (see the timing sketch after this list).
+
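+ Size and latency depend on your hardware, so the figures above are best verified locally. A rough sketch for counting parameters and timing a CPU forward pass (not an official benchmark; the repository id is assumed from the usage section):
+
+ ```python
+ import time
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ model_name = "NickupAI/Nickup-Swallow-v2"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()
+
+ # Parameter count (should land near the 270M figure quoted above).
+ print(f"Parameters: {sum(p.numel() for p in model.parameters()) / 1e6:.0f}M")
+
+ # Crude latency check: average over a few forward passes after a warm-up.
+ inputs = tokenizer("Is this message worth keeping?", return_tensors="pt")
+ with torch.no_grad():
+     model(**inputs)  # warm-up pass
+     start = time.perf_counter()
+     for _ in range(20):
+         model(**inputs)
+ print(f"Avg latency: {(time.perf_counter() - start) / 20 * 1000:.1f} ms")
+ ```
+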
+ ## 📊 Performance Comparison
+
+ | Metric | V1 (Large) | **V2 (Optimized)** | Notes |
+ | :--- | :---: | :---: | :--- |
+ | **Model Size** | 550M params | **270M params** | Substantially reduced memory requirement. |
+ | **Accuracy (Est.)** | 89.32% | **~90.5%** | Comparable or better accuracy on the downstream task. |
+ | **Base Architecture** | XLM-RoBERTa-Large | **XLM-RoBERTa-Base** | |
+
+ ## 🧪 Comparative Analysis (Functionality Check)
+
+ We compare V2 against V1 on critical filtering cases (English glosses of the Russian inputs in parentheses):
+
+ | Input Text | V1 Verdict (550M) | **V2 Verdict (270M)** | V2 Confidence (in verdict) | Functional Result |
+ | :--- | :---: | :---: | :---: | :--- |
+ | *"Срочно! Уникальный товар: https://tinyurl.com/sale_forever..."* ("Urgent! Unique product: ...") | LABEL_0 (0.25%) | **🗑️ USELESS** | **99.51%** | **V2 superiority:** near-perfect confidence on malicious spam. |
+ | *"98523498230578509375023957029578239057239057"* | LABEL_0 (1.30%) | **🗑️ USELESS** | **98.56%** | Correctly flags raw digital noise as high-priority junk. |
+ | *"Привет, как дела? Что ешь?"* ("Hi, how are you? What are you eating?") | LABEL_1 (65.45%) | **🗑️ USELESS** | **86.34%** | **Pragmatic filtering:** conversational filler is correctly categorized as non-factual (USELESS). |
+ | *"Солнце в 330 тысяч раз массивнее Земли..."* ("The Sun is 330,000 times more massive than Earth...") | LABEL_1 (98.99%) | **✅ USEFUL** | **99.77%** | Both models confidently preserve valuable facts. |
+
+ ## 💻 Usage
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ import torch.nn.functional as F
+ import torch
+
+ # Load from Hugging Face
+ model_name = "NickupAI/Nickup-Swallow-v2"  # Recommended path
+
+ # Load the model and tokenizer (V2 uses clear labels: 0=USELESS, 1=USEFUL)
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name)
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ model.to(device).eval()
+
+
+ def classify(text, threshold=0.90):
+     """Classifies text and returns a verdict based on a confidence threshold."""
+     inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512).to(device)
+     with torch.no_grad():
+         outputs = model(**inputs)
+         probs = F.softmax(outputs.logits, dim=-1)
+
+     # Label 0 = USELESS/Spam (the target class for filtering)
+     useless_prob = probs[0][0].item()
+     useful_prob = probs[0][1].item()
+
+     # Apply the pragmatic filtering threshold (90% confidence required to block)
+     if useless_prob > threshold:
+         return f"⛔ Blocked (Useless Confidence: {useless_prob:.2%})"
+     else:
+         return f"✅ Allowed (Useful Confidence: {useful_prob:.2%})"
+
+ # Example usage
+ text_spam = "BUY CRYPTO NOW! Click this link to get rich: https://scam-link.net"
+ text_fact = "The most popular Linux distribution used for servers is generally Ubuntu or CentOS."
+
+ print(classify(text_spam))
+ print(classify(text_fact))
+ ```
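+
+ For throughput-oriented deployments, the same model handles padded batches. A hedged sketch that reuses `tokenizer`, `model`, `device`, and `F` from the snippet above; the `classify_batch` helper is illustrative, not part of the repository:
+
+ ```python
+ def classify_batch(texts, threshold=0.90):
+     """Batch variant of classify(): pads a list of texts and scores them in one pass."""
+     inputs = tokenizer(texts, return_tensors="pt", truncation=True,
+                        max_length=512, padding=True).to(device)
+     with torch.no_grad():
+         probs = F.softmax(model(**inputs).logits, dim=-1)
+     # Column 0 is the USELESS/Spam probability, as in classify() above.
+     return ["⛔ Blocked" if p[0].item() > threshold else "✅ Allowed" for p in probs]
+
+ print(classify_batch([text_spam, text_fact]))
+ ```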
config.json ADDED
@@ -0,0 +1,36 @@
+ {
+   "architectures": [
+     "XLMRobertaForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "dtype": "float32",
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "USELESS",
+     "1": "USEFUL"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "USEFUL": 1,
+     "USELESS": 0
+   },
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "xlm-roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "problem_type": "single_label_classification",
+   "transformers_version": "4.57.3",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 250002
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bd6f2f987ca02147e8bc5e102aa0bdcfa4ec85461389b103a31a9cf80d497d32
+ size 1112205008
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ffb37461c391f096759f4a9bbbc329da0f36952f88bab061fcf84940c022e98
+ size 17082999
tokenizer_config.json ADDED
@@ -0,0 +1,55 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "250001": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "XLMRobertaTokenizerFast",
+   "unk_token": "<unk>"
+ }