Update README.md
README.md CHANGED

@@ -7,7 +7,7 @@ datasets:
 - sentence-transformers/all-nli
 language:
 - en
-new_version: v1.
+new_version: v1.3
 base_model:
 - google-bert/bert-base-uncased
 pipeline_tag: text-classification
@@ -47,7 +47,7 @@ metrics:
 library_name: transformers
 ---
 
-
+
 # boltuix/NeuroBERT-Mini - The Ultimate Lightweight NLP Powerhouse!
 
 | Feature | Description |
 |------------------------|-------------------------------------------------------|
+| **Architecture** | Nimble BERT (8 layers, hidden size 256) |
+| **Parameters** | ~30M, quantized to a sleek ~50MB |
+| **Model Size** | ~50MB, ideal for edge devices |
 | **Speed** | Ultra-fast inference (<50ms on edge devices) |
 | **Use Cases** | NER, intent detection, offline chatbots, voice AI |
 | **Datasets** | Wikipedia, BookCorpus, MNLI, All-NLI |
@@ -179,7 +179,7 @@ Input: The capital of France is [MASK].
 - **MNLI (MultiNLI)**: Built for natural language inference.
 - **All-NLI**: Enhanced with extra NLI data for smarter understanding.
 
-*Fine-Tuning Brilliance*: Starting from `google-bert/bert-base-uncased` (12 layers, 768 hidden, 110M parameters), NeuroBERT-Mini was fine-tuned to a streamlined
+*Fine-Tuning Brilliance*: Starting from `google-bert/bert-base-uncased` (12 layers, 768 hidden, 110M parameters), NeuroBERT-Mini was fine-tuned to a streamlined 8 layers, 256 hidden, and ~30M parameters, creating a compact yet powerful NLP solution for edge AI!
 
 ---
 
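On the ~50MB quantized size quoted in the table: a rough on-disk estimate is just parameter count times bytes per parameter. The byte widths below are illustrative assumptions, not measurements of the released checkpoint; for ~30M parameters the results bracket the card's ~50MB figure, consistent with weights stored below fp32 precision (the exact size depends on which tensors are quantized and to what width):

```python
def checkpoint_mb(params: float, bytes_per_param: float) -> float:
    """Rough on-disk size in MiB, ignoring file-format overhead."""
    return params * bytes_per_param / 2**20

# Illustrative widths only; the released file may mix precisions.
for label, width in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(f"{label}: {checkpoint_mb(30e6, width):.0f} MiB")
# fp32: 114 MiB, fp16: 57 MiB, int8: 29 MiB
```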