---

# 🧠 NeuroBERT-Small — Compact BERT for Smarter NLP on Low-Power Devices 🔋

[MIT License](https://opensource.org/licenses/MIT)

- 🙏 [Credits](#credits)
- 💬 [Support & Community](#support--community)

## Overview

`NeuroBERT-Small` is a **compact** NLP model derived from **google/bert-base-uncased**, optimized for **real-time inference** on **low-power devices**. With a quantized size of **~50MB** and **~20M parameters**, it delivers robust contextual language understanding in resource-constrained environments such as mobile apps, wearables, microcontrollers, and smart home devices. Designed for **low latency** and **offline operation**, it suits applications that need intent recognition, classification, and real-time predictions in privacy-first settings with limited connectivity.

- **Model Name**: NeuroBERT-Small
- **Size**: ~50MB (quantized)
- **Parameters**: ~20M
- **Architecture**: Compact BERT (6 layers, hidden size 256, 4 attention heads)
- **Description**: Standard 6-layer, 256-hidden-size architecture
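
The spec list above maps directly onto the Hugging Face `transformers` API. Here is a minimal sketch of loading the model for masked-word prediction; the repo id `boltuix/NeuroBERT-Small` is an assumption — confirm the exact id on the model page:

```python
from transformers import pipeline

# Assumed Hugging Face repo id -- verify on the model page before use.
MODEL_ID = "boltuix/NeuroBERT-Small"

# "fill-mask" bundles the tokenizer and the masked-LM head in one call.
mlm = pipeline("fill-mask", model=MODEL_ID)

# Predict the masked token; candidates come back ranked by score.
predictions = mlm("Turn [MASK] the living room lights.")
for p in predictions[:3]:
    print(f"{p['token_str']}: {p['score']:.3f}")
```

BERT-style models use the literal `[MASK]` token in the input string, as shown above.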

## Key Features

- ⚡ **Compact Design**: ~50MB footprint fits low-power devices with limited storage.
- 🧠 **Robust Contextual Understanding**: Captures deep semantic relationships with a 6-layer architecture.
- 📶 **Offline Capability**: Fully functional without internet access.
- ⚙️ **Real-Time Inference**: Optimized for CPUs, mobile NPUs, and microcontrollers.
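
For the intent-recognition use cases this card targets, the last on-device step is turning a classification head's raw logits into a decision. A dependency-free sketch of that step — the intent labels, logits, and confidence threshold below are illustrative, not part of the model:

```python
import math

# Illustrative smart-home intent labels -- not shipped with the model.
INTENTS = ["lights_on", "lights_off", "set_temperature", "play_music"]

def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def pick_intent(logits, threshold=0.5):
    """Return (intent, confidence); intent is None below the threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    intent = INTENTS[best] if probs[best] >= threshold else None
    return intent, probs[best]

# Example logits, shaped like a 4-way classification head's output.
intent, conf = pick_intent([3.1, 0.2, -1.0, 0.5])
print(intent, round(conf, 3))  # picks "lights_on" with high confidence
```

Rejecting low-confidence predictions (the `threshold` check) matters on edge devices, where a wrong smart-home action is worse than asking the user to repeat a command.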

Install the required dependencies:

```bash
pip install transformers torch
```

Ensure your environment supports Python 3.6+ and has ~50MB of storage for model weights.

## Download Instructions

## Hardware Requirements

- **Processors**: CPUs, mobile NPUs, or microcontrollers (e.g., Raspberry Pi, ESP32-S3)
- **Storage**: ~50MB for model weights (quantized for reduced footprint)
- **Memory**: ~100MB RAM for inference
- **Environment**: Offline or low-connectivity settings
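
The "quantized for reduced footprint" note above is the main reason the weights fit in ~50MB. As a self-contained illustration of the idea — a toy feed-forward block, not the actual NeuroBERT-Small weights — PyTorch's dynamic quantization converts fp32 linear weights to int8:

```python
import io

import torch
import torch.nn as nn

# Toy stand-in for a transformer feed-forward block (fp32 by default).
model = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, 256))

# Dynamically quantize every Linear layer: fp32 weights -> int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def serialized_bytes(m):
    """Size of the model's state_dict when saved to a buffer."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

fp32_size = serialized_bytes(model)
int8_size = serialized_bytes(quantized)
print(f"fp32: {fp32_size} bytes, int8: {int8_size} bytes")  # roughly 4x smaller

# The quantized model runs the same forward pass on CPU.
out = quantized(torch.randn(1, 256))
```

Dynamic quantization keeps activations in fp32 and quantizes only the weights, which is why it shrinks storage (the ~50MB figure) without retraining or calibration data.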

| Model           | Parameters | Size   | Edge/IoT Focus | Tasks Supported           |
|-----------------|------------|--------|----------------|---------------------------|
| NeuroBERT-Small | ~20M       | ~50MB  | High           | MLM, NER, Classification  |
| NeuroBERT-Mini  | ~10M       | ~35MB  | High           | MLM, NER, Classification  |
| NeuroBERT-Tiny  | ~5M        | ~15MB  | High           | MLM, NER, Classification  |
| DistilBERT      | ~66M       | ~200MB | Moderate       | MLM, NER, Classification  |