Update README.md
README.md CHANGED

```diff
@@ -72,7 +72,7 @@ library_name: transformers
 
 ## Overview
 
-`BERT-Lite` is an **ultra-lightweight** NLP model derived from **google/
+`BERT-Lite` is an **ultra-lightweight** NLP model derived from **google/bert-base-uncased**, optimized for **real-time inference** on **edge and IoT devices**. With a quantized size of **~10MB** and **~2M parameters**, it delivers efficient contextual language understanding for highly resource-constrained environments like microcontrollers, wearables, and smart home devices. Designed for **low-latency** and **offline operation**, BERT-Lite is perfect for privacy-first applications requiring intent detection, text classification, or semantic understanding with minimal connectivity.
 
 - **Model Name**: BERT-Lite
 - **Size**: ~10MB (quantized)
```
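The "~10MB (quantized)" figure in the README comes from post-training quantization, which stores weights as 8-bit integers instead of 32-bit floats. As a rough illustration of that size reduction (using a tiny stand-in network, not BERT-Lite itself, and PyTorch's dynamic quantization rather than whatever pipeline the authors used):

```python
# Sketch: how post-training dynamic quantization shrinks a model's
# serialized size. The toy network below is a stand-in for illustration;
# it is NOT BERT-Lite, and the exact quantization recipe the model card
# refers to is not specified in the diff.
import io

import torch
import torch.nn as nn


def serialized_size(model: nn.Module) -> int:
    """Return the size in bytes of the model's serialized state dict."""
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    return buf.getbuffer().nbytes


# A small stand-in network with float32 weights.
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 64))

# Dynamic quantization converts Linear weights to int8 ahead of time;
# activations stay in float and are quantized on the fly at inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

fp32_bytes = serialized_size(model)
int8_bytes = serialized_size(quantized)
print(f"fp32: {fp32_bytes} bytes, int8: {int8_bytes} bytes")
```

The int8 copy serializes to roughly a quarter of the float32 size, since the dominant cost is the weight matrices; the same principle, applied to a distilled BERT variant, yields the small on-disk footprint the card advertises.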