boltuix committed
Commit 88c4020 · verified · 1 Parent(s): 91ff5d0

Update README.md

Files changed (1): README.md (+9 -9)
README.md CHANGED
@@ -45,12 +45,12 @@ library_name: transformers
 ---
 
 
-![Banner](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5Qto7D0vUhnfdVq5JQ0yIaqkrj70TiM6Q8f5UfkX__Ht1Ad2KSeshb6SPHa7Ri8dQFnWGXknOckqCjgIlf6sOQge_1BYzoAT6YQMgQSjgrsA0m8YNSTGirUY5JA-zTarCIKelkYfJdS1KYrkR0PT46TfqZaMyS7W1SzhUsbHCPdKm09ftRo4znKbP8Mc/s4000/small.jpg)
+![Banner](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimiqZpB7hWFuvJz_GA-3Rj0ZbYpqS-6UYVt2Ok7i4I1c3muCjjkOHne58IS9MKxIppSTvDAnqViyT9qQAgywjLYDmhqFoqoaThu9Ce97gJzmwK2tGZb0JOQd3A8EYFSzyPaeasdiTZU7KdVhoPXKbOO_N02XB5vL4cX5UpBE17AiovMGgVE1JqoT2kZHg/s16000/small.jpg)
 
 # 🧠 NeuroBERT-Small — Compact BERT for Smarter NLP on Low-Power Devices 🔋
 
 [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
-[![Model Size](https://img.shields.io/badge/Size-~45MB-blue)](#)
+[![Model Size](https://img.shields.io/badge/Size-~50MB-blue)](#)
 [![Tasks](https://img.shields.io/badge/Tasks-MLM%20%7C%20Intent%20Detection%20%7C%20Text%20Classification%20%7C%20NER-orange)](#)
 [![Inference Speed](https://img.shields.io/badge/Optimized%20For-Low--Power%20Devices-green)](#)
 
@@ -72,14 +72,14 @@ library_name: transformers
 - 🙏 [Credits](#credits)
 - 💬 [Support & Community](#support--community)
 
-![Banner](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijUXvkohDsomgIveUNAcDVdz2gRXxyeJ7wQEna-ZwB8U3kpgSq7_PMthS7eJlLbf4ZS6rVpAmuXbfYz3BJIAcsMnr65EqWRpcZXsHYdygPhqmZvf9xbVZorcO_EkRQfmGDxu6B61lZoQlm9UVZivrt-2ef_RgvUwPixWuidH9PWjskQUPcDl1lLlfp6Zg/s6250/small-help.jpg)
+![Banner](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDTnQ3z0BhHVEVOLVwZ4heya2dOK68R6a4pgRxsSHu--cpvZr2Sjc2rmdR3Xrs2d2V1wKUtbDkI9tcE0KJoLQ2MxCwtqej7SyGxj7jHDqg0nVUFmnxN-WxWo4cAjoYdSEtclts8LHw3MdnceR1GafZj1VXeM8CxaOiktSeSOo54Bcz8M7lzLhzM7Ur45k/s16000/small-help.jpg)
 
 ## Overview
 
-`NeuroBERT-Small` is a **compact** NLP model derived from **google/bert-base-uncased**, optimized for **real-time inference** on **low-power devices**. With a quantized size of **~45MB** and **~20M parameters**, it delivers robust contextual language understanding for resource-constrained environments like mobile apps, wearables, microcontrollers, and smart home devices. Designed for **low-latency**, **offline operation**, and **smarter NLP**, it’s perfect for applications requiring intent recognition, classification, and real-time predictions in privacy-first settings with limited connectivity.
+`NeuroBERT-Small` is a **compact** NLP model derived from **google/bert-base-uncased**, optimized for **real-time inference** on **low-power devices**. With a quantized size of **~50MB** and **~20M parameters**, it delivers robust contextual language understanding for resource-constrained environments like mobile apps, wearables, microcontrollers, and smart home devices. Designed for **low-latency**, **offline operation**, and **smarter NLP**, it’s perfect for applications requiring intent recognition, classification, and real-time predictions in privacy-first settings with limited connectivity.
 
 - **Model Name**: NeuroBERT-Small
-- **Size**: ~45MB (quantized)
+- **Size**: ~50MB (quantized)
 - **Parameters**: ~20M
 - **Architecture**: Compact BERT (6 layers, hidden size 256, 4 attention heads)
 - **Description**: Standard 6-layer, 256-hidden
@@ -87,7 +87,7 @@ library_name: transformers
 
 ## Key Features
 
-- ⚡ **Compact Design**: ~45MB footprint fits low-power devices with limited storage.
+- ⚡ **Compact Design**: ~50MB footprint fits low-power devices with limited storage.
 - 🧠 **Robust Contextual Understanding**: Captures deep semantic relationships with a 6-layer architecture.
 - 📶 **Offline Capability**: Fully functional without internet access.
 - ⚙️ **Real-Time Inference**: Optimized for CPUs, mobile NPUs, and microcontrollers.
@@ -101,7 +101,7 @@ Install the required dependencies:
 pip install transformers torch
 ```
 
-Ensure your environment supports Python 3.6+ and has ~45MB of storage for model weights.
+Ensure your environment supports Python 3.6+ and has ~50MB of storage for model weights.
 
 ## Download Instructions
 
@@ -297,7 +297,7 @@ NeuroBERT-Small is designed for **low-power devices** in **edge and IoT scenario
 ## Hardware Requirements
 
 - **Processors**: CPUs, mobile NPUs, or microcontrollers (e.g., Raspberry Pi, ESP32-S3)
-- **Storage**: ~45MB for model weights (quantized for reduced footprint)
+- **Storage**: ~50MB for model weights (quantized for reduced footprint)
 - **Memory**: ~100MB RAM for inference
 - **Environment**: Offline or low-connectivity settings
 
@@ -395,7 +395,7 @@ To adapt NeuroBERT-Small for custom IoT tasks (e.g., specific smart home command
 
 | Model | Parameters | Size | Edge/IoT Focus | Tasks Supported |
 |-----------------|------------|--------|----------------|-------------------------|
-| NeuroBERT-Small | ~20M | ~45MB | High | MLM, NER, Classification |
+| NeuroBERT-Small | ~20M | ~50MB | High | MLM, NER, Classification |
 | NeuroBERT-Mini | ~10M | ~35MB | High | MLM, NER, Classification |
 | NeuroBERT-Tiny | ~5M | ~15MB | High | MLM, NER, Classification |
 | DistilBERT | ~66M | ~200MB | Moderate | MLM, NER, Classification |
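The commit only corrects the quoted checkpoint size from ~45MB to ~50MB. As a quick sanity check that the new figure is consistent with the stated ~20M parameter count, the sketch below computes raw weight storage at a few common precisions; the README does not state which quantization scheme is used, so the bit-widths here are illustrative assumptions, not the model's actual export format:

```python
# Back-of-the-envelope: on-disk size of ~20M parameters at common precisions.
# The quantization scheme is not stated in the README; bit-widths are illustrative.
params = 20_000_000  # parameter count quoted in the model card

def size_mb(params: int, bits: int) -> float:
    """Raw weight storage in megabytes at the given bits per parameter."""
    return params * bits / 8 / 1e6

for label, bits in [("fp32", 32), ("fp16", 16), ("int8", 8)]:
    print(f"{label}: {size_mb(params, bits):.0f} MB")
# fp32: 80 MB
# fp16: 40 MB
# int8: 20 MB
```

At 16-bit precision the weights alone come to ~40 MB, so a ~50MB on-disk footprint is plausible once tokenizer, vocabulary, and config files are included — the same ballpark as the ~45MB badge this commit replaces.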