---
language:
- nag
license: cc-by-4.0
tags:
- bert
- roberta
- nagamese
- low-resource
- creole
- northeast-india
- token-classification
- fill-mask
datasets:
- agnivamaiti/naganlp-ner-annotated-corpus
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: NagameseBERT
  results:
  - task:
      type: token-classification
      name: Part-of-Speech Tagging
    dataset:
      name: NagaNLP Annotated Corpus
      type: agnivamaiti/naganlp-ner-annotated-corpus
    metrics:
    - type: accuracy
      value: 88.35
      name: Accuracy
    - type: f1
      value: 80.72
      name: F1 (macro)
  - task:
      type: token-classification
      name: Named Entity Recognition
    dataset:
      name: NagaNLP Annotated Corpus
      type: agnivamaiti/naganlp-ner-annotated-corpus
    metrics:
    - type: accuracy
      value: 91.74
      name: Accuracy
    - type: f1
      value: 56.51
      name: F1 (macro)
---

# NagameseBERT

[![HuggingFace Model](https://img.shields.io/badge/🤗%20HuggingFace-Model-yellow)](https://huggingface.co/MWirelabs/nagamesebert)
[![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-blue.svg)](https://creativecommons.org/licenses/by/4.0/)
[![Language](https://img.shields.io/badge/Language-Nagamese-green)](https://en.wikipedia.org/wiki/Nagamese_Creole)

**A foundational BERT model for Nagamese Creole** - a compact, efficient language model for a low-resource Northeast Indian language.

---

## Overview

NagameseBERT is a 7M-parameter RoBERTa-style BERT model pre-trained on 42,552 Nagamese sentences. Despite being 15× smaller than multilingual models such as mBERT (110M) and XLM-RoBERTa (125M), it achieves competitive performance on downstream NLP tasks while offering significant efficiency advantages.

**Key Features:**
- **Compact**: 6.9M parameters (15× smaller than mBERT)
- **Efficient**: Pre-trained in 35 minutes on a single A40 GPU
- **Custom tokenizer**: 8K BPE vocabulary optimized for Nagamese
- **Rigorous evaluation**: Multi-seed testing (n=3) with reproducible results
- **Open**: Model, code, and data splits publicly available

---

## Performance

Multi-seed evaluation results (mean ± std, n=3):

| Model | Parameters | POS Accuracy | POS F1 (macro) | NER Accuracy | NER F1 (macro) |
|-------|------------|--------------|----------------|--------------|----------------|
| **NagameseBERT** | **7M** | **88.35 ± 0.71%** | **0.807 ± 0.013** | **91.74 ± 0.68%** | **0.565 ± 0.054** |
| mBERT | 110M | 95.14 ± 0.47% | 0.916 ± 0.008 | 96.11 ± 0.72% | 0.750 ± 0.064 |
| XLM-RoBERTa | 125M | 95.64 ± 0.56% | 0.919 ± 0.008 | 96.38 ± 0.26% | 0.819 ± 0.066 |

**Trade-off**: 6-7 percentage points lower accuracy in exchange for a 15× parameter reduction, enabling deployment in resource-constrained settings.

---

## Model Details

### Architecture
- **Type**: RoBERTa-style BERT (no token type embeddings)
- **Hidden size**: 256
- **Layers**: 6 transformer blocks
- **Attention heads**: 4 per layer
- **Intermediate size**: 1,024
- **Max sequence length**: 64 tokens
- **Total parameters**: 6,878,528
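
A configuration consistent with these numbers can be sketched as follows; this is illustrative only, and the authoritative config ships with the checkpoint (load it via `AutoConfig.from_pretrained`):

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# Sketch of a config matching the card's numbers; prefer the shipped config:
# AutoConfig.from_pretrained("MWirelabs/nagamesebert")
config = RobertaConfig(
    vocab_size=8000,                 # 8K BPE vocabulary (see Tokenizer below)
    hidden_size=256,
    num_hidden_layers=6,
    num_attention_heads=4,
    intermediate_size=1024,
    max_position_embeddings=64 + 2,  # assumption: RoBERTa's usual +2 offset
    type_vocab_size=1,               # no token type embeddings
)
model = RobertaForMaskedLM(config)
print(f"{model.num_parameters():,}")  # ~6.9M parameters
```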

### Tokenizer
- **Type**: Byte-Pair Encoding (BPE)
- **Vocabulary size**: 8,000 tokens
- **Special tokens**: `[PAD]`, `[UNK]`, `[CLS]`, `[SEP]`, `[MASK]`
- **Normalization**: NFD Unicode + accent stripping
- **Case**: Preserved (for proper nouns and code-switched English)
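
A quick way to inspect how the BPE vocabulary segments Nagamese text (the resulting subwords depend on the trained merges, so the output is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MWirelabs/nagamesebert")
print(tokenizer.vocab_size)                             # 8000
print(tokenizer.tokenize("Toi moi laga sathi hobo pare?"))
```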
 
### Training Data
- **Corpus size**: 42,552 Nagamese sentences
- **Average length**: 11.82 tokens/sentence
- **Split**: 90% train (38,296) / 10% validation (4,256)
- **Sources**: Web, social media, community contributions (deduplicated)

### Pre-training
- **Objective**: Masked Language Modeling (15% masking)
- **Optimizer**: AdamW (lr=5e-4, weight_decay=0.01)
- **Batch size**: 64
- **Epochs**: 50
- **Training time**: ~35 minutes
- **Hardware**: NVIDIA A40 (48GB)
- **Final validation loss**: 2.79
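
This recipe maps onto the standard `transformers` MLM pipeline. The sketch below mirrors the listed hyperparameters; `train_dataset` is a placeholder for the tokenized corpus (truncated to 64 tokens), not the released training script:

```python
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          RobertaForMaskedLM, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("MWirelabs/nagamesebert")
model = RobertaForMaskedLM.from_pretrained("MWirelabs/nagamesebert")

# Dynamic masking at 15%, matching the MLM objective above
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="./nagamesebert-mlm",
    num_train_epochs=50,
    per_device_train_batch_size=64,
    learning_rate=5e-4,
    weight_decay=0.01,
)
trainer = Trainer(model=model, args=args, data_collator=collator,
                  train_dataset=train_dataset)  # placeholder dataset
trainer.train()
```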
 
---

## Usage

### Load Model and Tokenizer

```python
from transformers import AutoTokenizer, AutoModel

model_name = "MWirelabs/nagamesebert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Example usage
text = "Toi moi laga sathi hobo pare?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
# outputs.last_hidden_state: contextual embeddings, shape (1, seq_len, 256)
```
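
Because the model was pre-trained with masked language modeling, it also works directly with the `fill-mask` pipeline; the masked sentence below is illustrative:

```python
from transformers import pipeline

# The tokenizer's mask token is [MASK] (see Tokenizer above)
fill = pipeline("fill-mask", model="MWirelabs/nagamesebert")
for pred in fill("Toi moi laga [MASK] hobo pare?"):
    print(pred["token_str"], round(pred["score"], 3))
```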

### Fine-tuning for Token Classification

```python
from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer

# Load model with a fresh token-classification head;
# num_labels is the size of your tag set (e.g., 13 POS tags)
model = AutoModelForTokenClassification.from_pretrained(
    "MWirelabs/nagamesebert",
    num_labels=num_labels,
)

# Training arguments
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=100,
    per_device_train_batch_size=8,
    learning_rate=3e-5,
    weight_decay=0.01,
)

# Train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```
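
Token classification also requires aligning word-level tags with BPE subwords. A common scheme (a sketch, not necessarily this project's exact preprocessing) labels the first subword of each word and masks the rest with `-100` so the loss ignores them:

```python
def tokenize_and_align(words, tags, tokenizer, max_length=64):
    """Tokenize a pre-split sentence and align word-level tag ids to subwords."""
    enc = tokenizer(words, is_split_into_words=True,
                    truncation=True, max_length=max_length)
    labels, prev = [], None
    for word_id in enc.word_ids():
        if word_id is None:        # special tokens ([CLS], [SEP])
            labels.append(-100)
        elif word_id != prev:      # first subword keeps the word's tag
            labels.append(tags[word_id])
        else:                      # later subwords are ignored by the loss
            labels.append(-100)
        prev = word_id
    enc["labels"] = labels
    return enc
```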

---

## Evaluation

### Dataset
- **Source**: [NagaNLP Annotated Corpus](https://huggingface.co/datasets/agnivamaiti/naganlp-ner-annotated-corpus)
- **Total**: 214 sentences
- **Split** (seed=42): 171 train / 21 dev / 22 test (80/10/10)
- **POS tags**: 13 Universal Dependencies tags
- **NER tags**: 4 entity types (PER, LOC, ORG, MISC) in IOB2 format

### Experimental Setup
- **Seeds**: 42, 123, 456 (n=3 for variance estimation)
- **Batch size**: 32
- **Learning rate**: 3e-5
- **Epochs**: 100
- **Optimization**: AdamW with 100 warmup steps
- **Hardware**: NVIDIA A40
- **Metrics**: Token-level accuracy and macro-averaged F1
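
The reported metrics can be computed as below (a sketch assuming `-100` marks ignored subword/special-token positions, as in the alignment helper above):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def token_metrics(pred_ids, label_ids):
    """Token-level accuracy and macro F1, skipping ignored (-100) positions."""
    pred_ids, label_ids = np.asarray(pred_ids), np.asarray(label_ids)
    keep = label_ids != -100
    y_true, y_pred = label_ids[keep], pred_ids[keep]
    return {"accuracy": accuracy_score(y_true, y_pred),
            "f1_macro": f1_score(y_true, y_pred, average="macro")}
```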
 
**Data Leakage Statement**: All splits were created with a fixed seed (42), with no sentence overlap between the train/dev/test sets.

---

## Limitations

- **Corpus size**: 42K sentences is modest; expansion to 100K+ could improve performance
- **Evaluation scale**: The small test set (22 sentences) limits statistical power
- **Task scope**: Evaluated only on token classification; broader task assessment is needed
- **Efficiency metrics**: No quantitative inference benchmarks (latency, memory) provided yet
- **Data documentation**: Complete data provenance and licensing remain to be formalized

---

## Citation

If you use NagameseBERT in your research, please cite:

```bibtex
@misc{nagamesebert2025,
  title={Bootstrapping BERT for Nagamese: A Low-Resource Creole Language},
  author={MWire Labs},
  year={2025},
  url={https://huggingface.co/MWirelabs/nagamesebert}
}
```

---

## Contact

**MWire Labs**
Shillong, Meghalaya, India
Website: [MWire Labs](https://mwirelabs.com)

---

## License

This model is released under [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).

You are free to:
- **Share** — copy and redistribute the material
- **Adapt** — remix, transform, and build upon the material

Under the following terms:
- **Attribution** — You must give appropriate credit to MWire Labs

---

## Acknowledgments

We thank the Nagamese-speaking community for their contributions to corpus development and validation.