---
tags:
- tangkhul
- corpus
- BERT
- fill-mask
- text-generation
- low-resource-language
license: apache-2.0
base_model:
- google-bert/bert-base-uncased
---

# Model Card for TangkhulBERT

This repository contains TangkhulBERT, the first publicly available foundational language model for Tangkhul, a low-resource Tibeto-Burman language. The model was trained from scratch using a Masked Language Modeling (MLM) objective.



## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

TangkhulBERT is a transformer-based model with a BERT-base architecture. It was developed to provide a crucial NLP resource for the Tangkhul language community and to serve as a starting point for various downstream tasks.


- **Developed by:** Vinos Shimray
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** BERT Base
- **Language(s) (NLP):** Tangkhul
- **License:** apache-2.0
- **Finetuned from model [optional]:** This model was trained from scratch and not fine-tuned from any other model.

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

The model is intended for direct use in Masked Language Modeling tasks.

```python
from transformers import pipeline

# Load the fill-mask pipeline with the TangkhulBERT model and tokenizer
fill_mask = pipeline(
    "fill-mask",
    model="vinshim/TangkhulBERT",
    tokenizer="vinshim/TangkhulBERT",
)

# Test with a Tangkhul sentence
result = fill_mask("Kazing eina ngalei [MASK].")

# Print the top predictions
for prediction in result:
    print(prediction)
```



### Downstream Use [optional]

This model is designed to be a foundational, pre-trained model for fine-tuning on specific downstream tasks such as:

- Text Classification (e.g., sentiment analysis, topic categorization)
- Named Entity Recognition (NER)
- Question Answering
- Machine Translation (as an encoder)
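
As an illustration of the first task above, here is a minimal fine-tuning sketch using the Hugging Face `Trainer`. The dataset file `tangkhul_sentiment.csv`, the label count, and the output directory are hypothetical placeholders, not artifacts shipped with this model.

```python
# Minimal fine-tuning sketch for Tangkhul text classification.
# "tangkhul_sentiment.csv" (with "text" and "label" columns) is a
# hypothetical placeholder dataset; adjust names and num_labels to your task.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

repo_id = "vinshim/TangkhulBERT"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id, num_labels=2)

dataset = load_dataset("csv", data_files="tangkhul_sentiment.csv")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tangkhulbert-sentiment", num_train_epochs=3),
    train_dataset=dataset,
)
trainer.train()
```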



### Out-of-Scope Use

This model is not intended for generating long-form, coherent text. Due to the limited size of the training corpus, it should not be used in safety-critical applications or for tasks requiring deep, nuanced world knowledge. The model only understands Tangkhul and will not perform well on other languages.

## Bias, Risks, and Limitations

The primary limitation is the size of the pre-training corpus (4 MB). While significant for a low-resource language, this is small compared to models for high-resource languages. The model will reflect any biases present in the source text data. Its knowledge is confined to the domains covered in the training corpus and may not generalize well to other contexts.


### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model for Masked Language Modeling.

```python
from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM

# Hugging Face repository ID for the model
repo_id = "vinshim/TangkhulBERT"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMaskedLM.from_pretrained(repo_id)

# Create the pipeline
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Use the model
result = fill_mask("Kazing eina ngalei [MASK].")
print(result)
```

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was pre-trained on a 4 MB plain-text corpus of the Tangkhul language, collected from various digital sources. This data is not available for download but can be described as general-purpose text.
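
Since the corpus follows a sentence-per-line format (see Preprocessing below), loading such data for MLM pre-training could look like the sketch below; the file name `tangkhul_corpus.txt` is a hypothetical stand-in for the private corpus.

```python
# Minimal sketch: load a sentence-per-line corpus and tokenize it for MLM.
# "tangkhul_corpus.txt" is a hypothetical placeholder; the actual corpus
# is not distributed with this model.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinshim/TangkhulBERT")

# Each line of the text file becomes one training example
dataset = load_dataset("text", data_files="tangkhul_corpus.txt")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
```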


### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

The text was preprocessed by:

1. Converting all text to lowercase.
2. Ensuring a sentence-per-line format.
3. Programmatically adding a full stop (.) to every line that lacked sentence-ending punctuation.
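
The following is a minimal sketch of these three steps. The original preprocessing script was not released, so the function and file names here are illustrative assumptions.

```python
# Illustrative reconstruction of the three preprocessing steps described
# above; the original script was not released, so names are hypothetical.
SENTENCE_END = (".", "?", "!")

def preprocess_line(line: str) -> str:
    # Step 1: lowercase; Step 2 assumes input is already one sentence per line
    line = line.strip().lower()
    # Step 3: append a full stop if the line lacks sentence-ending punctuation
    if line and not line.endswith(SENTENCE_END):
        line += "."
    return line

with open("raw_corpus.txt", encoding="utf-8") as src, \
     open("tangkhul_corpus.txt", "w", encoding="utf-8") as dst:
    for line in src:
        cleaned = preprocess_line(line)
        if cleaned:
            dst.write(cleaned + "\n")
```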


#### Training Hyperparameters

- **Training regime:** fp16 mixed precision
- **Epochs:** 500
- **Batch size:** 128
- **Optimizer:** AdamW with default settings
- **Learning rate:** 5e-5
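
Expressed as Hugging Face `TrainingArguments`, these settings would look roughly like the sketch below; the output directory is a placeholder, and AdamW with default settings is already the `Trainer` default.

```python
# Approximate reconstruction of the reported hyperparameters using the
# Hugging Face Trainer; "tangkhulbert-pretrain" is a placeholder path.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tangkhulbert-pretrain",
    num_train_epochs=500,
    per_device_train_batch_size=128,
    learning_rate=5e-5,  # reported learning rate
    fp16=True,           # fp16 mixed precision
    # AdamW with default settings is the Trainer's default optimizer
)
```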

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

Pre-training took approximately 3 hours on a single NVIDIA A100 GPU.

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

The primary evaluation metric during pre-training was the training loss (cross-entropy on the Masked Language Modeling objective), from which perplexity can be derived as exp(loss).

### Results
The model achieved a final pre-training loss of 2.9969 after 22,000 training steps.
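
Assuming the reported value is the mean cross-entropy per masked token, this corresponds to a pseudo-perplexity of exp(2.9969) ≈ 20.0 on the MLM objective.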

#### Summary



## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 1 × NVIDIA A100 GPU
- **Hours used:** ~3
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

BERT-base architecture, trained from scratch with a Masked Language Modeling (MLM) objective.

### Compute Infrastructure

[More Information Needed]

#### Hardware

1 × NVIDIA A100 GPU

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]