---
library_name: transformers
license: llama3.1
language:
- bn
base_model: meta-llama/Llama-3.1-8B-Instruct
---

# Abegi-Llama3

## Model Card Summary

**Abegi-Llama3** is a Bangla-focused large language model fine-tuned from **Meta LLaMA‑3.1‑8B‑Instruct**. The model is optimized for Bangla (bn) conversational text generation and instruction-following tasks, while retaining general-purpose reasoning and generation capabilities inherited from the base model.

---

## Model Details

### Model Description

Abegi-Llama3 is a decoder-only Transformer-based causal language model fine-tuned to improve naturalness, fluency, and instruction-following behavior in Bangla. It is suitable for chat-style interactions, content generation, and educational or research use cases involving the Bangla language.

* **Developed by:** Promit123546
* **Model type:** Causal Language Model (Decoder-only Transformer)
* **Base model:** meta-llama/Llama-3.1-8B-Instruct
* **Language(s):** Bangla (bn), with partial English support inherited from the base model
* **License:** Llama 3.1 Community License (inherited from the base model)
* **Fine-tuned from:** meta-llama/Llama-3.1-8B-Instruct

### Model Sources

* **Repository:** [https://huggingface.co/Promit123546/Abegi-Llama3](https://huggingface.co/Promit123546/Abegi-Llama3)
* **Base Model:** [https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)

---

## Uses

### Direct Use

The model can be used directly for:

* Bangla conversational agents and chatbots
* Bangla text generation and rewriting
* Question answering in Bangla
* Educational and experimental NLP applications

### Downstream Use

With further fine-tuning, the model can be adapted for:

* Domain-specific Bangla assistants (education, customer support, documentation)
* Bangla instruction-following systems
* Research on low-resource or regional language modeling
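
As a sketch of how such adaptation might be set up, the snippet below attaches LoRA adapters with the `peft` library. All hyperparameters shown are illustrative assumptions, not the configuration used to produce Abegi-Llama3.

```python
# Illustrative sketch: parameter-efficient fine-tuning with LoRA via peft.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("Promit123546/Abegi-Llama3")

lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style blocks
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trained
```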

### Out-of-Scope Use

The model is **not recommended** for:

* Medical, legal, or financial decision-making
* High-stakes or safety-critical systems
* Generating harmful, misleading, or malicious content

---

## Bias, Risks, and Limitations

* The model may reflect biases present in the training and fine-tuning data
* It can produce hallucinated or incorrect information
* Performance may degrade for tasks outside Bangla or conversational generation
* Cultural or linguistic nuances may not always be handled perfectly

### Recommendations

* Verify critical outputs using trusted external sources
* Apply moderation and safety filters in production environments
* Avoid use in sensitive or high-risk applications without human oversight

---

## How to Get Started with the Model

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Promit123546/Abegi-Llama3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# bf16 weights and automatic device placement keep the 8B model within
# typical GPU memory; drop these arguments to load in fp32 on CPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "বাংলায় কৃত্রিম বুদ্ধিমত্তা কী?"  # "What is artificial intelligence, in Bangla?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
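
The quickstart above feeds a raw string; because the model descends from an instruct-tuned base, conversational prompts will generally behave better when rendered through the tokenizer's chat template. A minimal sketch, assuming the Llama 3.1 chat template was preserved during fine-tuning:

```python
# Continuing from the quickstart: chat-style prompting via the chat template.
messages = [
    {"role": "user", "content": "বাংলায় একটি ছোট গল্প লিখুন।"},  # "Write a short story in Bangla."
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=200,
    do_sample=True,   # sampled decoding usually reads more naturally than greedy
    temperature=0.7,
    top_p=0.9,
)
# Strip the prompt tokens so only the generated reply is printed.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```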

---

## Training Details

### Training Data

* **Description:** Not publicly disclosed
* **Notes:** The model was fine-tuned on curated Bangla and instruction-style text data suitable for conversational generation

### Training Procedure

#### Preprocessing

* Tokenization using the LLaMA‑3.1 tokenizer
* Standard text normalization and prompt–response formatting
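
The exact formatting scheme is not disclosed. As an illustration only, prompt–response pairs are commonly rendered into a single training string via the tokenizer's chat template, along these lines:

```python
# Illustrative only: formatting one prompt–response pair for supervised
# fine-tuning with the (assumed) inherited Llama 3.1 chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Promit123546/Abegi-Llama3")

example = [
    {"role": "user", "content": "বাংলাদেশের রাজধানী কী?"},        # "What is the capital of Bangladesh?"
    {"role": "assistant", "content": "বাংলাদেশের রাজধানী ঢাকা।"},  # "The capital of Bangladesh is Dhaka."
]
# tokenize=False returns the fully formatted string rather than token IDs.
print(tokenizer.apply_chat_template(example, tokenize=False))
```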

#### Training Hyperparameters

* **Training regime:** Mixed precision (fp16 or bf16)
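
For reference, mixed precision is typically enabled through a single flag in the Hugging Face `Trainer` configuration; the sketch below is an assumption, as the actual training setup is not published:

```python
# Sketch only: enabling bf16 mixed precision in TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="abegi-llama3-ft",    # hypothetical output path
    bf16=True,                       # use fp16=True instead on pre-Ampere GPUs
    per_device_train_batch_size=1,   # assumed
    gradient_accumulation_steps=16,  # assumed
    learning_rate=2e-5,              # assumed
)
```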

#### Speeds, Sizes, Times

* **Model size:** ~8B parameters (inherited from the base model)
* **Training duration:** Not publicly disclosed

---

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

* Internal and informal Bangla prompt-based evaluation

#### Factors

* Fluency in Bangla
* Instruction adherence
* Coherence and relevance

#### Metrics

* Qualitative human evaluation

### Results

The model demonstrates fluent Bangla text generation and stable conversational behavior. No formal benchmark results are currently published.

---

## Model Examination

No formal interpretability or probing studies have been conducted.

---

## Environmental Impact

Environmental impact metrics were not recorded during training.

Carbon emissions may be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) (Lacoste et al., 2019) if compute details become available.

---

## Technical Specifications

### Model Architecture and Objective

* Decoder-only Transformer architecture
* Auto-regressive next-token prediction objective
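
For reference, the causal language modeling objective minimizes the negative log-likelihood of each token given its preceding context:

$$
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta\left(x_t \mid x_{<t}\right)
$$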

### Compute Infrastructure

#### Hardware

* Not publicly disclosed

#### Software

* Python
* PyTorch
* Hugging Face Transformers

---

## Citation

If you use this model, please cite the base LLaMA‑3.1 model and this repository.

---

## Model Card Authors

* Promit123546

## Model Card Contact

For questions or issues, please use the Hugging Face model page discussion section.