williampepple1 committed on
Commit 6838c94 · verified · 1 Parent(s): aa08292

Update README.md

Files changed (1): README.md +66 -67
README.md CHANGED
@@ -1,67 +1,66 @@
- ---
- language:
- - en
- - iba
- license: apache-2.0
- tags:
- - translation
- - ibani
- - english-to-ibani
- - low-resource
- library_name: transformers
- pipeline_tag: translation
- ---
-
- # English to Ibani Translation Model
-
- This model translates English text to Ibani language. It's fine-tuned from Helsinki-NLP/opus-mt-en-mul.
-
- ## Usage
-
- ```python
- from transformers import MarianMTModel, MarianTokenizer
-
- model = MarianMTModel.from_pretrained("your-username/ibani-translator")
- tokenizer = MarianTokenizer.from_pretrained("your-username/ibani-translator")
-
- text = "I eat fish"
- inputs = tokenizer(text, return_tensors="pt")
- outputs = model.generate(**inputs)
- translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
- print(translation)
- ```
-
- ## Training Data
-
- Trained on English-Ibani parallel sentences from the ibani_eng dataset.
-
- ## Performance
-
- [Add your evaluation metrics here]
-
- ## Model Details
-
- - **Base Model**: Helsinki-NLP/opus-mt-en-mul
- - **Language Pair**: English → Ibani
- - **Task**: Machine Translation
- - **Framework**: Hugging Face Transformers
-
- ## Limitations
-
- - This is a low-resource language model
- - Performance may vary with complex sentence structures
- - Best results with simple to moderate complexity sentences
-
- ## Citation
-
- If you use this model, please cite:
-
- ```bibtex
- @misc{ibani-translator,
- author = {Your Name},
- title = {English to Ibani Translation Model},
- year = {2025},
- publisher = {Hugging Face},
- howpublished = {\url{https://huggingface.co/your-username/ibani-translator}}
- }
- ```
 
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - translation
+ - ibani
+ - english-to-ibani
+ - low-resource
+ library_name: transformers
+ pipeline_tag: translation
+ ---
+
+ # English to Ibani Translation Model
+
+ This model translates English text into the Ibani language. It is fine-tuned from Helsinki-NLP/opus-mt-en-mul.
+
+ ## Usage
+
+ ```python
+ from transformers import MarianMTModel, MarianTokenizer
+
+ model = MarianMTModel.from_pretrained("your-username/ibani-translator")
+ tokenizer = MarianTokenizer.from_pretrained("your-username/ibani-translator")
+
+ text = "I eat fish"
+ inputs = tokenizer(text, return_tensors="pt")
+ outputs = model.generate(**inputs)
+ translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(translation)
+ ```
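For more than one sentence at a time, batching and beam search are worth showing alongside the snippet above. A minimal sketch — `translate_batch` is a hypothetical helper (not part of this card), and the repo id is the card's own placeholder:

```python
def translate_batch(texts, model, tokenizer, num_beams=4, max_length=128):
    """Translate a list of English sentences in a single generate() call."""
    # Pad the batch to a common length so it stacks into one tensor.
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    # Beam search often improves short MT outputs over the greedy default.
    outputs = model.generate(**inputs, num_beams=num_beams, max_length=max_length)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

if __name__ == "__main__":
    from transformers import MarianMTModel, MarianTokenizer
    repo = "your-username/ibani-translator"  # placeholder id from this card
    model = MarianMTModel.from_pretrained(repo)
    tokenizer = MarianTokenizer.from_pretrained(repo)
    print(translate_batch(["I eat fish", "We are going home"], model, tokenizer))
```

The helper only relies on the standard tokenizer/model call surface, so it works with any Marian checkpoint.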
+
+ ## Training Data
+
+ Trained on English-Ibani parallel sentences from the ibani_eng dataset.
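The card does not state the on-disk format of the ibani_eng pairs. Assuming one tab-separated English/Ibani pair per line (an assumption, not the dataset's documented layout), a small cleaning step like this sketch can drop blank or one-sided rows before tokenization:

```python
import csv
import io

def load_parallel(tsv_text):
    """Parse tab-separated English/Ibani pairs, skipping blank or partial rows."""
    pairs = []
    for row in csv.reader(io.StringIO(tsv_text), delimiter="\t"):
        # Keep only rows where both sides are present and non-empty.
        if len(row) == 2 and row[0].strip() and row[1].strip():
            pairs.append({"en": row[0].strip(), "ibani": row[1].strip()})
    return pairs
```

Filtering one-sided rows matters for low-resource pairs, where a handful of empty targets can noticeably skew fine-tuning.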
+
+ ## Performance
+
+ [Add your evaluation metrics here]
+
+ ## Model Details
+
+ - **Base Model**: Helsinki-NLP/opus-mt-en-mul
+ - **Language Pair**: English → Ibani
+ - **Task**: Machine Translation
+ - **Framework**: Hugging Face Transformers
+
+ ## Limitations
+
+ - This is a low-resource language model
+ - Performance may vary with complex sentence structures
+ - Best results on sentences of simple to moderate complexity
+
+ ## Citation
+
+ If you use this model, please cite:
+
+ ```bibtex
+ @misc{ibani-translator,
+   author = {Your Name},
+   title = {English to Ibani Translation Model},
+   year = {2025},
+   publisher = {Hugging Face},
+   howpublished = {\url{https://huggingface.co/your-username/ibani-translator}}
+ }
+ ```