ddyuudd committed · verified · Commit 0d76066 · Parent(s): 8ffbada

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
```diff
@@ -13,7 +13,7 @@ base_model:
 
 Tiny Language Model For Japanese and English Bidirectional Translation
 
-- **Purrs on your lap** 🐱: Small and efficient! 0.8-3.3B models that run on edge devices.
+- **Purrs on your lap** 🐱: Small and efficient! 0.8-7B models that run on edge devices.
 - **Swift and Feline Sharp** 🐾: Beats TranslateGemma-12B on text-to-text translation quality.
 - **Adopt and adapt** 🐈: Open source (MIT License) models you can customize and extend.
 
@@ -28,6 +28,7 @@ All models are available on Hugging Face:
 - [CAT-Translate-0.8B](https://huggingface.co/cyberagent/CAT-Translate-0.8b/)
 - [CAT-Translate-1.4B](https://huggingface.co/cyberagent/CAT-Translate-1.4b/)
 - [CAT-Translate-3.3B](https://huggingface.co/cyberagent/CAT-Translate-3.3b/)
+- [CAT-Translate-7B](https://huggingface.co/cyberagent/CAT-Translate-7b/)
 
 ## Evaluation
 
@@ -44,8 +45,7 @@ We conducted evaluation on the translation subsets of the following benchmarks:
 We chose these tasks as benchmarks because (1) they are derived from real world applications and (2) are less overoptimized compared to popular datasets (e.g., WMT).
 
 The results are below.
-Overall, our 1.4B model achieved the best overall scores.
-The 0.8B, 1.4B, and 3.3B-beta models achieved the best scores among all models (including closed source) within their respective sizes for both En-Ja and Ja-En translation tasks.
+All the models achieved the best scores among all models (including closed source) within their respective sizes for both En-Ja and Ja-En translation tasks.
 
 
 
```
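The hunks above are in standard unified-diff format (`@@ -old_start,old_count +new_start,new_count @@`, with ` `/`-`/`+` prefixes for context, deleted, and added lines). As a minimal illustration of that format, Python's standard-library `difflib` reproduces the same structure for the last hunk's changed sentences (the text here is taken from the diff above; the two-line snippets are trimmed for brevity):

```python
import difflib

# Old and new versions of the sentence changed in the final hunk above.
old = [
    "The results are below.\n",
    "Overall, our 1.4B model achieved the best overall scores.\n",
]
new = [
    "The results are below.\n",
    "All the models achieved the best scores among all models (including closed source) within their respective sizes for both En-Ja and Ja-En translation tasks.\n",
]

# unified_diff yields ---/+++ file headers, an @@ hunk header, then
# prefixed context (" "), deletion ("-"), and addition ("+") lines.
diff = list(difflib.unified_diff(old, new, fromfile="README.md", tofile="README.md"))
print("".join(diff))
```

Running this prints a hunk with the same shape as the commit's third hunk: one shared context line, a `-` line for the removed claim, and a `+` line for its replacement.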