---
title: NLLB200
emoji: 🌍
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
license: apache-2.0
---

# NLLB-200 Translation API

High-quality translation API powered by Meta's **No Language Left Behind (NLLB-200)** model.

## Features

- ✅ **200 Languages Supported** - Direct translation between any language pair
- ✅ **44% Higher Quality** - Average BLEU improvement over previous state-of-the-art translation models on the FLORES benchmark
- ✅ **Up to +70% for Low-Resource Languages** - Largest gains for Arabic, Hindi, and other lower-resource languages
- ✅ **No Pivot Translation** - Direct translation without going through English
- ✅ **Cached Results** - Faster repeated translations
- ✅ **API Access** - Use via Gradio Client or HTTP

## Supported Languages (Sample)

Arabic, English, French, Spanish, German, Italian, Portuguese, Russian, Japanese, Korean, Chinese, Hindi, Turkish, Dutch, Polish, Swedish, Indonesian, Vietnamese, Thai, Ukrainian, Romanian, Greek, Hebrew, and more!
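
Internally, NLLB-200 identifies languages by script-qualified FLORES-200 codes (e.g. `eng_Latn`) rather than display names. The sketch below illustrates the kind of name-to-code mapping an app like this might use; the table is a small illustrative subset, not this Space's actual source code (the full 200-entry code list is in the Full Language List link at the bottom of this page):

```python
# Illustrative subset of the FLORES-200 language codes that NLLB-200 expects.
LANG_CODES = {
    "English": "eng_Latn",
    "Arabic": "arb_Arab",   # Modern Standard Arabic
    "French": "fra_Latn",
    "Spanish": "spa_Latn",
    "German": "deu_Latn",
    "Hindi": "hin_Deva",
    "Japanese": "jpn_Jpan",
    "Chinese": "zho_Hans",  # Simplified script
}

def to_flores_code(name: str) -> str:
    """Map a UI display name to the script-qualified code NLLB-200 uses."""
    try:
        return LANG_CODES[name]
    except KeyError:
        raise ValueError(f"Unsupported language: {name}") from None

print(to_flores_code("Arabic"))  # arb_Arab
```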

## Usage

### Web Interface

Visit the Space and use the interactive interface to translate text between any two supported languages.

### API

```python
from gradio_client import Client

client = Client("TGPro1/NLLB200")
result = client.predict(
    "Hello, world!",  # text
    "English",         # source language
    "Arabic",          # target language
    api_name="/predict"
)
print(result)  # ู…ุฑุญุจุง ุจุงู„ุนุงู„ู…!
```
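
For raw HTTP access, Gradio 4.x Spaces expose a two-step endpoint: a POST that returns an event ID, then a GET that streams the result. A sketch with `curl`, assuming the usual `owner-name.hf.space` hostname pattern for this Space and the default `/call/predict` route matching the `api_name` above:

```shell
# Step 1: submit the translation job; the JSON response contains an event_id.
curl -s -X POST https://tgpro1-nllb200.hf.space/call/predict \
  -H "Content-Type: application/json" \
  -d '{"data": ["Hello, world!", "English", "Arabic"]}'

# Step 2: stream the result, substituting the event_id from step 1.
curl -s -N https://tgpro1-nllb200.hf.space/call/predict/EVENT_ID
```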

## Model

- **Model**: `facebook/nllb-200-distilled-600M`
- **Parameters**: 600M
- **Size**: ~2.4GB
- **Languages**: 200

## Performance

- **Quality**: State-of-the-art for low-resource languages
- **Speed**: Fast inference with distilled model
- **Cache**: LRU cache for frequently translated phrases
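
The caching mentioned above can be as simple as `functools.lru_cache` wrapped around the inference call. A minimal sketch, where `translate` is a counting stub standing in for the real model call, showing that an identical repeat request never reaches the model:

```python
from functools import lru_cache

calls = {"model": 0}  # counts how often the "model" actually runs

def translate(text: str, src: str, tgt: str) -> str:
    # Stub standing in for the actual NLLB-200 inference call.
    calls["model"] += 1
    return f"<{tgt}> {text}"

@lru_cache(maxsize=1024)
def translate_cached(text: str, src: str, tgt: str) -> str:
    # Identical (text, src, tgt) requests are served from the cache.
    return translate(text, src, tgt)

translate_cached("Hello", "eng_Latn", "arb_Arab")  # model runs
translate_cached("Hello", "eng_Latn", "arb_Arab")  # cache hit
print(calls["model"])  # 1
```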

## Credits

- **Model**: Meta AI Research - [NLLB Project](https://ai.meta.com/research/no-language-left-behind/)
- **Benchmark**: FLORES-200
- **License**: Apache 2.0

## Links

- [Model Card](https://huggingface.co/facebook/nllb-200-distilled-600M)
- [Research Paper](https://arxiv.org/abs/2207.04672)
- [Full Language List](https://github.com/facebookresearch/flores/blob/main/flores200/README.md)