Tags: Translation · Transformers · Safetensors · qwen3 · text-generation · text-generation-inference

Improve Model Card: Update pipeline_tag, add library_name, and correct language tag

#1 by nielsr (HF Staff) - opened

Files changed (1)
1. README.md (+7 -4)
README.md CHANGED
```diff
@@ -1,4 +1,6 @@
 ---
+base_model:
+- Qwen/Qwen3-4B-Base
 language:
 - en
 - zh
@@ -60,10 +62,9 @@ language:
 - ur
 - uz
 - yue
-base_model:
-- Qwen/Qwen3-4B-Base
 license: apache-2.0
-pipeline_tag: translation
+pipeline_tag: text-generation
+library_name: transformers
 ---
 
 ## LMT
@@ -95,7 +96,9 @@ model_name = "NiuTrans/LMT-60-8B"
 tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')
 model = AutoModelForCausalLM.from_pretrained(model_name)
 
-prompt = "Translate the following text from English into Chinese.\nEnglish: The concept came from China where plum blossoms were the flower of choice.\nChinese: "
+prompt = "Translate the following text from English into Chinese.
+English: The concept came from China where plum blossoms were the flower of choice.
+Chinese: "
 messages = [{"role": "user", "content": prompt}]
 text = tokenizer.apply_chat_template(
     messages,
```