shreyajn committed
Commit 7348b90 · verified · 1 Parent(s): d27a5db

Upload README.md with huggingface_hub

Files changed (1)
1. README.md +2 -2
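
The commit message notes the file was uploaded with `huggingface_hub`. For reference, a minimal sketch of such an upload using the library's `upload_file` API; the `repo_id` below is a placeholder assumption, not taken from this page.

```python
# Sketch: push a local README.md to a Hugging Face model repo.
# Assumption: "your-org/plamo-1b-quantized" is a placeholder repo id.
from huggingface_hub import upload_file

upload_file(
    path_or_fileobj="README.md",            # local file to upload
    path_in_repo="README.md",               # destination path in the repo
    repo_id="your-org/plamo-1b-quantized",  # placeholder (assumption)
    repo_type="model",
    commit_message="Upload README.md with huggingface_hub",
)
```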
README.md CHANGED
@@ -17,7 +17,7 @@ tags:
 
 PLaMo-1B is the first small language model (SLM) in the PLaMo™ Lite series from Preferred Networks (PFN), designed to power AI applications for edge devices including mobile, automotive, and robots across various industrial sectors. This model builds on the advancements of PLaMo-100B, a 100-billion parameter large language model (LLM) developed from the ground up by PFN’s subsidiary Preferred Elements (PFE). Leveraging high-quality Japanese and English text data generated by PLaMo-100B, PLaMo-1B has been pre-trained on a total of 4 trillion tokens. As a result, it delivers exceptional performance in Japanese benchmarks, outperforming other SLMs with similar parameter sizes. In evaluations such as Jaster 0-shot and 4-shot, PLaMo-1B has demonstrated performance on par with larger LLMs, making it a highly efficient solution for edge-based AI tasks.
 
-More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/plamo_1b_quantized).
+Please contact us to purchase this model. More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/plamo_1b_quantized).
 
 ### Model Details
 
@@ -39,7 +39,7 @@ More details on model performance accross various devices, can be found [here](h
 
 ## Deploying PLaMo-1B on-device
 
-Please follow the [LLM on-device deployment]({genie_url}) tutorial.
+Please follow the [LLM on-device deployment](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llm_on_genie) tutorial.
 
 
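
Before following the Genie deployment tutorial linked in the diff, the model bundle has to be fetched locally. A minimal sketch using `huggingface_hub`'s `snapshot_download`; the `repo_id` is a placeholder assumption, and an access token may be needed given the updated README says to contact the authors to purchase the model.

```python
# Sketch: download the on-device model artifacts prior to running the
# LLM on-device deployment (Genie) tutorial linked in the diff above.
# Assumptions: the repo id is a placeholder, and HF_TOKEN must be set
# in the environment if the repository is gated.
import os

from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="your-org/plamo-1b-quantized",  # placeholder (assumption)
    local_dir="plamo_1b_quantized",         # where to place the bundle
    token=os.environ.get("HF_TOKEN"),       # access token for gated repos
)
print(f"Model bundle downloaded to: {local_path}")
```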