lovemefan committed on
Commit 3ce5a26 · 1 Parent(s): c2e2a7e

update readme

Files changed (1)
  1. README.md +25 -50
README.md CHANGED
@@ -1,52 +1,27 @@
  ---
- frameworks:
- - other
- license: Apache License 2.0
- tags: []
- tasks:
- - auto-speech-recognition
-
- #model-type:
- ## e.g. gpt, phi, llama, chatglm, baichuan
- #- gpt
-
- #domain:
- ## e.g. nlp, cv, audio, multi-modal
- #- nlp
-
- ## list of language codes: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
- #language:
- #- cn
-
- #metrics:
- ## e.g. CIDEr, BLEU, ROUGE
- #- CIDEr
-
- #tags:
- ## custom tags, including training approaches such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
- #- pretrained
-
- #tools:
- ## e.g. vllm, fastchat, llamacpp, AdaSeq
- #- vllm
  ---
- ### The contributors of this model have not provided a more detailed introduction. Model files and weights are available on the "Model files" page.
- #### You can download the model with the git clone command below, or via the ModelScope SDK
-
- SDK download
- ```bash
- # Install ModelScope
- pip install modelscope
- ```
- ```python
- # SDK model download
- from modelscope import snapshot_download
- model_dir = snapshot_download('RapidAI/RapidSpeech')
- ```
- Git download
- ```
- # Git model download
- git clone https://www.modelscope.cn/RapidAI/RapidSpeech.git
- ```
-
- <p style="color: lightgrey;">If you are a contributor to this model, we invite you to complete the model card promptly according to the <a href="https://modelscope.cn/docs/ModelScope%E6%A8%A1%E5%9E%8B%E6%8E%A5%E5%85%A5%E6%B5%81%E7%A8%8B%E6%A6%82%E8%A7%88" style="color: lightgrey; text-decoration: underline;">model contribution documentation</a>.</p>
  ---
+ license: apache-2.0
  ---
+
+ # [RapidSpeech.cpp](https://github.com/RapidAI/RapidSpeech.cpp)
+
+ **RapidSpeech.cpp** is a high-performance, **edge-native speech intelligence framework** built on top of **ggml**.
+ It aims to provide **pure-C++**, **zero-dependency**, **on-device inference** for large-scale ASR (Automatic Speech Recognition) and TTS (Text-to-Speech) models.
+
+ ------
+
+ ## 🌟 Key Differentiators
+
+ While the open-source ecosystem already offers powerful cloud-side frameworks such as **vLLM-omni**, as well as mature on-device solutions like **sherpa-onnx**, **RapidSpeech.cpp** introduces a new generation of design choices focused on edge deployment.
+
+ ### 1. vs. vLLM: edge-first, not cloud-throughput-first
+
+ - **vLLM**
+   - Designed for data centers and cloud environments
+   - Tightly coupled with Python and CUDA
+   - Maximizes GPU throughput via techniques such as PagedAttention
+
+ - **RapidSpeech.cpp**
+   - Designed specifically for **edge and on-device inference**
+   - Optimized for **low latency, a small memory footprint, and lightweight deployment**
+   - Runs on embedded devices, mobile platforms, laptops, and even NPU-only systems
+   - **No Python runtime required**
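The PagedAttention technique mentioned in the comparison above can be sketched in a few lines. This is a hypothetical simplification of the paged KV-cache idea only — the `PagedKVCache` class, `BLOCK_SIZE` constant, and method names are illustrative, not vLLM's actual API: each sequence's KV cache is split into fixed-size blocks allocated on demand from a shared pool, rather than reserving one contiguous region per sequence.

```python
# Hypothetical sketch of the paged KV-cache idea behind vLLM's PagedAttention
# (illustrative only; not vLLM's actual implementation or API).

BLOCK_SIZE = 4  # tokens stored per physical cache block


class PagedKVCache:
    """Maps each sequence's logical token positions to fixed-size physical
    blocks drawn from a shared pool, so cache memory grows on demand instead
    of being reserved contiguously per sequence."""

    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))  # pool of physical block ids
        self.block_tables = {}  # seq_id -> list of physical block ids
        self.seq_lens = {}      # seq_id -> number of cached tokens

    def append_token(self, seq_id: int) -> None:
        n = self.seq_lens.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:  # current block is full (or sequence is new)
            self.block_tables.setdefault(seq_id, []).append(self.free_blocks.pop())
        self.seq_lens[seq_id] = n + 1

    def release(self, seq_id: int) -> None:
        # return the finished sequence's blocks to the shared pool
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.seq_lens.pop(seq_id, None)


cache = PagedKVCache(num_blocks=8)
for _ in range(6):
    cache.append_token(seq_id=0)
# 6 cached tokens at 4 tokens per block -> 2 physical blocks in use
```

This machinery pays off when many sequences share one GPU; an edge-first runtime typically serves one or a few sequences at a time, so it can trade such throughput-oriented bookkeeping for a smaller, simpler memory layout.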