curryandsun committed
Commit cbcf949 · verified · 1 parent: 478273f

Update README.md

Files changed (1)
  1. README.md +3 -5
README.md CHANGED

````diff
@@ -74,9 +74,9 @@ What is truly exciting is that in the comparison with Qwen3-32B, Ring-flash-line
 
 <div align="center">
 
-| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
-| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
-| Ring-flash-linear-2.0 | 100B | 6.1B | 128K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-flash-linear-2.0) <br>[🤖 Modelscope](https://modelscope.cn/models/inclusionAI/Ring-flash-linear-2.0)|
+| **Model** | **Context Length** | **Download** |
+| :----------------: | :----------------: | :----------: |
+| Ring-flash-linear-2.0 | 128K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-flash-linear-2.0) <br>[🤖 Modelscope](https://modelscope.cn/models/inclusionAI/Ring-flash-linear-2.0)|
 </div>
 
 ## Quickstart
@@ -86,8 +86,6 @@ What is truly exciting is that in the comparison with Qwen3-32B, Ring-flash-line
 ```bash
 pip install flash-linear-attention==0.3.2
 pip install transformers==4.56.1
-pip install sgl-kernel==0.3.9.post2
-pip install vllm==0.10.2
 ```
 
 ### 🤗 Hugging Face Transformers
````
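The second hunk's context ends at the `### 🤗 Hugging Face Transformers` heading; the quickstart snippet itself sits outside the diff context. For orientation only, here is a minimal sketch of the usual Transformers loading flow for a model like this, assuming the standard `AutoModelForCausalLM` chat-template API and `trust_remote_code=True` for the custom linear-attention architecture; none of it is quoted from the README.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch, not the README's own snippet.
model_name = "inclusionAI/Ring-flash-linear-2.0"

# Assumption: the hybrid linear-attention architecture ships its own
# modeling code on the Hub, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

# Build a chat prompt via the tokenizer's chat template and generate.
messages = [{"role": "user", "content": "Briefly introduce linear attention."}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```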