nielsr HF Staff committed on
Commit f6f0d1f · verified · 1 Parent(s): cf6a785

Improve model card metadata


This PR improves the model card by correcting the `pipeline_tag` to `audio-text-to-text` and adding `library_name: transformers`, so the model is categorized correctly on the Hugging Face Hub and surfaces in the relevant pipeline and library searches.
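The metadata this PR edits lives in the YAML front matter at the top of `README.md`. As a minimal sketch of how that block is read, the stdlib-only helper below (`front_matter` is a hypothetical name, not a Hub API; the Hub itself parses cards with the `huggingface_hub` library) extracts the flat `key: value` pairs that this PR changes:

```python
# Minimal sketch, assuming the front matter is a flat "key: value" YAML block
# delimited by "---" lines, as in the README.md edited by this PR.

README = """\
---
license: gpl-3.0
library_name: transformers
pipeline_tag: audio-text-to-text
tags:
- omni
---

# Stream-Omni: Simultaneous Multimodal Interactions with Large Language-Vision-Speech Model
"""

def front_matter(text: str) -> dict:
    """Extract flat key: value pairs from a leading YAML front-matter block."""
    if not text.startswith("---\n"):
        return {}
    block = text.split("---\n")[1]  # content between the two "---" delimiters
    meta = {}
    for line in block.splitlines():
        # Skip YAML list items (e.g. "- omni") and lines without a key.
        if ":" in line and not line.startswith("-"):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

meta = front_matter(README)
print(meta["pipeline_tag"])   # audio-text-to-text
print(meta["library_name"])   # transformers
```

For real model cards, `huggingface_hub.ModelCard` is the supported way to load and edit this metadata; the sketch above only illustrates what the two changed keys look like on disk.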

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
```diff
@@ -1,13 +1,15 @@
 ---
 license: gpl-3.0
-pipeline_tag: any-to-any
+library_name: transformers
+pipeline_tag: audio-text-to-text
 tags:
 - omni
 ---
+
 # Stream-Omni: Simultaneous Multimodal Interactions with Large Language-Vision-Speech Model
 
 [![arXiv](https://img.shields.io/badge/arXiv-2506.13642-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2506.13642)
-[![arXiv](https://img.shields.io/badge/GitHub-Stream--Omni-black.svg?logo=github)](https://github.com/ictnlp/Stream-Omni)
+[![GitHub](https://img.shields.io/badge/GitHub-Stream--Omni-black.svg?logo=github)](https://github.com/ictnlp/Stream-Omni)
 [![model](https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface%20-stream--omni--8b-orange.svg)](https://huggingface.co/ICTNLP/stream-omni-8b)
 [![data](https://img.shields.io/badge/%F0%9F%93%91%20Datasets%20-InstructOmni-green.svg)](https://huggingface.co/datasets/ICTNLP/InstructOmni)
 [![Badge](https://hitscounter.dev/api/hit?url=https%3A%2F%2Fgithub.com%2Fictnlp%2FStream-Omni&label=Visitors&icon=graph-up&color=%23dc3545)](https://github.com/ictnlp/Stream-Omni)
@@ -33,4 +35,4 @@ Stream-Omni is an end-to-end language-vision-speech chatbot that simultaneously
 
 > [!NOTE]
 >
-> **Stream-Omni can produce intermediate textual results (ASR transcription and text response) during speech interaction, offering users a seamless "see-while-hear" experience.**
+> **Stream-Omni can produce intermediate textual results (ASR transcription and text response) during speech interaction, offering users a seamless "see-while-hear" experience.**
```