chenjoya committed (verified)
Commit 9741304 · 1 Parent(s): 5a2fa08

Update README.md

Files changed (1): README.md +0 -3
README.md CHANGED

@@ -18,9 +18,6 @@ tags:
 We introduce LiveCC, the first multimodal LLM with real-time video commentary capability, and also strong at general image/video tasks.
 
 - Project Page: https://showlab.github.io/livecc
-- Paper: https://arxiv.org/abs/xxxx.xxxxx
-- Demo for this model: https://huggingface.co/spaces/chenjoya/livecc-7b-base
-- Training Code: https://www.github.com/showlab/videollm
 
 > [!Important]
 > This is the base model, pre-trained on [Live-CC-5M](https://huggingface.co/datasets/chenjoya/Live-CC-5M) dataset only with our proposed streaming frame-words paradigm. The instruction tuned model is [LiveCC-7B-Instruct](https://huggingface.co/chenjoya/LiveCC-7B-Instruct).
 
18
  We introduce LiveCC, the first multimodal LLM with real-time video commentary capability, and also strong at general image/video tasks.
19
 
20
  - Project Page: https://showlab.github.io/livecc
 
 
 
21
 
22
  > [!Important]
23
  > This is the base model, pre-trained on [Live-CC-5M](https://huggingface.co/datasets/chenjoya/Live-CC-5M) dataset only with our proposed streaming frame-words paradigm. The instruction tuned model is [LiveCC-7B-Instruct](https://huggingface.co/chenjoya/LiveCC-7B-Instruct).