Update README.md
README.md CHANGED
@@ -19,7 +19,7 @@ Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on Vi
 - [InternImage](https://github.com/OpenGVLab/InternImage): a large-scale vision foundation model with deformable convolutions.
 - [InternVideo](https://github.com/OpenGVLab/InternVideo): video foundation models with generative and discriminative learning.
 - [InternVideo2](https://github.com/OpenGVLab/InternVideo): large-scale video foundation models for multimodal understanding.
-- [VideoChat](https://github.com/OpenGVLab/Ask-Anything): an end-to-end chat assistant
+- [VideoChat](https://github.com/OpenGVLab/Ask-Anything): an end-to-end chat assistant for video comprehension.
 - [All Seeing]():
 - [All Seeing V2]():
 -