---
title: README
emoji: ⚡
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---
<div align="center">
  <b><font size="6">OpenGVLab</font></b>
</div>
Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on vision-centric AI research. The "GV" in our name, OpenGVLab, stands for general vision: a general understanding of vision, so that little effort is needed to adapt to new vision-based tasks.
# Models
- [InternVL](https://github.com/OpenGVLab/InternVL): a pioneering open-source alternative to GPT-4V.
- [InternImage](https://github.com/OpenGVLab/InternImage): a large-scale vision foundation model with deformable convolutions.
- [InternVideo]():
- [InternVideo2]():
- [All Seeing]():
- [All Seeing V2]():
# Datasets
- [ShareGPT4o]():
- [InternVid]():