---
license: apache-2.0
---

<p align="center">
  <img src="logo.png" width="150" style="margin-bottom: 0.2;"/>
</p>

<h2 align="center"> <a href="http://arxiv.org/abs/2410.06234">TEOChat: Large Language and Vision Assistant for Temporal Earth Observation Data</a></h2>

<h5 align="center"> If you like our project, please give us a star ⭐ on <a href="https://github.com/ermongroup/TEOChat">GitHub</a> for the latest updates. </h5>

## 😮 Highlights

**TEOChat** is the first language and vision assistant that can engage in conversation about sequences of temporal Earth observation imagery, and it exhibits impressive performance on multiple temporal instruction-following tasks.

### 📚 TEOChatlas: A new instruction-following dataset for temporal EO data

We introduce a new instruction-following dataset for temporal EO data called **TEOChatlas**, which we use to train TEOChat. TEOChatlas contains 554,071 examples spanning dozens of temporal instruction-following tasks.
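
As a minimal sketch, the dataset can be loaded with the Hugging Face `datasets` library, assuming TEOChatlas is hosted on the Hub under `jirvin16/TEOChatlas` (the Space linked below is under that account; adjust the path to wherever the data actually lives):

```python
# Minimal sketch, assuming TEOChatlas is available on the Hugging Face Hub
# under "jirvin16/TEOChatlas"; adjust the path and split to your copy.
from datasets import load_dataset

teochatlas = load_dataset("jirvin16/TEOChatlas", split="train")
print(len(teochatlas))  # number of instruction-following examples in the split
print(teochatlas[0])    # one example: image sequence reference plus conversation
```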

### 🤖 TEOChat: A new vision-language model for temporal EO data

We design TEOChat to use a LLaVA-style architecture, combining a temporally shared vision encoder with a LLaMA 2 LLM connected through an MLP vision-language projector.
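
To make the wiring concrete, here is a minimal, illustrative PyTorch sketch of that design (module names, shapes, and dimensions are our assumptions, not the actual implementation): the vision encoder is applied to every timestep with shared weights, and an MLP projects the resulting visual tokens into the LLM embedding space.

```python
import torch
import torch.nn as nn

class TemporalProjectorSketch(nn.Module):
    """Illustrative LLaVA-style wiring: a per-timestep vision encoder with
    shared weights, followed by an MLP vision-language projector.
    Dimensions are made up for the sketch."""

    def __init__(self, vision_encoder: nn.Module, vision_dim=1024, llm_dim=4096):
        super().__init__()
        self.vision_encoder = vision_encoder  # one encoder, shared across timesteps
        self.projector = nn.Sequential(       # two-layer MLP projector
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (B, T, C, H, W) -- a batch of length-T image sequences
        B, T = images.shape[:2]
        feats = self.vision_encoder(images.flatten(0, 1))  # (B*T, N, vision_dim)
        tokens = self.projector(feats)                     # (B*T, N, llm_dim)
        # Per-timestep visual tokens, ready to interleave with text embeddings
        return tokens.view(B, T, *tokens.shape[1:])        # (B, T, N, llm_dim)
```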

## 🤗 Demo

### Gradio Web UI

We provide an [online demo](https://huggingface.co/spaces/jirvin16/TEOChat) on Hugging Face Spaces.

You can also run the demo locally with the following command:

```bash
python videollava/serve/teochat_demo.py
```
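
If you prefer to script against the hosted Space rather than use the browser UI, the `gradio_client` package may work. Since the Space's endpoint names and argument layout are not documented here, the sketch below only inspects what the Space exposes:

```python
# Minimal sketch: inspect the hosted demo Space programmatically.
# The Space id "jirvin16/TEOChat" comes from the demo link above; the
# endpoint names are not assumed, so we just print them.
from gradio_client import Client

client = Client("jirvin16/TEOChat")
client.view_api()  # prints the callable endpoints and their signatures
```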

## 🛠️ Requirements and Installation

* Python >= 3.9
* PyTorch == 2.2.1
* CUDA Version >= 12.1
* Install required packages:

```bash
git clone https://github.com/ermongroup/TEOChat.git
cd TEOChat
conda create -n teochat python=3.9 -y
conda activate teochat
pip install --upgrade pip  # enable PEP 660 support
pip install -r requirements.txt
```
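
After installation, a quick sanity check (generic, not TEOChat-specific) confirms that the pinned PyTorch build can see your GPU:

```python
# Environment sanity check for the versions pinned above.
import torch

print(torch.__version__)          # expect 2.2.1
print(torch.version.cuda)         # expect a 12.x CUDA build
print(torch.cuda.is_available())  # True if a compatible GPU is visible
```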

## 🗝️ Training & Validating

The training and validation instructions are in [TRAIN_AND_VALIDATE.md](https://github.com/ermongroup/TEOChat/blob/main/TRAIN_AND_VALIDATE.md).

## 👍 Acknowledgement

* [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA): the codebase and model we built upon.
* [GeoChat](https://github.com/mbzuai-oryx/geochat): the single-image instruction-following dataset we included in TEOChatlas.

## 🔒 License

* The majority of this project is released under the Apache 2.0 license, as found in the [LICENSE](https://github.com/ermongroup/TEOChat/blob/main/LICENSE) file.
* The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and the [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violations.

## ✏️ Citation

If you find our paper and code useful in your research, please consider giving us a star ⭐ and a citation ✏️.

```BibTeX
@article{irvin2024teochat,
  title={TEOChat: A Large Vision-Language Assistant for Temporal Earth Observation Data},
  author={Irvin, Jeremy Andrew and Liu, Emily Ruoyu and Chen, Joyce Chuyi and Dormoy, Ines and Kim, Jinyoung and Khanna, Samar and Zheng, Zhuo and Ermon, Stefano},
  journal={arXiv preprint arXiv:2410.06234},
  year={2024}
}
```