---
license: apache-2.0
---
<p align="center">
<img src="logo.png" width="150" style="margin-bottom: 0.2;"/>
</p>
<h2 align="center"> <a href="http://arxiv.org/abs/2410.06234">TEOChat: Large Language and Vision Assistant for Temporal Earth Observation Data</a></h2>
<h5 align="center"> If you like our project, please give us a star ⭐ on <a href="https://github.com/ermongroup/TEOChat">GitHub</a> for the latest updates. </h5>
## 😮 Highlights
**TEOChat** is the first language and vision assistant that can engage in conversation about sequences of temporal earth observation imagery, and exhibits impressive performance on multiple temporal instruction-following tasks.
### 📚 TEOChatlas: A new instruction-following dataset for temporal EO data
We introduce a new instruction-following dataset for temporal EO data called **TEOChatlas** which we use to train TEOChat. TEOChatlas contains 554,071 examples spanning dozens of temporal instruction-following tasks.
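If TEOChatlas is published on the Hugging Face Hub, a minimal sketch of browsing it with the `datasets` library would look like the following (the Hub identifier and split name below are assumptions; check the dataset card for the official location and layout):
```python
from datasets import load_dataset

# Assumed Hub identifier and split name; consult the TEOChatlas dataset card
# for the official location and file layout.
dataset = load_dataset("jirvin16/TEOChatlas", split="train")

# Each record pairs a temporal EO image sequence with an instruction and response.
print(dataset[0])
```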
### 🤖 TEOChat: A new vision-language model for temporal EO data
We design TEOChat to use a LLaVA-style architecture, combining a temporally shared vision encoder with a LLaMA 2 LLM, connected through an MLP vision-language projector.
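A minimal sketch of that wiring, with placeholder module names and dimensions (the actual implementation lives in this repo's `videollava` package and differs in detail):
```python
import torch
import torch.nn as nn

class LlavaStyleTemporalModel(nn.Module):
    """Illustrative only: shared vision encoder -> MLP projector -> LLM.
    Class and argument names are placeholders, not the repo's actual API."""

    def __init__(self, vision_encoder, llm, vision_dim=1024, llm_dim=4096):
        super().__init__()
        self.vision_encoder = vision_encoder  # one encoder, shared across all timesteps
        self.projector = nn.Sequential(       # MLP vision-language projector
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        self.llm = llm                        # e.g. a LLaMA 2 causal LM

    def encode_images(self, frames):
        # frames: (T, C, H, W) sequence of EO images. Applying the same encoder
        # to every frame is what "temporally shared" means here.
        patch_tokens = [self.vision_encoder(f.unsqueeze(0)) for f in frames]
        visual_tokens = torch.cat(patch_tokens, dim=1)  # concatenate over time
        return self.projector(visual_tokens)  # project into the LLM embedding space
```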
## 🤗 Demo
### Gradio Web UI
We provide an [online demo](https://huggingface.co/spaces/jirvin16/TEOChat) on Hugging Face Spaces.
You can also run the demo locally by running the following command:
```bash
python videollava/serve/teochat_demo.py
```
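The Space can also be queried programmatically with `gradio_client`. Its endpoint signature is not documented here, so a safe sketch is to connect and inspect the API first:
```python
from gradio_client import Client

# Connect to the hosted demo; the Space name matches the link above.
client = Client("jirvin16/TEOChat")

# List the Space's endpoints and expected parameters before calling .predict().
client.view_api()
```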
## 🛠️ Requirements and Installation
* Python >= 3.9
* PyTorch == 2.2.1
* CUDA Version >= 12.1
* Install required packages:
```bash
git clone https://github.com/ermongroup/TEOChat.git
cd TEOChat
conda create -n teochat python=3.9 -y
conda activate teochat
pip install --upgrade pip # enable PEP 660 support
pip install -r requirements.txt
```
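A quick sanity check that the pinned PyTorch build can see your CUDA runtime (a generic snippet, not a script from this repo):
```python
import torch

print(torch.__version__)          # expected: 2.2.1
print(torch.version.cuda)         # CUDA toolkit PyTorch was built against (>= 12.1)
print(torch.cuda.is_available())  # should be True on a working GPU setup
```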
## 🗝️ Training & Validating
The training and validation instructions are in [TRAIN_AND_VALIDATE.md](https://github.com/ermongroup/TEOChat/blob/main/TRAIN_AND_VALIDATE.md).
## 👍 Acknowledgement
* [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA): the codebase and model we built upon.
* [GeoChat](https://github.com/mbzuai-oryx/geochat): the single-image instruction-following dataset we included in TEOChatlas.
## 🔒 License
* The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/ermongroup/TEOChat/blob/main/LICENSE) file.
* The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
## ✏️ Citation
If you find our paper and code useful in your research, please consider giving us a star ⭐ and a citation ✏️.
```BibTeX
@article{irvin2024teochat,
  title={TEOChat: A Large Vision-Language Assistant for Temporal Earth Observation Data},
  author={Irvin, Jeremy Andrew and Liu, Emily Ruoyu and Chen, Joyce Chuyi and Dormoy, Ines and Kim, Jinyoung and Khanna, Samar and Zheng, Zhuo and Ermon, Stefano},
  journal={arXiv preprint arXiv:2410.06234},
  year={2024}
}
```