Update README.md

README.md CHANGED

```diff
@@ -4,29 +4,6 @@ license: other
 
 # Please Don't Use this version for Evaluation, this is the deprecated version.
 
-<p align="center" width="100%">
-<img src="https://i.postimg.cc/MKmyP9wH/new-banner.png" width="80%" height="80%">
-</p>
-
-
-<div>
-<div align="center">
-<a href='https://brianboli.com/' target='_blank'>Bo Li*<sup>1</sup></a>
-<a href='https://zhangyuanhan-ai.github.io/' target='_blank'>Yuanhan Zhang*<sup>,1</sup></a>
-<a href='https://cliangyu.com/' target='_blank'>Liangyu Chen*<sup>,1</sup></a>
-<a href='https://king159.github.io/' target='_blank'>Jinghao Wang*<sup>,1</sup></a>
-<a href='https://pufanyi.github.io/' target='_blank'>Fanyi Pu*<sup>,1</sup></a>
-</br>
-<a href='https://jingkang50.github.io/' target='_blank'>Jingkang Yang<sup>1</sup></a>
-<a href='https://chunyuan.li/' target='_blank'>Chunyuan Li<sup>2</sup></a>
-<a href='https://liuziwei7.github.io/' target='_blank'>Ziwei Liu<sup>1</sup></a>
-</div>
-<div>
-<div align="center">
-<sup>1</sup>S-Lab, Nanyang Technological University
-<sup>2</sup>Microsoft Research, Redmond
-</div>
-
 ## 🦦 Simple Code For Otter-9B
 
 Here is an example of multi-modal ICL (in-context learning) with 🦦 Otter. We provide two demo images with corresponding instructions and answers, then we ask the model to generate an answer given our instruction. You may change your instruction and see how the model responds.
```
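The README's actual snippet is not reproduced in this excerpt. As a minimal sketch of the two-demo setup it describes, the text side of the in-context prompt can be assembled as below. The special tokens (`<image>`, `<answer>`, `<|endofchunk|>`) and the `User: ... GPT: ...` layout are assumptions based on the OpenFlamingo-style prompt convention that Otter follows; the demo images themselves are passed to the model separately, one per `<image>` tag, in the same order.

```python
# Sketch: build an in-context (ICL) text prompt for an Otter-style model.
# Token names below are assumptions from the OpenFlamingo/Otter convention,
# not an exact reproduction of the repository's code.

def build_icl_prompt(demos, query_instruction):
    """demos: list of (instruction, answer) pairs, one per demo image.

    Returns the text prompt; each <image> tag is a placeholder for one
    image supplied to the vision encoder in the same order.
    """
    parts = []
    for instruction, answer in demos:
        # Each completed demo chunk ends with <|endofchunk|>.
        parts.append(f"<image>User: {instruction} GPT:<answer> {answer}<|endofchunk|>")
    # The final chunk carries the query; the model continues after <answer>.
    parts.append(f"<image>User: {query_instruction} GPT:<answer>")
    return "".join(parts)

# Two demo images with instructions and answers, then the query image.
demos = [
    ("What is in this image?", "Two cats sleeping on a couch."),
    ("What is in this image?", "A bathroom sink."),
]
prompt = build_icl_prompt(demos, "What is in this image?")
```

Changing `query_instruction` (or the demo instruction/answer pairs) is how you probe different model responses, as the paragraph above suggests.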