rogerxi committed on
Commit 37fd3cc · verified · 1 Parent(s): 015c8dd

Update README.md

Files changed (1): README.md (+50 −3)

README.md CHANGED
---
license: apache-2.0
pipeline_tag: image-text-to-text
---

<br>
<br>

# Spatial-LLaVA-7B Model Card

## 🤖 Model details

**Model type:**

This fine-tuned LLaVA model is trained from [liuhaotian/llava-pretrain-vicuna-7b-v1.3](https://huggingface.co/liuhaotian/llava-pretrain-vicuna-7b-v1.3) to improve the spatial-relation reasoning of large multimodal models.

LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model based on the transformer architecture.
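This card does not include usage code; below is a hypothetical inference sketch. It assumes the original LLaVA codebase (github.com/haotian-liu/LLaVA) is installed and follows that repo's v1 `USER:`/`ASSISTANT:` prompt convention; the model path `rogerxi/Spatial-LLaVA-7B` is likewise an assumption, not confirmed by this card.

```python
def build_prompt(question: str) -> str:
    """Vicuna-v1-style LLaVA prompt with the image placeholder token.

    The exact template is an assumption based on LLaVA's v1 conventions.
    """
    return f"USER: <image>\n{question} ASSISTANT:"


def load_spatial_llava(model_path: str = "rogerxi/Spatial-LLaVA-7B"):
    """Load the model via the LLaVA repo's loader (hypothetical model path)."""
    # Heavy, non-stdlib imports kept local so the prompt helper above
    # can be used without the llava package installed.
    from llava.model.builder import load_pretrained_model
    from llava.mm_utils import get_model_name_from_path

    tokenizer, model, image_processor, context_len = load_pretrained_model(
        model_path=model_path,
        model_base=None,
        model_name=get_model_name_from_path(model_path),
    )
    return tokenizer, model, image_processor


print(build_prompt("Is the chair to the left of the table?"))
```

The prompt helper is separated from the loader so the template can be inspected without downloading weights.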

## 🎯 Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## 📚 Training dataset
Instruction-following training data: [rogerxi/LLaVA-Spatial-Instruct-850K](https://huggingface.co/datasets/rogerxi/LLaVA-Spatial-Instruct-850K)

## 📊 Evaluation
Results on a collection of 10 benchmarks:
| Model | VQAv2 | GQA | VizWiz | SQA | TextVQA | POPE | MME | MM-Bench | MM-Bench-cn | MM-Vet |
|:----------------:|:--------:|:--------:|:--------:|:--------:|:--------:|:----------:|:--------:|:-----------:|:--------:|:--------:|
| LLaVA-1.5-7b | 78.5 | 62.0 | **50.0** | 66.8 | 58.2 | 85.9 | **1510.7** | 64.3 | 58.3 | 31.1 |
| Spatial-LLaVA-7b | **79.7** | **62.7** | 48.7 | **68.7** | **58.5** | **87.2** | 1472.7 | **67.8** | **60.7** | **31.6** |

[SpatialRGPT-Bench](https://huggingface.co/datasets/a8cheng/SpatialRGPT-Bench) (with placeholders replaced by object names):
### Qualitative Spatial Relations

| Model | Below/Above | Left/Right | Big/Small | Tall/Short | Wide/Thin | Behind/Front | Avg |
|:-----------------------:|:------------:|:-----------:|:----------:|:-----------:|:----------:|:-------------:|:-------------:|
| LLaVA-1.5-7b | 53.91 | 53.49 | 45.36 | 40.00 | **50.00** | 51.04 | 48.97 |
| Spatial-LLaVA-7b | **56.32** | **66.28** | **60.82** | **48.57** | 49.02 | **52.08** | **55.12** |
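For the baseline row, the Avg column is consistent with an unweighted mean of the six category scores; a quick check using the numbers from the table (assuming equal category weighting, which the card does not state explicitly):

```python
# Per-category qualitative scores for LLaVA-1.5-7b, copied from the table.
baseline = [53.91, 53.49, 45.36, 40.00, 50.00, 51.04]

avg = round(sum(baseline) / len(baseline), 2)
print(avg)  # 48.97, matching the reported Avg
```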

### Quantitative Spatial Relations

| Model | Direct Dist (m / ratio) | Horizontal Dist (m / ratio) | Vertical Dist (m / ratio) | Width (m / ratio) | Height (m / ratio) | Direction (° / ratio) |
|:-----------------------:|:------------------------:|:----------------------------:|:--------------------------:|:------------------:|:-------------------:|:----------------------:|
| LLaVA-1.5-7b | 12.90 / 1.06 | 10.68 / 2.03 | 20.79 / 0.94 | **24.19 / 0.50** | 14.29 / 5.27 | 10.23 / 58.33 |
| Spatial-LLaVA-7b | **24.19 / 0.57** | **14.56 / 0.62** | **41.58 / 0.42** | 22.58 / 1.12 | **18.25 / 2.92** | **20.45 / 56.47** |
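The card does not define the two numbers per quantitative cell. If they follow SpatialRGPT-Bench's usual convention, a success rate (predictions within ±25% of ground truth) paired with an absolute relative error, they can be sketched as follows; this interpretation and both helper names are assumptions:

```python
def success_rate(preds, gts, tol=0.25):
    """Percentage of predictions within ±tol (relative) of ground truth."""
    hits = sum(abs(p - g) / g <= tol for p, g in zip(preds, gts))
    return 100.0 * hits / len(preds)


def abs_relative_error(preds, gts):
    """Mean absolute relative error, |pred - gt| / gt, averaged over samples."""
    return sum(abs(p - g) / g for p, g in zip(preds, gts)) / len(preds)


# Toy example: two of three distance estimates fall within ±25%.
preds = [1.9, 3.1, 4.0]
gts = [2.0, 3.0, 2.0]
print(success_rate(preds, gts))       # ≈ 66.67
print(abs_relative_error(preds, gts))
```

Under this reading, a higher first number and a lower second number are both better, which matches the bolding pattern in the table.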