Add pipeline tag and library name, include paper abstract

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +10 -4
README.md CHANGED
@@ -1,9 +1,11 @@
 ---
-license: mit
-datasets:
-- tsunghanwu/reverse-instruct-1.3m
 base_model:
 - meta-llama/Llama-3.1-8B-Instruct
+datasets:
+- tsunghanwu/reverse-instruct-1.3m
+license: mit
+pipeline_tag: image-text-to-text
+library_name: transformers
 ---
 
 # REVERSE-LLaVA-MORE-8B
@@ -60,4 +62,8 @@ Please refer to the installation guide on GitHub to get started:
 - Research on grounded and trustworthy multimodal reasoning
 
 **Target Users:**
-Researchers, developers, and students working on VLMs, hallucination mitigation, and vision-language alignment.
+Researchers, developers, and students working on VLMs, hallucination mitigation, and vision-language alignment.
+
+## Paper abstract
+
+Generate, but Verify: Reducing Hallucination in Vision-Language Models with Retrospective Resampling