nielsr (HF Staff) committed
Commit 8183cd0 · verified · 1 Parent(s): 4acda93

Add links to GitHub repository, project page and dataset


This PR improves the model card by adding links to the GitHub repository, the project page, and the Hugging Face dataset, for easier access to the codebase and data.

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -1,8 +1,9 @@
 ---
 library_name: transformers
-pipeline_tag: image-text-to-text
 license: apache-2.0
+pipeline_tag: image-text-to-text
 ---
+
 # Model Card: Reflective LLaVA (ReflectiVA)
 
 Multimodal LLMs (MLLMs) are the natural extension of large language models to handle multimodal inputs, combining text and image data.
@@ -20,7 +21,7 @@ superior performance compared to existing methods.
 
 In this model space, you will find the Overall Model (stage two) weights of ```ReflectiVA```.
 
-For more information, visit our [ReflectiVA repository](https://github.com/aimagelab/ReflectiVA).
+For more information, visit our [ReflectiVA repository](https://github.com/aimagelab/ReflectiVA), our [project page](https://aimagelab.github.io/ReflectiVA/) and the [dataset](https://huggingface.co/datasets/aimagelab/ReflectiVA-Data).
 
 ## Citation
 If you make use of our work, please cite our repo:
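The change above moves `pipeline_tag` inside the YAML front matter of the model card, which is the metadata block the Hub reads to classify the model. As a minimal sketch of how such front matter is parsed (stdlib only; a real workflow would use `huggingface_hub.ModelCard` instead, and the `front_matter` helper here is a hypothetical name for illustration):

```python
# Minimal sketch: extract simple `key: value` YAML front-matter entries
# from a model card README. Assumes flat keys only, no nested YAML.

README = """---
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
---

# Model Card: Reflective LLaVA (ReflectiVA)
"""

def front_matter(text: str) -> dict:
    """Collect key: value lines between the two leading `---` fences."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no front-matter block at the top of the file
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # closing fence ends the metadata block
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

meta = front_matter(README)
print(meta["pipeline_tag"])  # image-text-to-text
```

With the patched README, the parser recovers all three keys the diff leaves in place, including the relocated `pipeline_tag`.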