---
inference: false
---
# ShareCaptioner Model Card

## Model details

**Model type:**
ShareCaptioner is an open-source image captioner fine-tuned on GPT4-Vision-assisted [ShareGPT4V](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) detailed caption data at a resolution of 448x448. ShareCaptioner is built on the improved [InternLM-XComposer-7B](https://github.com/InternLM/InternLM-XComposer) base model.

**Model date:**
ShareCaptioner was trained in Nov 2023.

**Paper or resources for more information:**
[[Project](https://ShareGPT4V.github.io/)] [[Paper](https://huggingface.co/papers/2311.12793)] [[Code](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V)]

## License

Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Intended use

**Primary intended uses:**
The primary use of ShareCaptioner is to produce high-quality image captions (see the example usage below).

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
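
**Example usage:**
As a rough illustration of the intended captioning use, the sketch below loads the model with `transformers` remote code. It assumes the checkpoint is hosted as `Lin-Chen/ShareCaptioner` on Hugging Face and follows the InternLM-XComposer-7B-style `generate(text, image)` interface; the authoritative inference script lives in the code repository linked above, so treat the method names here as assumptions, not a definitive API.

```python
# Minimal captioning sketch (assumptions: repo id "Lin-Chen/ShareCaptioner",
# InternLM-XComposer-style remote-code interface, a CUDA GPU with fp16 support).
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "Lin-Chen/ShareCaptioner"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    trust_remote_code=True,
).cuda().eval()
model.tokenizer = tokenizer  # InternLM-XComposer models attach the tokenizer this way

# Hypothetical call mirroring the InternLM-XComposer-7B interface:
# a text prompt plus a path to a local image; verify against the repo's script.
caption = model.generate("Describe this image in detail.", "example.jpg")
print(caption)
```

For large-scale captioning, refer to the inference code in the repository linked under "Paper or resources" above.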

## Finetuning dataset

- 100K GPT4-Vision-generated image-text pairs