license: apache-2.0
This model was trained for 5 epochs on 3 tasks in a curriculum learning fashion. The first task was object classification for text-object-level alignment, followed by referring region description, and finally object instruction following. The LLM decoder backbone is llama-2-7b-hf and the vision encoder is a clip-vit-large-patch14-336 model.
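The backbone names encode the key architecture hyperparameters. As a minimal sketch (assuming the Hugging Face `transformers` library is installed), the configs below use the standard published values for these checkpoints; they do not load the trained OLIVE weights — see the GitHub repository for actual usage:

```python
from transformers import CLIPVisionConfig, LlamaConfig

# clip-vit-large-patch14-336: ViT-L encoder, 14x14 patches, 336x336 input
vision_cfg = CLIPVisionConfig(
    hidden_size=1024,
    num_hidden_layers=24,
    num_attention_heads=16,
    patch_size=14,
    image_size=336,
)

# llama-2-7b: 32-layer decoder with 4096-dim hidden states
llm_cfg = LlamaConfig(
    hidden_size=4096,
    num_hidden_layers=32,
    num_attention_heads=32,
)

# Number of patch tokens the vision encoder produces per image
num_patches = (vision_cfg.image_size // vision_cfg.patch_size) ** 2
print(num_patches)  # 576 patch tokens (plus one CLS token)
```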
For more details on training and usage, see the GitHub repository at https://github.com/tossowski/Olive.