Update README.md

README.md CHANGED
@@ -18,10 +18,13 @@ tags:
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-Llama-3.2V-11B-cot is
+Llama-3.2V-11B-cot is a visual language model capable of spontaneous, systematic reasoning.
 
 The model was proposed in [LLaVA-CoT: Let Vision Language Models Reason Step-by-Step](https://huggingface.co/papers/2411.10440).
 
+Our model is built upon [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct).
+Llama 3.2 is licensed under the LLaMA 3.2 Community License, Copyright © Meta Platforms, Inc. The use of our model must comply with Meta’s Acceptable Use Policy.
+
 ## Model Details
 
 <!-- Provide a longer summary of what this model is. -->