---
license: gpl-3.0
language:
- en
---

# ChartInstruct: Instruction Tuning for Chart Comprehension and Reasoning

Venue: **ACL 2024 (Findings)**

Paper Link: https://arxiv.org/abs/2403.09028

The abstract of the paper states that:

# Web Demo
If you wish to quickly try our model, you can access our public web demo hosted on the Hugging Face Spaces platform, which offers a friendly interface!

[ChartInstruct-Llama2 Web Demo](https://huggingface.co/spaces/ahmed-masry/UniChart-Base)

# Inference
You can easily use our models for inference with the Hugging Face `transformers` library!
You just need to do the following:
1. Change the _image_path_ to your chart example image path on your system
2. Write the _input_text_.

```python
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

image_path = "/content/chart_example_1.png"
input_text = "What is the share of respondents who prefer Whatsapp in the 18-29 age group?"
input_prompt = f"<image>\n Question: {input_text} Answer: "

model = LlavaForConditionalGeneration.from_pretrained("ahmed-masry/ChartInstruct-LLama2", torch_dtype=torch.float16)
processor = AutoProcessor.from_pretrained("ahmed-masry/ChartInstruct-LLama2")
```
|