This dataset was built with the image processor's max tokens set to 2700, i.e. `max_pixels = 2700 * 14 * 14 * 2 * 2`. The annotation coordinates were resized down to match, so you must also resize the images within `max_pixels = 2700 * 14 * 14 * 2 * 2` via the image processor to keep them aligned.

Make sure you follow the same setting in your training procedure; otherwise the performance will not be as expected.
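Since the annotation coordinates live in the resized image's pixel space, mapping a point back to the original screenshot is a simple linear rescale. A minimal sketch (the function name and the example sizes are illustrative, not taken from the dataset):

```python
def to_original(x, y, orig_w, orig_h, resized_w, resized_h):
    """Map a point from resized-image pixel space back to the original screenshot."""
    return x * orig_w / resized_w, y * orig_h / resized_h

# e.g. a 1920x1080 screenshot resized to 1932x1092 (both multiples of 28)
x0, y0 = to_original(966, 546, 1920, 1080, 1932, 1092)
print(x0, y0)  # 960.0 540.0
```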

## Note

If you'd like to check the annotated coordinates on the screenshots, please refer to `images_resized.zip`. In this zip file, all images have been preprocessed with the same preprocessing pipeline as [Qwen2.5-VL](https://arxiv.org/abs/2502.13923):
```python
from PIL import Image
import torchvision.transforms.functional as tvF
from transformers.models.qwen2_vl.image_processing_qwen2_vl_fast import (
    smart_resize as qwen_smart_resize,
)

image = Image.open("screenshot.png")

# Compute the 28-pixel-aligned target size used by the Qwen2.5-VL processor.
resized_height, resized_width = qwen_smart_resize(
    image.height,
    image.width,
    max_pixels=2116800,  # 2700 * 14 * 14 * 2 * 2
    min_pixels=12544,    # 16 * 14 * 14 * 2 * 2
)
resized_image = tvF.resize(
    tvF.pil_to_tensor(image),
    [resized_height, resized_width],
    interpolation=tvF.InterpolationMode.BILINEAR,
    antialias=True,
)
```

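As a quick visual check, you can overlay an annotated point on a resized screenshot and inspect it by eye. A minimal sketch (the blank canvas and the coordinate values are placeholders for a real resized screenshot and its annotation):

```python
from PIL import Image, ImageDraw

# Placeholder canvas; substitute your actual resized screenshot here.
img = Image.new("RGB", (1932, 1092), "white")
draw = ImageDraw.Draw(img)

x, y = 966, 546  # an annotated point, in resized-image pixel space
r = 6
draw.ellipse([x - r, y - r, x + r, y + r], outline="red", width=3)
img.save("check_point.png")
```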
## Citation

If you find our data, model, benchmark, or the general resources useful, please consider citing: