tags:
- art
size_categories:
- 1K<n<10K
---

# Dataset Card for docent

This dataset contains works of art with expert-written detailed descriptions from the U.S. National Gallery of Art, published as part of DOCENT. It was introduced in "PoSh: Using Scene Graphs To Guide LLMs-as-a-Judge For Detailed Image Descriptions". A full description of its collection methodology is available in the paper: https://arxiv.org/abs/2510.19060.

## Dataset Details

- **Language:** English
- **License:** CC-0

### Dataset Sources

- **Images:** public-domain images from the U.S. National Gallery of Art
- **Reference Descriptions:** expert-written references from the U.S. National Gallery of Art, published as part of their Open Data Initiative (https://github.com/NationalGalleryOfArt/opendata)
- **Repository:** https://github.com/amith-ananthram/posh
- **Paper:** https://arxiv.org/abs/2510.19060

## Uses

The intended use of this dataset is as a benchmark for evaluating detailed image description, particularly for artwork. It contains three splits: a training set of 1,000 images, a validation set of 250 images, and a test set of 500 images. When evaluating model generations, we recommend reporting PoSh scores (https://github.com/amith-ananthram/posh) or using a replicable metric that produces stronger correlations with the judgments in https://huggingface.co/datasets/amitha/docent-eval-coarse.
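The split layout above can be sketched in a few lines; this is a minimal, hypothetical Python sketch (the repo id `amitha/docent` is assumed from this card's location, and the actual `load_dataset` call is shown only in a comment because it downloads the full dataset):

```python
# Split sizes as documented above. The split names "train"/"validation"/"test"
# and the repo id "amitha/docent" are assumptions; loading the real data with
# the `datasets` library would typically look like:
#
#   from datasets import load_dataset
#   ds = load_dataset("amitha/docent")   # returns a DatasetDict of splits
#   test_set = ds["test"]                # rows reserved for final evaluation
#
SPLIT_SIZES = {"train": 1000, "validation": 250, "test": 500}

def total_images(sizes: dict) -> int:
    """Sum the number of images across all splits."""
    return sum(sizes.values())
```

With the sizes stated on this card, `total_images(SPLIT_SIZES)` gives 1,750 images overall.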

## Dataset Structure

Each row in the dataset corresponds to a work of art:
- uuid: a unique identifier for the work of art
- image: an image of the work of art (useful for multimodal metrics)
- reference: an expert-written reference description of the artwork from the U.S. National Gallery of Art
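The per-row schema above can be expressed as a small validation helper; a hypothetical sketch, where the field names come from this card but the example row itself is made up:

```python
# Fields documented on this card for each row of the dataset.
EXPECTED_FIELDS = {"uuid", "image", "reference"}

def validate_row(row: dict) -> bool:
    """Return True if the row carries all documented fields, else raise."""
    missing = EXPECTED_FIELDS - row.keys()
    if missing:
        raise ValueError(f"row missing fields: {sorted(missing)}")
    return True

# A made-up placeholder row illustrating the shape of the data.
example_row = {
    "uuid": "placeholder-identifier",   # unique id for the artwork
    "image": None,                      # a PIL.Image once loaded via `datasets`
    "reference": "An expert-written description of the artwork.",
}
```

Such a check is a quick guard when wiring the dataset into an evaluation pipeline.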

## Dataset Creation

### Curation Rationale

This dataset was collected to evaluate detailed image description, especially for artwork.

### Source Data

The images/artwork are all in the public domain and provided by the U.S. National Gallery of Art.

The expert-written references were published by the U.S. National Gallery of Art as part of their Open Data Initiative (https://github.com/NationalGalleryOfArt/opendata).

### Annotations

#### Annotation process

The expert-written reference descriptions were composed according to the U.S. National Gallery of Art's Accessibility Guidelines: https://www.nga.gov/visit/accessibility/collection-image-descriptions.

#### Who are the annotators?

An expert in art history from the U.S. National Gallery of Art.

## Bias, Risks, and Limitations

While this work aims to benefit accessibility applications for blind and low-vision users (its reference descriptions were written according to the U.S. National Gallery of Art's Accessibility Guidelines: https://www.nga.gov/visit/accessibility/collection-image-descriptions), we acknowledge that it assumes a one-size-fits-all approach to assistive text. Ideally, such a benchmark would include different styles of accessibility text more representative of diverse user needs. However, it is our hope that, because the reference descriptions are extremely detailed, models that perform well in this more challenging setting will be able to adapt to a wide range of description needs.

Additionally, as with other computer vision systems, this work could theoretically be applied in surveillance contexts, but our focus on detailed description does not introduce novel privacy risks beyond those inherent to existing image analysis technologies. The primary intended application, improving accessibility, aligns with beneficial societal outcomes.

## Citation

**BibTeX:**

    @misc{ananthram2025poshusingscenegraphs,
      title={PoSh: Using Scene Graphs To Guide LLMs-as-a-Judge For Detailed Image Descriptions},
      author={Amith Ananthram and Elias Stengel-Eskin and Lorena A. Bradford and Julia Demarest and Adam Purvis and Keith Krut and Robert Stein and Rina Elster Pantalony and Mohit Bansal and Kathleen McKeown},
      year={2025},
      eprint={2510.19060},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.19060},
    }

**APA:**

Ananthram, A., Stengel-Eskin, E., Bradford, L. A., Demarest, J., Purvis, A., Krut, K., Stein, R., Pantalony, R. E., Bansal, M., & McKeown, K. (2025). PoSh: Using Scene Graphs To Guide LLMs-as-a-Judge For Detailed Image Descriptions. arXiv preprint arXiv:2510.19060.

## Dataset Card Authors

Amith Ananthram

## Dataset Card Contact

amith@cs.columbia.edu