Swaroopa-jinka committed 28e8c4f (verified) · 1 parent: 7a04412

Add README to dataset repo

Files changed (1): README.md (+54 −0)
  - split: test
    path: data/test-*
---

We introduce **TexTAR**, a multi-task, context-aware Transformer for Textual Attribute Recognition (TAR), capable of handling both positional cues (bold, italic) and visual cues (underline, strikeout) in noisy, multilingual document images.

## MMTAD Dataset

**MMTAD** (Multilingual Multi-domain Textual Attribute Dataset) comprises **1,623** real-world document images, ranging from legislative records and notices to textbooks and notary documents, captured under diverse lighting, layout, and noise conditions. It delivers **1,117,716** word-level annotations for two attribute groups:

- **T1**: Bold, Italic, Bold & Italic
- **T2**: Underline, Strikeout, Underline & Strikeout
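
The two attribute groups above naturally form a two-head classification setup. As a hedged illustration (the class names and the exact pairing below are assumptions, not the dataset's official schema), per-word flags can be encoded into one class index per group:

```python
# Hypothetical label encoding for the two attribute groups (T1/T2);
# class names are illustrative, not the dataset's official ones.
T1_CLASSES = ["none", "bold", "italic", "bold_italic"]
T2_CLASSES = ["none", "underline", "strikeout", "underline_strikeout"]

def encode(bold: bool, italic: bool, underline: bool, strikeout: bool):
    """Map raw per-word flags to a (T1, T2) pair of class indices."""
    t1 = ("bold_italic" if bold and italic
          else "bold" if bold
          else "italic" if italic
          else "none")
    t2 = ("underline_strikeout" if underline and strikeout
          else "underline" if underline
          else "strikeout" if strikeout
          else "none")
    return T1_CLASSES.index(t1), T2_CLASSES.index(t2)

print(encode(True, True, False, False))  # → (3, 0): bold & italic, no T2 attribute
```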

**Language & Domain Coverage**
- English, Spanish, and six South Asian languages
- Distribution: 67.4% Hindi, 8.2% Telugu, 8.0% Marathi, 5.9% Punjabi, 5.4% Bengali, 5.2% Gujarati/Tamil/others
- 300–500 annotated words per image on average

To address class imbalance (e.g., fewer italic or strikeout samples), we apply **context-aware augmentations**:
- Shear transforms to generate additional italics
- Realistic, noisy underline and strikeout overlays

These augmentations preserve document context and mimic real-world distortions, ensuring a rich, balanced benchmark for textual attribute recognition.
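
A minimal sketch of the shear idea, assuming a plain NumPy row-shift implementation (not the pipeline's actual augmentation code):

```python
import numpy as np

def shear_glyph(img: np.ndarray, shear: float = 0.3) -> np.ndarray:
    """Horizontally shear a glyph image so upright strokes slant like italics.

    Each row y is shifted right by shear * (h - 1 - y) pixels, so the top
    of the glyph leans further right than the baseline row.
    """
    h, w = img.shape
    pad = int(np.ceil(shear * (h - 1)))        # room for the slanted top rows
    out = np.zeros((h, w + pad), dtype=img.dtype)
    for y in range(h):
        dx = int(round(shear * (h - 1 - y)))   # shift grows toward the top
        out[y, dx:dx + w] = img[y]
    return out

# A 4x2 upright bar becomes a right-leaning bar in a 4x4 canvas.
bar = np.ones((4, 2), dtype=np.uint8)
slanted = shear_glyph(bar, shear=0.5)
```

Applying the same transform to word crops in place, rather than isolated glyphs, is what keeps the surrounding document context intact.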

**More Information**

For detailed documentation and resources, visit our website: [TexTAR](https://tex-tar.github.io/)

**Downloading the Dataset**

```python
from datasets import load_dataset

ds = load_dataset("textar/MMTAD")
print(ds)
```

The dataset contains:
- `textar-testset`: the document images
- `testset_labels.json`: a JSON structure (array or dict) mapping each image filename to its annotated word-level attribute labels (bold, italic, underline, strikeout, etc.)

**Viewer Format**

To power the Hugging Face Data Studio viewer, we convert the original `testset_labels.json` into a line-delimited JSONL file (`hierarchical.jsonl`) of the form:
```json
{"image":"textar-testset/ncert-page_25.png",
 "annotation_json":"[{\"bb_dim\":[73,176,157,213],\"bb_ids\":[{\"id\":71120,\"ocrv\":\"huge\",\"attb\":{\"bold\":false,\"italic\":false,\"b_i\":false,\"no_bi\":true,…}}]},…]"}
```
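
Because `annotation_json` is stored as a JSON *string* inside each JSONL record, it needs a second decode after the outer record is parsed. A minimal sketch, with a sample record reconstructed from the excerpt above (the truncated fields are omitted):

```python
import json

# Build one JSONL line in the shape shown above; "annotation_json"
# is itself JSON-encoded, so the payload is encoded twice.
line = json.dumps({
    "image": "textar-testset/ncert-page_25.png",
    "annotation_json": json.dumps([{
        "bb_dim": [73, 176, 157, 213],
        "bb_ids": [{"id": 71120, "ocrv": "huge",
                    "attb": {"bold": False, "italic": False,
                             "b_i": False, "no_bi": True}}],
    }]),
})

record = json.loads(line)                      # first decode: outer record
boxes = json.loads(record["annotation_json"])  # second decode: annotation list
word = boxes[0]["bb_ids"][0]
print(record["image"], word["ocrv"], word["attb"]["no_bi"])
# → textar-testset/ncert-page_25.png huge True
```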

**Citation**

Please use the following BibTeX entry:
```bibtex
@inproceedings{Kumar2025TexTAR,
  title     = {TexTAR: Textual Attribute Recognition in Multi-domain and Multi-lingual Document Images},
  author    = {Rohan Kumar and Jyothi Swaroopa Jinka and Ravi Kiran Sarvadevabhatla},
  booktitle = {International Conference on Document Analysis and Recognition, {ICDAR}},