diff --git "a/20241127/2408.07401v2.json" "b/20241127/2408.07401v2.json" new file mode 100644--- /dev/null +++ "b/20241127/2408.07401v2.json" @@ -0,0 +1,305 @@ +{ + "title": "DataVisT5: A Pre-trained Language Model for Jointly Understanding Text and Data Visualization", + "abstract": "Data visualization (DV) is a fundamental tool for efficiently conveying the insights behind big data, and it has been widely adopted in the modern data-driven world. Task automation in DV, such as converting natural language queries to visualizations (i.e., text-to-vis), generating explanations from visualizations (i.e., vis-to-text), answering DV-related questions in free form (i.e., FeVisQA), and explicating tabular data (i.e., table-to-text), is vital for advancing the field.\nDespite their potential, the application of pre-trained language models (PLMs) like T5 and BERT in DV has been limited by high costs and challenges in handling cross-modal information, leading to few studies on PLMs for DV. We introduce DataVisT5, a novel PLM tailored for DV that enhances the T5 architecture through a hybrid objective pre-training and multi-task fine-tuning strategy, integrating text and DV datasets to effectively interpret cross-modal semantics. Extensive evaluations on public datasets show that DataVisT5 consistently outperforms current state-of-the-art models and higher-parameter Large Language Models (LLMs) on various DV-related tasks.
We anticipate that DataVisT5 will not only inspire further research on vertical PLMs but also expand the range of applications for PLMs.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Data visualizations (DVs) use graphical representations to summarize massive raw data and convey the insights behind it, a common practice in the big data era [1 ###reference_b1###, 2 ###reference_b2###].\nPopular data analysis and database applications, such as Google Sheets111https://www.google.com/sheets/about/ and Microsoft Power BI222https://powerbi.microsoft.com/, all support DV features.\nMany institutions recognize the value of DV and have adopted it as a fundamental daily tool. Thus, the ability to create suitable DVs has become a necessary skill for data analysts, engineers, and data scientists [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###].\nHowever, creating appropriate DVs remains challenging, even for experts, since it requires visual analysis expertise and familiarity with the domain data. Furthermore, users must master the complex grammar of Declarative Visualization Languages (DVLs), such as Vega-Lite [6 ###reference_b6###], ggplot2 [7 ###reference_b7###], and Vega-Zero [8 ###reference_b8###], to accurately define DV specifications in the visualization engine.\n###figure_1### To lower the barriers to creating DVs and further unlock the power of DV for the general public, researchers have proposed a variety of DV-related tasks that have attracted significant attention from both industry and academia.
Numerous studies on these topics have been presented in leading conferences and journals such as VLDB [9 ###reference_b9###, 10 ###reference_b10###, 2 ###reference_b2###], ICDE [11 ###reference_b11###, 12 ###reference_b12###], SIGMOD [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###], and TKDE [16 ###reference_b16###, 17 ###reference_b17###]. These tasks include text-to-vis (i.e., automatically generating DVs from natural language questions) [8 ###reference_b8###, 15 ###reference_b15###], vis-to-text [18 ###reference_b18###] (i.e., automatically generating interpretations of complex DVs for educational purposes), FeVisQA [12 ###reference_b12###] (i.e., free-form question answering over data visualization), and table-to-text (i.e., describing a given table) [19 ###reference_b19###].\nA vivid example is given in Figure 1 ###reference_###, which shows four important tasks central to the domain knowledge of DV: text-to-vis, vis-to-text, FeVisQA\nand table-to-text. The figure presents a natural language (NL) question, \u201cGive me a pie chart about the proportion of the number of countries in the artist table.\u201d This example demonstrates the text-to-vis task\u2019s capability to interpret the NL question and transform it into a Vega-Lite specification, resulting in a pie chart. The DV query, introduced by [15 ###reference_b15###], serves as a bridge in the text-to-vis process, encapsulating visualization details and data operations with a grammar akin to SQL. Translations between DV queries and DVLs are seamless, with text-to-vis tasks primarily focusing on converting NL questions into DV queries.\nConversely, the vis-to-text task aims to generate accessible and user-friendly explanations of complex visualizations for individuals without expertise in the field. The FeVisQA task addresses user inquiries regarding DV by providing detailed answers to common questions. 
We present four typical DV-related questions, including understanding the semantics of a DV query, resolving numerical issues within a chart, and evaluating the compatibility of a DV query with a given database. Lastly, the table-to-text task generates informative NL descriptions of tabular data, which are essential for visual analytics, thereby reducing the perceptual effort needed for data interpretation.\nMeanwhile, PLMs such as BERT [20 ###reference_b20###] and T5 [21 ###reference_b21###] have received considerable attention in the realms of natural language processing (NLP) and data mining, becoming widely recognized for their efficacy.\nThese PLMs greatly promote the development of effective text-driven applications, since they show dominant performance in understanding natural language semantics.\nThe operational paradigm for these PLMs typically unfolds in two stages: initially, they undergo unsupervised pre-training on expansive, open-domain datasets (such as Wikipedia) to acquire foundational capabilities in language representation and comprehension; subsequently, they are fine-tuned on specialized corpora pertinent to targeted downstream tasks, thereby enhancing task-specific performance.\nDespite their success [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###], there are still significant challenges in the DV field: (i) Limited studies have been conducted to explore the effectiveness of PLMs in capturing DV semantics. (ii) Since there is a substantial modal gap between the DV modality and the text modality, satisfactory performance cannot be achieved by directly applying existing PLMs (e.g., T5) to the DV-related tasks mentioned above.
(iii) In the DV area, a viable PLM must be able to handle cross-modal information (i.e., text and DV) while also managing multiple distinct tasks.\nTo alleviate the above-mentioned problems, we propose a novel PLM for jointly understanding text and DV, referred to as DataVisT5 in this paper. Based on the text-centric T5 architecture, we enhance the pre-training process by incorporating a comprehensive array of cross-modal datasets that integrate natural language with DV knowledge, encompassing DV queries, database schemas, and tables.\nSince DV queries resemble programming-language code, we employ CodeT5+ [25 ###reference_b25###] as the starting checkpoint in our work. This choice leverages the robust code semantic understanding and generation capabilities of CodeT5+, providing DataVisT5 with a substantial advantage in generating and comprehending the unique programming language of our DV tasks.\nBuilding on this foundation, we apply table-level database schema filtration to reduce training complexity. Addressing the challenges of format consistency between DV and textual modalities, we introduce a unified encoding format for DV knowledge that facilitates the convergence of text and DV modalities. To eliminate stylistic discrepancies in manually curated datasets, we adopt standardized encoding.\nAdditionally, the pre-training objectives for DataVisT5 are twofold: (i) the span corruption approach of Masked Language Modeling as utilized by the original T5 model, and (ii) a Bidirectional Dual-Corpus objective that operates on source-target pairings.\nAfter the mixed-objective pre-training, we conduct multi-task fine-tuning (MFT) of our DataVisT5 on DV-related tasks including text-to-vis, vis-to-text, FeVisQA, and table-to-text.\nTo substantiate the rationale behind our proposed model, we performed comprehensive experimental evaluations on various public datasets.
The results consistently demonstrate that DataVisT5 surpasses the state-of-the-art (SOTA) models and higher-parameter LLMs.\nIn summary, our main contributions are as follows:\nWe introduce and release DataVisT5: the first Pre-trained Language Model (PLM) tailored for the joint understanding of text and DV. This innovation opens avenues for future research on task-specific PLMs and enriches the landscape of PLM designs.\nWe enhance the text-centric T5 architecture to handle cross-modal information. Our novel hybrid pre-training objectives are conceived to unravel the complex interplay between DV and textual data, fostering a deeper integration of cross-modal insights.\nExtensive experiments on public datasets for diverse DV tasks including text-to-vis, vis-to-text, FeVisQA, and table-to-text demonstrate that DataVisT5 excels in multi-task settings, consistently outperforming strong baselines and establishing new SOTA performances." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Preliminary", + "text": "This section provides the foundational concepts and definitions pivotal to DV-related tasks, with the objective of cultivating a more profound understanding.\nNatural Language Question. An NL question enables users, even those with a minimal background in DV and programming skills, to formulate queries intuitively.\nFigure 1 ###reference_### demonstrates such an instance, with the user\u2019s request articulated as, \u201cGive me a pie chart about the proportion of the number of countries in the artist table\u201d.\nDeclarative Visualization Language. Transforming data into a graphical representation typically involves the use of a declarative visualization language (DVL). This kind of language provides a set of specifications that determine the construction of visualizations. 
These specifications include various elements such as chart type, colors, sizes, and mapping functions, as well as properties for visual marks like canvas dimensions and legends. Several DVLs are prevalent in the field, such as Vega-Lite [6 ###reference_b6###], ggplot2 [7 ###reference_b7###], ZQL [10 ###reference_b10###], ECharts [26 ###reference_b26###], Vega-Zero [8 ###reference_b8###], and VizQL [13 ###reference_b13###], each offering unique features to facilitate the visualization process.\nVisualization Specification. A visualization specification comprises a JSON-format object that delineates the dataset and its visual attributes (such as chart types and data transformation functions) in accordance with the syntax of a specific DVL. It is noteworthy that each DVL possesses a unique grammar, necessitating distinct visualization specifications for rendering the same DV chart.\nData Visualization Query.\nIntroduced by [14 ###reference_b14###, 11 ###reference_b11###], a framework for querying a database for visual data representations seeks to encapsulate the full spectrum of potential DVLs.\nAs depicted in Figure 1 ###reference_###, a DV query specifies a \u201cpie\u201d chart and integrates SQL-like operations (e.g., Count and Order By). This versatile DV query format can be converted into visualization specifications for different DVLs, enabling visualization engines to render the specified chart.\nData Visualization Chart.\nDV charts are visual representations, such as scatter plots, bar charts, or maps, used to convey the data summary and insights defined by the visualization specification. In Figure 1 ###reference_###, the final visualization result is the pie chart that corresponds to the NL question." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Our Proposed Model: DataVisT5", + "text": "We present our proposed DataVisT5 model, with the pipeline overview in Section III-A ###reference_###.
This is followed by details on database schema filtration in Section III-B ###reference_###, DV knowledge encoding in Section III-C ###reference_###, and standardized encoding in Section III-D ###reference_###. We discuss our hybrid pre-training objectives in Section III-E ###reference_### and conclude with our multi-task fine-tuning strategy in Section III-F ###reference_###.\n###figure_2###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Pipeline Overview", + "text": "Figure 2 ###reference_### provides an overview of the complete pipeline, comprising five main stages: (1) Database Schema Filtration, (2) DV Knowledge Encoding, (3) Standardized Encoding, (4) Model Pre-training, and (5) Model Fine-tuning.\nThe database schema filtration process involves comparing n-grams extracted from the given database schema with those present in the text under consideration, enabling us to identify referenced tables in the question and acquire a sub-database schema that aligns semantically. During the DV Knowledge Encoding phase, we linearize DV knowledge encompassing DV queries, database schemas, and tables. Subsequently, in the Standardized Encoding phase, we normalize the DV knowledge to facilitate more efficient learning. The resulting corpus, in its unified form, is then employed to train our proposed DataVisT5 model." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Database Schema Filtration", + "text": "Before the integration of DV and text modalities, it is critical to recognize that NL questions can incorporate keywords related to the database schema. This requires the explicit identification of references to columns, tables, and conditional values within the NL questions.
To address this challenge, we employ n-gram matching due to its simplicity of implementation and notable effectiveness across a variety of applications.\nIn an effort to minimize information loss, our primary focus is at the table level, where we compare n-grams extracted from the NL questions to those present in the database tables. Following the initial comparison, we refine the obtained sub-schema by considering the implicated tables and their respective columns." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C DV Knowledge Encoding", + "text": "To address the disparity between text and DV modalities, we propose investigating unified formats for DV knowledge. The connection between natural language and DV knowledge poses challenges due to limited data accessibility. Nevertheless, a unified format allows models to capitalize on extensive pretraining for smaller datasets. Employing consistent formatting, as recommended by [27 ###reference_b27###], offers advantages in multi-task training and mitigates performance decline caused by data heterogeneity compared to single-task training. The subsequent sections provide a comprehensive introduction to the unified representation of three distinct types of DV knowledge: DV queries, database schemas, and tables.\n###figure_3### Encoding DV query.\nWhile most existing NLP models, such as [20 ###reference_b20###], consider NL inputs as flat text sequences, we adopt a similar approach for modeling a DV query by treating it as a plain text sequence in a straightforward manner.\nEncoding Database schema.\nThe database schema comprises tables and columns. For each table in the schema, the table name is followed by a list of its columns formatted as \u201c , \u2026 \u201d. Different tables are joined using the symbol \u201c\u201d.
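The table-level n-gram filtration described here can be sketched as follows; `filter_schema`, its dictionary-based schema representation, and the fallback behavior are our own illustrative choices, not the paper's released implementation.

```python
import re

def ngrams(text, n):
    """Word-level n-grams of lowercased text."""
    tokens = re.findall(r"\w+", text.lower())
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def filter_schema(question, schema, max_n=3):
    """Keep only tables whose name or column names overlap an n-gram of
    the NL question; fall back to the full schema when nothing matches.
    `schema` maps table name -> list of column names (an assumption)."""
    question_grams = set()
    for n in range(1, max_n + 1):
        question_grams |= ngrams(question, n)
    kept = {}
    for table, columns in schema.items():
        name_grams = set()
        for name in [table] + columns:
            name_grams |= ngrams(name, 1)
        if name_grams & question_grams:
            kept[table] = columns
    return kept or schema

schema = {"artist": ["artist_id", "name", "country", "year_join", "age"],
          "exhibit": ["exhibit_id", "theme"]}
question = ("Give me a pie chart about the proportion of the number "
            "of countries in the artist table.")
sub_schema = filter_schema(question, schema)  # keeps only the artist table
```

Refining the sub-schema to the implicated tables and their columns, as the text describes, then amounts to reading `sub_schema` directly.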
Additionally, the database name is prefixed to the generated sequence with boundaries indicated by \u201c\u201d.\nEncoding Table.\nFollowing [28 ###reference_b28###], we employ a sequential representation of tables, akin to the schema encoding technique, which uses distinctive tokens to delineate table structure. The table is linearly represented as \u201c \u201d, with placeholders indicating the total column count and the row count.\nExample.\nAn example is presented in Figure 3 ###reference_###, where (1) the DV query is sequentially encoded into text sequences based on the data manipulation operations: Visualize, Select, Count, and Grouping, (2) the filtered database sub-schema, including the database name (theme_gallery), table name (artist), and columns, is encoded into a corresponding text sequence, and (3) the table content is linearly encoded in the format \u201ccol: Country COUNT(Country)\u201d, along with the remaining three rows of the table." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D Standardized Encoding", + "text": "Due to the manual generation of queries by multiple annotators with diverse annotation habits, subtle stylistic differences are prevalent in the final annotated DV queries within NVBench, including variations in the capitalization of keywords. Similar to issues encountered with SQL queries, these stylistic inconsistencies, while not affecting the model\u2019s execution results, pose an additional learning challenge that must be addressed. To address the stylistic variations in DV queries, a preprocessing strategy was implemented before training.
This strategy includes: (1) affixing the primary table name T to the selected columns col, resulting in the notation T.col across DV queries; particularly, for instances where the wildcard symbol * is employed in a COUNT function, COUNT(*) is replaced with COUNT(T.col) to maintain uniformity; (2) the insertion of spaces surrounding parentheses and the replacement of double quotes with single quotes; (3) the inclusion of the ASC keyword subsequent to the ORDER BY clause when ordering is not explicitly specified; (4) the elimination of the AS clause and the substitution of table aliases (e.g., T1, T2) with their actual table names; (5) conversion of the entire query to lowercase.\n###figure_4### Example.\nIn a DV query with a join operation, as depicted in Figure 4 ###reference_###, standardization involves replacing the table aliases with their actual table names, quoting \u2019Columbus Crew\u2019 with single quotes, appending the ASC keyword when the sort order is absent, and casting the entire query to lowercase.\nIn alignment with the standardization of DV queries, similar encoding steps are applied to database schemas and tables to ensure consistency. This includes affixing the table name to each column name and converting them to lowercase.\n###figure_5### Example.\nAs depicted in Figure 3 ###reference_###, within a specific database schema, column names such as \u201cage, name, country, year_join, and artist_id\u201d are transformed to \u201cartist.age, artist.name, artist.country, artist.year_join, and artist.artist_id\u201d, respectively. Similarly, within the table context, an entry like \u201ccol : Country COUNT(Country)\u201d is reformulated to \u201ccol : artist.country count(artist.country)\u201d." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "III-E Hybrid Pre-training Objectives", + "text": "Bidirectional Dual-Corpus Objectives.
To address divergence between the pretraining and fine-tuning phases, we introduce Bidirectional Dual-Corpus (BDC) objectives during pretraining. In this approach, both the source and target corpora are randomly selected with equal probability (0.5) during model training to serve as the input. The remaining corpus is then used as the output for translation purposes.\nAccordingly, for a target sequence of $n$ tokens, we define the BDC loss function $\mathcal{L}_{BDC}$ as follows:\n$\mathcal{L}_{BDC}(\theta) = -\sum_{i=1}^{n} \log P(y_i \mid x, y_{<i}; \theta)$,\nwhere $x$ signifies the source input, $y_{<i}$ represents the sequence of tokens generated by the decoder up to but not including the $i$-th token, and $y_i$ is the token that the decoder is tasked with predicting. The term $\theta$ denotes the model parameters.\nAs depicted in Figure 5 ###reference_###, the segment highlighted by arrows elucidates the deployment of the BDC Objectives, encompassing four discrete tasks germane to DV. A comprehensive definition of these tasks is deferred to Section V ###reference_###.\nTo enhance task-specific processing and facilitate knowledge transfer across different modalities, we introduce unique special tokens.\nFor example, as demonstrated in Figure 5 ###reference_###, the Text-to-Vis task utilizes a special token <> to prefix the NL question corpus and <> for the DV query corpus. In contrast, for the FeVisQA task, DV question-answer pairings are delineated with the tokens <> and <> to signify their respective components.\nT5-based MLM Objectives.\nThe application of Masked Language Modeling (MLM) as a pretraining objective is pivotal for pretraining encoder-decoder models. In our study, we employed the span corruption MLM strategy from [21 ###reference_b21###], where consecutive words in the input are replaced by sentinel tokens, and the decoder generates the omitted text, each instance preceded by its respective sentinel. To ensure consistency with the pretraining checkpoint, we maintained an average span length of 3 subword tokens across the input corpus and masked 15% of the subwords.
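For concreteness, the two objective types can be sketched as below; the `<nl>`/`<dv>` prefixes stand in for the paper's unspecified special tokens, and span lengths are fixed at 3 rather than sampled with a mean of 3, so this is a simplified illustration rather than the actual implementation.

```python
import random

def bdc_pair(nl_question, dv_query, rng):
    """Bidirectional Dual-Corpus sampling: with probability 0.5 the NL
    question serves as input and the DV query as output, otherwise the
    direction is reversed. '<nl>'/'<dv>' are placeholder special tokens."""
    if rng.random() < 0.5:
        return "<nl> " + nl_question, dv_query
    return "<dv> " + dv_query, nl_question

def span_corrupt(tokens, rng, mask_rate=0.15, span_len=3):
    """Simplified T5-style span corruption: replace non-overlapping spans
    of span_len consecutive tokens with sentinels; the target is each
    sentinel followed by the tokens it replaced."""
    n = len(tokens)
    n_spans = max(1, round(n * mask_rate / span_len))
    starts = []
    for s in rng.sample(range(n - span_len + 1), n - span_len + 1):
        if all(abs(s - t) >= span_len for t in starts):
            starts.append(s)
        if len(starts) == n_spans:
            break
    inputs, targets, i = [], [], 0
    for sid, s in enumerate(sorted(starts)):
        inputs += tokens[i:s] + [f"<extra_id_{sid}>"]
        targets += [f"<extra_id_{sid}>"] + tokens[s:s + span_len]
        i = s + span_len
    inputs += tokens[i:]
    return inputs, targets

query = ("visualize bar select name , count ( name ) from people "
         "group by name order by count ( name ) desc")
inp, tgt = span_corrupt(query.split(), random.Random(0))
```

With 21 whitespace-split tokens and a 15% mask rate, a single span of three tokens is replaced by one sentinel, in the spirit of the masked DV query shown in Figure 5.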
This MLM objective was applied to a cross-modal corpus comprising text, DV query, database schema, and table.\nOver a sequence of $n$ tokens, our T5-based MLM loss is defined as:\n$\mathcal{L}_{MLM}(\theta) = -\sum_{i=1}^{n} \log P(y_i \mid \hat{x}, y_{<i}; \theta)$,\nwhere $\theta$ are the model parameters, $y_i$ is the masked token predicted by the decoder, $\hat{x}$ represents the unmasked encoded inputs, and $y_{<i}$ is the sequence of tokens generated by the decoder up to but not including the $i$-th token.\nAn illustration is presented in Figure 5 ###reference_###, where the segments linked by dashed lines pertain to the T5-based MLM Objectives. This figure showcases the application of span denoising targets to a DV query. Within this query, the terms \u201cbar\u201d, \u201cpeople group\u201d, \u201cby\u201d, and \u201cdesc\u201d are selected at random. Subsequently, a subset of these terms is replaced by sentinel tokens, as illustrated in the figure.\nHybrid Objectives.\nAfter achieving the aforementioned two objectives, we create a hybrid objective by sampling from both the MLM Objectives and the BDC Objectives corpora. Consequently, each training mini-batch is composed of examples drawn from a cross-modal corpus, each formatted to align with diverse learning objectives.\nWe adopt a final hybrid loss:\n$\mathcal{L} = \mathcal{L}_{MLM} + \mathcal{L}_{BDC}$,\nwhich enables DataVisT5\u2019s readiness for multiple DV-related downstream tasks demanding contextual comprehension and pattern recognition." + }, + { + "section_id": "3.6", + "parent_section_id": "3", + "section_name": "III-F Multi-Task Fine-tuning", + "text": "To achieve better performance on multiple DV-related downstream tasks, we employ temperature mixing to combine the training data of all tasks. The temperature value is set to 2, following [21 ###reference_b21###]. Temperature up-sampling helps balance the influence of each task on the model by adjusting the probability of selecting data from each task during training. This prevents larger datasets from overpowering smaller ones.
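The temperature up-sampling step (T = 2) can be sketched as follows; the task sizes below are illustrative, taken from the corpus statistics reported in Section IV (NVBench for text-to-vis and vis-to-text, FeVisQA, and Chart2Text plus WikiTableText for table-to-text):

```python
def temperature_mixing(sizes, T=2.0):
    """Per-task sampling probabilities via temperature up-sampling: raise
    each dataset's example count to the power 1/T and renormalize, which
    flattens the size imbalance between tasks (T=1 recovers proportional
    sampling)."""
    shares = {task: count ** (1.0 / T) for task, count in sizes.items()}
    total = sum(shares.values())
    return {task: share / total for task, share in shares.items()}

sizes = {"text-to-vis": 25628, "vis-to-text": 25628,
         "fevisqa": 79305, "table-to-text": 34811 + 13318}
probs = temperature_mixing(sizes)  # the largest task's share shrinks
```

With T = 2 each count is square-rooted before normalization, so FeVisQA's sampling probability drops below its raw proportion while the smallest tasks are sampled more often.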
By merging training data from different tasks, the model is encouraged to learn representations that are beneficial across various corpora. Consequently, this leads to improved generalization and a more robust model capable of handling diverse DV tasks." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Pretraining Dataset Construction", + "text": "We have constructed a dataset tailored for our Hybrid Pretraining Objectives by integrating four public datasets. The following sections outline our pretraining dataset construction, detailing data collection in Section IV-A ###reference_###, data processing in Section IV-B ###reference_###, and data partitioning in Section IV-C ###reference_###." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Data Collection", + "text": "" + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "IV-A1 NVBench", + "text": "The NVBench dataset [15 ###reference_b15###] represents a publicly accessible NL2Vis corpus, containing 7,219 pairs of NL questions and their corresponding DV queries. It was originally curated to evaluate the efficacy of models in transforming textual queries into visual representations. As the most commonly utilized dataset in this domain, NVBench has been employed in several prominent studies, including those by [8 ###reference_b8###, 18 ###reference_b18###, 29 ###reference_b29###].\nTable I ###reference_### offers a detailed overview of the NVBench dataset, comprising 25,628 entries that have been collated from 152 distinct databases originating from the Spider dataset [30 ###reference_b30###]. To facilitate fair comparison with other established baselines as discussed in Section V ###reference_###, we meticulously separated the DV queries involving non-join operations from those that include join operations and performed an in-depth statistical analysis.
Specifically, the dataset contains 15,764 samples without join operations. DV queries that employ non-join operations, utilizing a single table, are showcased in Figure 3 ###reference_###. Conversely, DV queries featuring join operations, where multiple tables are engaged, are illustrated in Figure 4 ###reference_###." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "IV-A2 Chart2text.", + "text": "The chart-to-text conversion process, as introduced by [31 ###reference_b31###], constitutes a comprehensive benchmark incorporating two distinct datasets, cumulatively consisting of 44,096 charts that span an extensive array of subjects and graphical representations. The data for this benchmark originates from two primary sources: Statista333https://www.statista.com/ and the Pew Research Center444https://www.pewresearch.org/. The dataset derived from Statista includes various elements such as a screenshot of the chart image, the accompanying data table, the title, axis labels, and expertly crafted descriptive narratives concerning the chart content.\nConversely, the datasets sourced from the Pew Research Center typically lack the provision of underlying data tables for the majority of their charts. To align with our pre-training objectives, we have selectively utilized only the Statista component of the Chart2Text dataset. The quantitative details of the Chart2Text dataset are systematically tabulated in Table II ###reference_###, with a total of 34,811 instances documented for analysis." + }, + { + "section_id": "4.1.3", + "parent_section_id": "4.1", + "section_name": "IV-A3 WikiTableText.", + "text": "The WikiTableText dataset [32 ###reference_b32###] consists of 13,318 descriptive sentences that are aligned with 4,962 tables extracted from Wikipedia555https://www.wikipedia.org/.
These tables were retrieved via web scraping techniques and a subset of 5,000 tables was carefully curated to ensure that each table contained at least three rows and two columns, thereby meeting a predefined structural criterion. Quantitative characteristics of the WikiTableText dataset are meticulously cataloged in Table II ###reference_###, which enumerates a total of 13,318 instances for subsequent analysis." + }, + { + "section_id": "4.1.4", + "parent_section_id": "4.1", + "section_name": "IV-A4 FeVisQA", + "text": "The FeVisQA dataset, as presented in [12 ###reference_b12###], represents a pivotal asset in the nascent field of DV Question Answering. This dataset amalgamates a diverse set of rules and data sources to compile a comprehensive collection of question-and-answer pairs, integral for advancing research in this domain. It covers three principal types of questions:\nType 1: This question type probes the semantic interpretation of DVs. An example is, \u201cWhat is the meaning of this DV?\u201d which is illustrated as Question 1 in Figure 1 ###reference_###.\nType 2: Stemming from the associated task of DV recommendation, this category includes questions that assess the suitability of a DV for a given dataset. For instance, \u201cIs this DV suitable for the given dataset?\u201d The answers are structured to affirm compatibility or denote incompatibility, thus evaluating the alignment between a DV and its corresponding dataset.\nType 3: Questions pertaining to data retrieval and the structural aspects of DV. These are generated using a rule-based approach, ensuring a robust and consistent set of questions and answers. Question 3 and Question 4 in Figure 1 ###reference_### serve as exemplary instances of this category.\nComprehensive statistics of the FeVisQA dataset are encapsulated in Table III ###reference_###.
Similar to NVBench, FeVisQA leverages the 152 databases originating from the Spider dataset [30 ###reference_b30###], comprising a total of 79,305 free-form question-answer pairs." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Data Pre-processing", + "text": "To enhance the data quality and ensure compatibility with downstream tasks, we instituted the following pre-processing steps. Initially, we excluded incomplete natural language question samples (34/25662) from the NVBench dataset.\nSubsequently, to prevent sequence truncation during the Bidirectional Dual-Corpus objective\u2014which operates with a fixed token length\u2014we retained only those entries in the Chart2Text dataset where the total number of cells (determined by multiplying the number of rows by the number of columns) did not exceed 150. This step was deemed unnecessary for the WikiTableText dataset, as it inherently possesses a maximum cell count of 108, as delineated in Table II ###reference_###. After employing the filtration and encoding methods described in Sections III-B ###reference_###, III-C ###reference_###, and III-D ###reference_###, we constructed our pretraining corpus based on the type of data. The corpus is bifurcated into two segments:\nDual-Corpus Objectives Datasets. This segment is arranged according to the following mappings:\nNL + Schema \u2192 DV query\nDV query + Schema \u2192 Description\nTable \u2192 Description\nQuestion + DV query + Schema + Table \u2192 Answer\nAs shown in Figure 5 ###reference_###, the aforementioned four data types are sequentially presented.\nMLM Objectives Datasets. This segment amalgamates NL questions and database schemas from NVBench, DV queries, questions and answers from FeVisQA, and tables with their descriptions from Chart2Text and WikiTableText. These elements are integrated and then utilized to formulate the Masked Language Model (MLM) pretraining tasks.
To illustrate this, a sample DV query from NVBench, which has been subjected to masking, is provided in Figure 5 ###reference_###." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Data Partitioning", + "text": "After preprocessing the data, we proceeded with the partitioning process. Originating from the Spider dataset [30 ###reference_b30###], NVBench features a wide range of domains, including academic, railway, and scholar, which is conducive to cross-domain evaluation. The data from NVBench was divided into training, validation, and testing subsets, constituting 70%, 10%, and 20% of the dataset, respectively, to facilitate this cross-domain assessment.\nFurthermore, considering that FeVisQA utilizes databases from Spider, we maintained consistency with NVBench by applying the same cross-domain partitioning scheme. The partitioning of the data adheres to the original division as specified in Table II ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments and Results", + "text": "To comprehensively assess our pre-trained architecture and promote further study, we have assembled the Jointly Understanding Text and Data Visualization benchmark. This benchmark encompasses four extensively studied tasks: text-to-vis (Section V-B ###reference_###), vis-to-text (Section V-C ###reference_###), FeVisQA (Section V-D ###reference_###), and table-to-text (Section V-E ###reference_###).\nWe incorporate established datasets pertinent to these tasks. For each task, we delineate the task definition, baselines, evaluation metrics, corresponding results, and case studies. Additionally, we perform ablation studies on the critical design elements." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Implementation Details", + "text": "We conducted the pre-training of DataVisT5 over the course of five epochs using four NVIDIA 40GB A40 GPUs. 
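Picking up the cross-domain partitioning of Section IV-C, splitting at the database level (so no test-time domain appears in training) can be sketched as follows; a random shuffle is our assumption, as the paper does not state how databases are assigned to splits:

```python
import random

def cross_domain_split(databases, rng, ratios=(0.7, 0.1, 0.2)):
    """Partition whole databases (not individual samples) into
    train/validation/test so that every test-time domain is unseen
    during training."""
    dbs = sorted(databases)  # deterministic base order before shuffling
    rng.shuffle(dbs)
    n_train = int(len(dbs) * ratios[0])
    n_val = int(len(dbs) * ratios[1])
    return (dbs[:n_train],
            dbs[n_train:n_train + n_val],
            dbs[n_train + n_val:])

# 152 Spider databases, as used by NVBench and FeVisQA
train_dbs, val_dbs, test_dbs = cross_domain_split(
    [f"db_{i}" for i in range(152)], random.Random(0))
```

Samples are then routed to the split that contains their database, which keeps the 70/10/20 proportions approximately while guaranteeing disjoint domains.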
We standardized the maximum sequence lengths for both the input and output at 512 tokens. Our training regimen adopted a linear warm-up schedule with a 0.1 warm-up rate and set the learning rate to 5e-6. For optimization, we utilized the DeepSpeedCPUAdam optimizer with a weight decay of 0.01. To further enhance training efficiency, we implemented DeepSpeed\u2019s ZeRO Stage 2 offloading strategy with mixed precision (FP16) as described in [33 ###reference_b33###]. During the fine-tuning phase, the model exhibited significant sensitivity to hyperparameters, notably the learning rate and training epochs. A grid search was executed to determine the optimal parameters, with selection based on the performance metrics from the validation set across all models. Specifically, for multi-task fine-tuning, parameter optimization was informed by the mean performance across the four tasks." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Text-to-Vis", + "text": "Definition. Given a natural language query consisting of a question that articulates a user\u2019s request for a visualization, together with the schema of the relevant database, the goal of the text-to-vis task is to generate the appropriate DV query.\nBaselines. We evaluate DataVisT5 against several established baselines for the text-to-vis task. The Seq2Vis approach [15 ###reference_b15###] interprets the task as machine translation using a Seq2Seq model equipped with attention. The renowned Transformer architecture [34 ###reference_b34###] and the ncNet framework [8 ###reference_b8###], which enhances the Transformer with attention-forcing, serve as additional baselines. 
RGVisNet [29 ###reference_b29###] utilizes a two-stage process that first retrieves a DV query prototype and then revises it to fit the target scenario.\nFor the performance of LLMs, we explored in-context learning through 5-shot similarity prompting with GPT-4 [35 ###reference_b35###] and fine-tuning open-source LLMs such as Llama2-7b [36 ###reference_b36###] and Mistral-7b [37 ###reference_b37###] using LoRA [38 ###reference_b38###]. Using the CodeT5+ model [25 ###reference_b25###] as our base architecture, we employ single-task fine-tuning (SFT) without our novel pretraining as a comparison.\nTask-specific Corpus. For the fine-tuning phase of our text-to-vis task, we used the NVBench dataset, which was delineated in Section IV-A1 ###reference_.SSS1### and originally derived from our pre-training datasets. In contrast to the pre-training phase, the fine-tuning was conducted with a single training objective: NL + Schema \u2192 DV query.\nEvaluation Metrics.\nThe performance evaluation of our experiment adopts four metrics, analogous to those utilized in [15 ###reference_b15###]. Before delving into the specifics, note that each DV query comprises three key elements: the type of visualization (such as bar chart), the configuration of the axes (x/y/z), and the data with its transformation functions (e.g., group by). All four metrics are reported as fractions of the total count of test samples. 
The metrics are: (1) Exact Match (EM), which requires a complete match between the predicted and reference DV queries, (2) Visualization EM (Vis EM), assessing the accuracy of predicted visualization types, (3) Data EM, focused on data points with transformation functions, and (4) Axis EM, evaluating the congruence of axis components.\nResults.\nResults from Table IV ###reference_### show that foundational models like Seq2Vis and Transformer underperform in cross-domain settings.\nCompared to the previous state-of-the-art, RGVisNet, our multi-task fine-tuned model exhibited a significant 46.15% improvement in the EM metric on datasets without join operations. Furthermore, it outperformed the in-context learning approach using GPT-4 in scenarios involving join operations, enhancing the EM metric by 44.59% and 49.2%. Notably, in these scenarios, where models such as ncNet and RGVisNet have historically struggled, our model achieved an EM of 0.3451. In comparison to high-parameter (7b) open-source LLMs, our 220M DataVisT5 model performed comparably, while the 770M DataVisT5, with only 11% of the parameters, achieved optimal performance.\nCase Study.\nWe illustrate the effectiveness of our DataVisT5 model in generating DV queries compared to other baseline models in Table V ###reference_###. When processing an NL input, the Seq2Vis model fails to recognize essential keywords such as visualize and group by, and incorrectly predicts a bar chart instead of the required scatter chart. The Transformer model, although correct in predicting the visualization type, omits significant information. A similar limitation is observed with ncNet, which, despite generating complex DV queries, fails to include the group by transformation function. RGVisNet accurately maps the term \u2019price\u2019 to the \u2019baseprice\u2019 column in the rooms table but does not produce the correct aggregate functions, avg and min. The SFT CodeT5+ incorrectly predicts the elements for group by. 
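The component-wise comparison behind the EM metrics above (visualization type, axis/data selection, and transformations) can be sketched as follows; this is a simplified illustration based on the DV query grammar used in the examples, not the official evaluation script:

```python
def query_components(dv_query):
    """Split a DV query of the form
    'visualize TYPE select COLS from TABLE [group by ...]'
    into the three component groups compared by the EM metrics
    (a simplified sketch, not the official evaluator)."""
    tokens = dv_query.lower().split()
    vis_type = tokens[tokens.index("visualize") + 1]
    sel = tokens.index("select")
    frm = tokens.index("from")
    axes = " ".join(tokens[sel + 1:frm])        # axis/data columns
    transform = " ".join(tokens[frm + 1:])      # source table, group by, etc.
    return vis_type, axes, transform

def exact_match(pred, gold):
    """Full EM: every component group must match."""
    return query_components(pred) == query_components(gold)
```

On the case-study query, this decomposition makes the failure modes explicit: a wrong chart type breaks Vis EM, while a wrong group by column breaks only the transformation component.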
In contrast, our MFT DataVisT5 model accurately constructs the query \u201cvisualize scatter select avg(rooms.baseprice), min(rooms.baseprice) from rooms group by rooms.decor\u201d, uniquely achieving the correct visualization results.\n###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11###" + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Vis-to-Text", + "text": "###figure_12### Definition. When provided with a DV query and a database that includes a schema, the vis-to-text task focuses on creating an intelligible textual description that explains the DV query within the context of the database schema.\nBaselines. For our evaluation, we selected several established models and LLMs: an enhanced Seq2Seq model, which incorporates an attention mechanism as described by [34 ###reference_b34###] to improve the interaction between the encoder and decoder; the vanilla Transformer model as introduced in the context of text-to-vis tasks; BART [39 ###reference_b39###], a transformer-based model that combines bidirectional encoding with auto-regressive decoding; CodeT5+, our base architecture; GPT-4 in a zero-shot setting; and Llama2-7b and Mistral-7b, both with LoRA fine-tuning.\nTask-specific Corpus. The unidirectional training target for the vis-to-text task was structured as DV query + Schema \u2192 Description. We employed the NVBench dataset, as referenced in Section IV-A1 ###reference_.SSS1###, analogous to the dataset used for the text-to-vis task. A notable distinction for the vis-to-text task lies in the inherent one-to-many relationship, where a single DV query may correspond to multiple descriptions. 
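One simple way to collapse this one-to-many mapping is to keep a single description per DV query; the first-seen rule below is our own placeholder, as the paper does not specify its selection criterion:

```python
def dedup_descriptions(pairs):
    """Keep one representative description per DV query.
    `pairs` is an iterable of (dv_query, description) tuples; the first
    description encountered for each query is retained (a sketch -- the
    paper's actual selection rule may differ)."""
    chosen = {}
    for query, desc in pairs:
        chosen.setdefault(query, desc)
    return chosen
```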
To establish a definitive corpus for subsequent fine-tuning and evaluation, we selected a single representative description from the available alternatives.\nEvaluation Metrics.\nTo assess the quality of the generated textual descriptions, we employed three metrics: BLEU [40 ###reference_b40###], ROUGE [41 ###reference_b41###], and METEOR [42 ###reference_b42###]. (1) BLEU measures the precision of n-gram overlaps with reference texts, modified by a brevity penalty. (2) In contrast, ROUGE emphasizes recall, assessing the extent of n-gram overlap. (3) METEOR surpasses BLEU in mimicking human judgement by considering exact matches, stemming, and synonyms, and by penalizing word order differences. Specifically, we report BLEU scores at the unigram, bigram, and four-gram levels (BLEU-1, BLEU-2, BLEU-4), and ROUGE F1 scores for unigrams (ROUGE-1), bigrams (ROUGE-2), and longest common subsequences (ROUGE-L).\nResults. As detailed in Table VI ###reference_###, the traditional Seq2Seq and Transformer models significantly underperform compared to other models, limited by their parameter size. Although GPT-4 outperforms traditional models in a zero-shot setting, SFT BART, benefiting from a structure that combines context awareness with autoregressive features, shows superior performance. Moreover, the LoRA fine-tuned open-source LLMs Llama2-7b and Mistral-7b, despite having far more parameters, do not perform as well as BART on the vis-to-text task. Although our base architecture CodeT5+, enhanced through single-task fine-tuning, shows competitive performance, our proposed DataVisT5 in both the 220M and 770M configurations achieves the best performance.\nCase Study.\nIn the comparative analysis presented in Table VII ###reference_###, the Seq2Seq model produces outputs that significantly deviate from the ground truth, indicating a disjointed understanding. 
The Transformer model, while capturing the basic structure of a bar chart and its ascending order, uses imprecise language that muddles the details. The SFT BART model makes progress by accurately suggesting a bar chart in ascending order but is hampered by suboptimal phrasing. The SFT CodeT5+ model, although closely aligned with the ground truth, fails to grasp the significance of the term lname in the visualization context. In stark contrast, our DataVisT5 model, powered by a 770M parameter architecture and enhanced through MFT, provides a concise and clear directive that delineates the required bar chart with an ascending Y-axis, categorizing students without food allergies by last name, thus closely mirroring the ground truth." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "FeVisQA", + "text": "Definition. The FeVisQA task is designed to formulate an answer to a DV-related question by leveraging a database that encompasses a schema and tables, all in service of elucidating DV concepts.\nBaselines. In addressing the FeVisQA task, we adopted the same ensemble of baseline models previously applied to the vis-to-text task. This ensemble includes an attention-enhanced Seq2Seq model, the Transformer model, the SFT versions of the base BART and CodeT5+ models, along with a zero-shot GPT-4, and LoRA fine-tuned Llama2-7b and Mistral-7b.\nTask-specific Corpus. The FeVisQA task necessitated the formulation of a unidirectional training objective, structured as: Question + DV query + Schema + Table \u2192 Answer. We utilized the FeVisQA dataset for this purpose, which is elaborated upon in Section IV-A4 ###reference_.SSS4### and originates from the pre-training datasets.\nResults. As shown in Table VIII ###reference_###, in the FeVisQA task the MFT DataVisT5 model with 770M parameters outperforms competitors across all metrics. 
Compared to SFT CodeT5+ with an identical parameter setting of 770M, DataVisT5 exhibited a significant 10.92% increase in the METEOR score post fine-tuning, underscoring its remarkable proficiency in answering free-form questions.\nThis enhanced performance can be attributed to the integration of textual information and DV knowledge during the DataVisT5 pre-training phase, which effectively facilitates the model\u2019s understanding of the complex cross-modal relationship between text and DV.\n###figure_13### ###figure_14### Case Study.\nUpon reviewing the outcomes documented in Table X ###reference_###, we observe that the Seq2Seq, Transformer, and SFT BART models exhibit various discrepancies from the ground truth. The Seq2Seq model consistently produces incorrect responses, indicating significant misalignment. The Transformer model correctly identifies the smallest chart segment but lacks consistency on other queries. SFT BART correctly identifies the number of chart segments but often overestimates numerical values. While the SFT CodeT5+ model answers most questions correctly, it inaccurately responds to \u201cWhat is the total number of count(film.type)?\u201d. In contrast, our DataVisT5 model is the only one that consistently provides accurate answers across both binary and numerical inquiries." + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Table-to-Text", + "text": "###figure_15### Definition.\nWith a table as the input, the table-to-text task focuses on producing a clear, readable narrative that captures and clarifies the essence of the data within the table.\nBaselines. Consistent with the previous vis-to-text and FeVisQA tasks, which also focus on text generation, we selected foundational seq-to-seq models for our analysis: a Seq2Seq model with an attention mechanism, the original Transformer model, and fine-tuned versions of the base BART and CodeT5+ models, specifically tailored for single-task applications. 
Additionally, we included a zero-shot GPT-4 model and LoRA fine-tuned Llama2-7b and Mistral-7b in our evaluation.\nTask-specific Corpus. For the table-to-text task, we formulated the unidirectional training target as Table \u2192 Description, utilizing a pre-processed pre-training corpus. We amalgamated two publicly accessible datasets, Chart2Text and WikiTableText, which are elaborated upon in Section IV-A2 ###reference_.SSS2### and Section IV-A3 ###reference_.SSS3###.\nResults. As shown in Table VIII ###reference_###, our MFT 770M DataVisT5 model outperforms competing approaches in the table-to-text task, achieving the highest METEOR score of 0.6227. This demonstrates DataVisT5\u2019s exceptional ability to generate textual descriptions from tabular data. Foundational models such as Seq2Seq and the Transformer struggle with understanding tables, while the commonly used SFT BART model performs comparably to SFT CodeT5+ (770M) but is still outpaced by DataVisT5. Moreover, GPT-4 and the open-source LLMs also underperform compared to our model. This superior performance is attributed to DataVisT5\u2019s integration of textual information and DV knowledge during pre-training.\nCase Study.\nAs detailed in Table XI ###reference_###, the Seq2Seq model\u2019s output significantly diverged from the ground truth, producing redundant and irrelevant text without the needed factual content. The Transformer model inaccurately identified the subject as a movie brand rather than a publisher, missing essential details. Although SFT BART correctly identified the publication year and the work\u2019s nature, it misattributed the publisher. In contrast, while the SFT CodeT5+ model\u2019s responses were only semantically close to the ground truth, our model consistently generated descriptions that precisely matched it. 
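The flattened table format used throughout these examples (e.g., "| col : ... row 1 : ...") can be produced by a small linearization helper; the sketch below is reconstructed from the sample sequences in Table IX, not taken from the authors' code:

```python
def linearize_table(headers, rows):
    """Flatten a table into the sequence format used for table inputs
    in this paper's examples, e.g.
    '| col : a | b row 1 : x | y row 2 : ...'
    (a sketch reconstructed from the sample sequences)."""
    parts = ["| col : " + " | ".join(headers)]
    for i, row in enumerate(rows, start=1):
        parts.append(f"row {i} : " + " | ".join(str(c) for c in row))
    return " ".join(parts)
```

Linearizing tables this way lets a text-to-text model such as T5 consume tabular inputs with no architectural changes.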
+ }, + { + "section_id": "5.6", + "parent_section_id": "5", + "section_name": "Ablation Studies", + "text": "We conduct experiments to verify the effectiveness of each critical design component in the proposed DataVisT5. Specifically, we establish the MFT DataVisT5 (770M) with all designed components as the baseline. We created variants of DataVisT5 by omitting the BDC objective in the pretraining stage, removing temperature up-sampling during MFT, and evaluating without MFT in a zero-shot setting. Additionally, we compare the use of SFT and MFT, and CodeT5+ versus T5-large as the starting point. From Table XII ###reference_###, it is evident that removing or replacing the designed components degrades the mean performance across the four tasks, which indicates the effectiveness of the critical design components in DataVisT5." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Related Work", + "text": "" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "VI-A Pre-training for Data Engineering Tasks", + "text": "Substantial work has shown that pre-trained models learn effective language representations that benefit downstream tasks [43 ###reference_b43###, 44 ###reference_b44###, 45 ###reference_b45###, 20 ###reference_b20###, 21 ###reference_b21###, 46 ###reference_b46###, 47 ###reference_b47###, 48 ###reference_b48###].\nThis success has also driven the development of pre-training on machine languages, i.e., text in specialized formats such as code and SQL.\nCodeBERT [49 ###reference_b49###] is a bimodal pre-trained model for natural language and programming language with a BERT-like architecture, showing that pre-training can improve performance on code-related tasks.\nTaBERT [50 ###reference_b50###], TAPAS [28 ###reference_b28###]\nand GraPPa [51 ###reference_b51###] extend pre-trained models to learn a joint representation of NL text and database tables and demonstrate the 
effectiveness of such joint representations on semantic parsing tasks.\nBased on pre-trained language models, Rat-SQL [52 ###reference_b52###] and Proton [53 ###reference_b53###] enhance text-to-SQL parsing by focusing on schema linking and alignment, whereas StruG [54 ###reference_b54###] specifically targets improvements in text-table alignment.\nMoreover, the development of domain-adapted pre-trained models, such as CodeT5 [22 ###reference_b22###] for code understanding and generation, MolT5 [23 ###reference_b23###] for molecule captioning and generation, and BioT5 [55 ###reference_b55###], which integrates cross-modal data in the biological domain with chemical structures and linguistic contexts, highlights the importance of specialized training beyond a generic T5 framework. These adaptations emphasize the necessity of domain-specific fine-tuning to effectively capture the contextual nuances inherent in specialized corpora." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "VI-B DV-related Tasks", + "text": "Benefiting from the convenience of visualization, various studies related to DV, including text-to-vis, vis-to-text, free-form question answering over DV, and table-to-text, have attracted considerable research interest within the community.\nThe initial text-to-vis systems were based on predefined rules or templates [56 ###reference_b56###, 57 ###reference_b57###, 58 ###reference_b58###, 59 ###reference_b59###]. Although efficient, these systems were limited in their ability to handle the linguistic variability of user queries. To overcome these limitations, researchers have turned to neural network-based methods. For example, Data2Vis [60 ###reference_b60###] conceptualizes visualization generation as a sequence translation task, employing an encoder-decoder neural architecture. 
Similarly, RGVisNet [29 ###reference_b29###] initiates the text-to-vis process by retrieving a relevant query prototype, refining it through a graph neural network (GNN) model, and then adjusting the query to fit the target scenario. Concurrently, vis-to-text has been proposed as a complementary task, with improvements in performance demonstrated through a dual training framework [18 ###reference_b18###]. Song et al. [12 ###reference_b12###] further define the task of free-form question answering over DV and introduce the FeVisQA dataset, aiming to enhance the understanding of data and its visualizations.\nMoreover, learning-based approaches have demonstrated exceptional performance in visual data wrangling and analytical tasks.\nFor instance, Liu et al. [61 ###reference_b61###] and Obeid and Hoque [62 ###reference_b62###] have successfully translated visual data into textual descriptions and automated natural language summaries for charts using transformer-based architectures, respectively. In a similar vein, Spreafico and Carenini [63 ###reference_b63###] have employed LSTM-based and neural network models to summarize time-series and chart data. Additionally, Kantharaj et al. [31 ###reference_b31###] have contributed a benchmark for chart summarization.\nFurthermore, Juno [64 ###reference_b64###], a cross-modal entity matching framework, has been developed to contextualize information retrieved from visually rich documents and gather actionable insights, thereby addressing challenges posed by the ad-hoc and often incomplete information in such documents.
This model introduces a unique mechanism to capture highly relevant database schemas from natural language mentions of tables, effectively unifying and normalizing the encoding of DV knowledge, including DV queries, database schemas, and tables. Our novel hybrid pre-training objectives unravel the complex interplay between DV and textual data, fostering a deeper integration of cross-modal insights. By extending the text-centric T5 architecture to adeptly process cross-modal information, DataVisT5 addresses multiple tasks related to DV with remarkable performance.\nOur extensive experimental results demonstrate that DataVisT5 consistently outperforms SOTA models and even higher-parameter LLMs across a wide range of DV tasks, expanding PLM applications and pushing the boundaries of what is achievable in automated data visualization and interpretation." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: The statistics of the NVBench dataset
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Number of instancesNumber of databases
SplitNVBench w/o joinNVBenchNVBench w/o joinNVBench
Train105641678098106
Valid253935051516
Test266153432730
Total1576425628140152
\n
\n
", + "capture": "TABLE I: The statistics of the NVBench dataset" + }, + "2": { + "table_html": "
\n
TABLE II: The statistics of the Chart2Text and WikiTableText datasets
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Number of instancesNumber of cells
SplitChart2TextWikiTableTextMetricsChart2TextWikiTableText
Train2436810000Min.427
Valid52221318Max.8000108
Test52212000\n1503427213318
Total3481113318\n1505390
\n
\n
", + "capture": "TABLE II: The statistics of the Chart2text and WikiTableText datasets" + }, + "3": { + "table_html": "
\n
TABLE III: The statistics of the FeVisQA dataset
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Number of instancesNumber of questions
SplitdatabasesQA pairDV queryType 1Type 2Type 3
Train1065440691694799916631272
Valid169290160384415795264
Test30156092542145325019113
Total152793051331370961324645650
\n
\n
", + "capture": "TABLE III: The statistics of the FeVisQA dataset" + }, + "4": { + "table_html": "
\n
TABLE IV: Comparative evaluation of text-to-vis models and LLMs on the cross-domain NVBench test dataset: NVBench without join operations and complete NVBench with join operations. Best results are highlighted in bold.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelSettingNVBench w/o join operationNVBench w/ join operation
-Vis EMAxis EMData EMEMVis EMAxis EMData EMEM
Seq2Vis0.80270.00000.00240.00000.83420.00000.00000.0000
Transformer0.85980.00710.06460.00240.97980.00210.04040.0000
ncNet0.93110.24420.51520.1465\u2014\u2014\u2014\u2014
RGVisNet0.97010.59630.54230.4675\u2014\u2014\u2014\u2014
CodeT5+ (220M)+SFT0.97950.78890.62390.60100.98430.40650.34250.2968
CodeT5+ (770M)+SFT0.98270.78500.66960.66680.98650.40240.37130.3399
GPT-4 (5-shot)+Similarity0.97000.55070.64250.47260.97900.27550.37080.2313
LLama2-7b+LoRA0.93230.74320.62030.64200.94460.42810.31740.3327
Mistral-7b+LoRA0.98210.77530.66490.67610.92460.43100.33860.3374
DataVisT5 (220M)+MFT0.98270.80780.66800.66880.98730.41230.35860.3324
DataVisT5 (770M)+MFT0.98500.79830.67700.68330.98840.41120.38630.3451
\n
\n
", + "capture": "TABLE IV: Comparative evaluation of text-to-vis models and LLMs performance on the cross-domain NVBench test dataset: non-join operations and complete NVBench with join operations. Best results are highlighted in bold." + }, + "5": { + "table_html": "
\n
TABLE V: The DV query examples generated by various text-to-vis models from NVBench
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
NL Question\n\n\n\nJust show the average and minimum price of the rooms in different decor using a scatter.\n
Database Schema\n\n\n\n| inn_1 | rooms : rooms.roomid, rooms.roomname, rooms.bedtype, rooms.baseprice, rooms.decor\n
Ground-truth\n\n\n\nvisualize scatter select avg(rooms.baseprice), min(rooms.baseprice) from rooms group by rooms.decor\n
Seq2Vis ()\n\n\n\nvisualize bar select location, count(company.location) from company group by\n\ncompany.location Figure\u00a06(a)\n
Transformer ()\n\n\n\nvisualize scatter select addresses.address_id, election.vote_percent from Figure\u00a06(b)\n
ncNet ()\n\n\n\nvisualize scatter select rooms.name, rooms.employee_id from rooms where\n\nrooms.first_name like \u2019%s%\u2019 Figure\u00a06(c)\n
RGVisNet ()\n\n\n\nvisualize scatter select max(rooms.baseprice), rooms.decor from rooms Figure\u00a06(d)\n
CodeT5+ ()\n\n\n\nvisualize scatter select avg(rooms.baseprice), min(rooms.baseprice) from rooms\n\ngroup by rooms.baseprice Figure\u00a06(e)\n
Ours (\u2713)\n\n\n\nvisualize scatter select avg(rooms.baseprice), min(rooms.baseprice) from rooms\n\ngroup by rooms.decor Figure\u00a06(f)\n
\n
\n
", + "capture": "TABLE V: The DV query examples generated by various text-to-vis models from NVBench" + }, + "6": { + "table_html": "
\n
TABLE VI: Comparative performance analysis of models and LLMs for the vis-to-text task. Best results are highlighted in bold.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodSettingBLEU-1BLEU-2BLEU-4ROUGE-1ROUGE-2ROUGE-LMETEOR
Seq2Seq0.27660.15200.02960.35710.13430.28930.2528
Transformer0.28250.16350.03450.36340.14760.29580.2755
BART+SFT0.43010.28920.10090.47210.22090.36470.4586
CodeT5+(220M)+SFT0.44310.30600.12360.48730.24030.37700.4872
CodeT5+(770M)+SFT0.45180.31540.12780.48980.24310.39280.4965
GPT-4 (0-shot)0.38430.22100.03870.41800.15270.29250.4350
LLama2-7b+LoRA0.30290.15200.03140.35810.10550.27330.3028
Mistral-7b+LoRA0.35120.24310.08970.44020.21580.35490.3925
DataVisT5 (220M)+MFT0.45840.31600.12450.50000.24370.39780.4986
DataVisT5 (770M)+MFT0.45660.31550.13320.49740.24600.39860.4851
\n
\n
", + "capture": "TABLE VI: Comparative performance analysis of models and LLMs for vis-to-text task . Best results are highlighted in bold." + }, + "7": { + "table_html": "
\n
TABLE VII: The description examples generated by various vis-to-text methods
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DV query\n\n\n\nFigure\u00a07 visualize bar select student.lname, count(student.lname) from student where stuid not in\n\n(select has_allergy.stuid from has_allergy join allergy_type on has_allergy.allergy = allergy_type.allergy\n\nwhere allergy_type.allergytype = \u2019food\u2019) group by lname order by count(student.lname) asc\n\n
Database Schema\n\n\n\n| allergy_1 | allergy_type : allergy_type.allergy, allergy_type.allergytype | has_allergy : has_allergy.\n\nstuid, has_allergy.allergy | student : student.stuid, student.lname, student.fname, student.age, student.sex,\n\nstudent.major, student.advisor, student.city_code\n\n
Ground-truth\n\n\n\nList the last name of the students who do not have any food type allergy and count them in a bar chart,\n\nshow Y-axis from low to high order.\n\n
\nSeq2Seq ()\n\n\n\n\nfor a bar chart for the the number of the that have the that have the , and a bar chart, and a bar chart.\n\n
\nTransformer ()\n\n\n\n\nFind the last names that some last name when that are not steered by any last name as well using a\n\nbar chart , and rank by the number of last name in asc .\n\n
\nBART ()\n\n\n\n\nA bar chart for finding the number of the names of all students who do not have any allergy with\n\nthe allergy type \u201dFood\u201d, and could you display in ascending by the y-axis?\n\n
\nCodeT5+ ()\n\n\n\n\nFind the number of students who do not have any allergy type for food in each lname with a bar chart.\n\n
Ours (\u2713)\n\n\n\nGive the number of students who do not have any allergy for food in each last name, show by the\n\ny-axis from low to high with a bar chart.\n\n
\n
\n
", + "capture": "TABLE VII: The description examples generated by various vis-to-text methods" + }, + "8": { + "table_html": "
\n
TABLE VIII: Comparative performance analysis for FeVisQA and table-to-text tasks, with the top metric scores highlighted.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodSettingFeVisQATable-to-Text
-BLEU-1ROUGE-1ROUGE-LMETEORBLEU-4ROUGE-1ROUGE-LMETEOR
Seq2Seq0.36420.37550.36830.19550.15750.45390.39950.3324
Transformer0.28680.29840.29030.15560.08750.38380.31520.2642
BART+SFT0.73790.73910.72900.43760.38240.63140.55490.5845
CodeT5+(220M)+SFT0.68130.68010.66940.40860.38140.61830.54500.5844
CodeT5+(770M)+SFT0.70390.70320.69300.42110.38480.62840.55110.5946
GPT-4 (0-shot)0.11480.17310.15990.23120.15650.42770.32810.4146
LLama2-7b+LoRA0.42140.43360.42230.25820.20100.49880.45230.3923
Mistral-7b+LoRA0.74040.76710.75740.42510.20030.50020.45380.3948
DataVisT5 (220M)+MFT0.71640.71580.70510.42730.38220.62590.54780.5926
DataVisT5 (770M)+MFT0.78930.78950.77880.46710.41990.65200.57750.6227
\n
\n
", + "capture": "TABLE VIII: Comparative performance analysis for FeVisQA and table-to-text tasks highlighted by top metric scores." + }, + "9": { + "table_html": "
\n
TABLE IX: Sequence formats of DV knowledge used in the FeVisQA case study
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DV query\n\n\n\nFigure\u00a08(a) visualize bar select film.type, count(film.type) from film join film_market_estimation\n\non film.film_id = film_market_estimation.film_id group by type order by type asc\n\n
Table\n\n\n\nFigure\u00a08(b) | col : film.type | count(film.type) row 1 : Mass human sacrifice | 1 row 2 : Mass suicide\n\n| 6 row 3 : Mass suicide murder | 2\n\n
Database Schema\n\n\n\n| film_rank | film : film.film_id, film.title, film.studio, film.director, film.gross_in_dollar\n\n| film_market_estimation : film_market_estimation.estimation_id, film_market_estimation.low_estimate,\n\nfilm_market_estimation.high_estimate, film_market_estimation.film_id, film_market_estimation.type,\n\nfilm_market_estimation.market_id, film_market_estimation.year\n\n
\n
\n
", + "capture": "TABLE IX: Sequence formats of DV Knowledge used in FeVisQA case study" + }, + "10": { + "table_html": "
\n
TABLE X: The answer examples generated by various FeVisQA methods
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DV QuestionGround-truthSeq2SeqTransformerBARTCodeT5+Ours
\n\n\n\nIs any equal value of y-axis in the chart?\nNoYes ()Yes ()Yes ()No (\u2713)No (\u2713)
\n\n\n\nHow many parts are there in the chart?\n35 ()4 ()3 (\u2713)3 (\u2713)3 (\u2713)
\n\n\n\nWhat is the value of the smallest part in the chart?\n12 ()1 (\u2713)1 (\u2713)1 (\u2713)1 (\u2713)
\n\n\n\nWhat is the total number of count(film.type)?\n911 ()12 ()15 ()10 ()9 (\u2713)
\n
\n
", + "capture": "TABLE X: The answer examples generated by various FeVisQA methods" + }, + "11": { + "table_html": "
\n
TABLE XI: The description examples generated by various table-to-text methods
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Table\n\n\n\nFigure\u00a09 | col : subjtitle | subjsubtitle | year | english title | publisher | notes row 1 :\n\nso ji-sub | books | 2010 | so ji-sub\u2019s journey | sallim | photo-essays\n
Ground-truth\n\n\n\nSallim was the publisher of so ji-sub\u2019s journey in 2010.\n
Seq2Seq ()\n\n\n\nthe format of was was was was was was.\n
Transformer ()\n\n\n\nIn movie brand played in 2010.\n
BART ()\n\n\n\nSo ji-sub\u2019s journey was published by photo-essays in 2010.\n
CodeT5+ ()\n\n\n\nSo ji-sub\u2019s journey was published by sallim in 2010.\n
Ours ()\n\n\n\nSallim was the publisher of so ji-sub\u2019s journey in 2010.\n
\n
\n
", + "capture": "TABLE XI: The description examples generated by various table-to-text methods" + }, + "12": { + "table_html": "
\n
TABLE XII: Ablation study results: average metric values per task multiplied by 100
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | Method | text-to-vis | vis-to-text | FeVisQA | table-to-text | Mean
DataVisT5 (770M) | MFT | 65.22 | 36.18 | 70.62 | 56.80 | 57.21
w/o BDC | | 64.49 (-0.73) | 36.16 (-0.02) | 69.26 (-1.36) | 55.83 (-0.97) | 56.44 (-0.77)
w/o up-sampling | | 62.95 (-2.27) | 36.41 (+0.23) | 70.69 (+0.07) | 56.34 (-0.46) | 56.60 (-0.61)
w/o MFT | | 62.36 (-2.87) | 37.12 (+0.94) | 67.35 (-3.27) | 53.98 (-2.82) | 54.93 (-2.28)
DataVisT5 (770M) | SFT | 65.01 (-0.21) | 36.50 (+0.32) | 70.73 (+0.11) | 55.67 (-1.13) | 56.98 (-0.23)
CodeT5+ (770M) | SFT | 62.79 (-2.43) | 35.96 (-0.22) | 63.03 (-7.59) | 53.97 (-2.83) | 53.94 (-3.27)
T5-large | SFT | 61.34 (-3.88) | 33.58 (-2.60) | 61.90 (-8.72) | 52.03 (-4.77) | 52.21 (-5.00)
\n
\n
", + "capture": "TABLE XII: Ablation study results: average metric values per task multiplied by 100" + } + }, + "image_paths": { + "1": { + "figure_path": "2408.07401v2_figure_1.png", + "caption": "Figure 1: An illustration depicting the text-to-vis, vis-to-text, table-to-text, and free-form question-answering over data visualization problems, showcasing examples including an NL question, a DV query, a DVL visualization specification, a table description, a visualization chart, and four question-answer pairs.", + "url": "http://arxiv.org/html/2408.07401v2/x1.png" + }, + "2": { + "figure_path": "2408.07401v2_figure_2.png", + "caption": "Figure 2: The pipeline of DataVisT5.", + "url": "http://arxiv.org/html/2408.07401v2/x2.png" + }, + "3": { + "figure_path": "2408.07401v2_figure_3.png", + "caption": "Figure 3: Examples of DV Knowledge Encoding and Standardized Encoding from NVBench.", + "url": "http://arxiv.org/html/2408.07401v2/x3.png" + }, + "4": { + "figure_path": "2408.07401v2_figure_4.png", + "caption": "Figure 4: A Standardized DV Query with a join operation example.", + "url": "http://arxiv.org/html/2408.07401v2/x4.png" + }, + "5": { + "figure_path": "2408.07401v2_figure_5.png", + "caption": "Figure 5: Overview of hybrid pre-training objectives. The solid lines denote the Bidirectional Dual-Corpus objectives, which facilitate the learning of language representation by leveraging bidirectional context. 
The dashed lines represent the T5-based MLM objectives, designed to reconstruct the original input from masked tokens.", + "url": "http://arxiv.org/html/2408.07401v2/x5.png" + }, + "6(a)": { + "figure_path": "2408.07401v2_figure_6(a).png", + "caption": "(a) Seq2Vis\nFigure 6: Visualization formats of DV Query generated by various text-to-vis models", + "url": "http://arxiv.org/html/2408.07401v2/extracted/6029870/figures/error.png" + }, + "6(b)": { + "figure_path": "2408.07401v2_figure_6(b).png", + "caption": "(b) Transformer\nFigure 6: Visualization formats of DV Query generated by various text-to-vis models", + "url": "http://arxiv.org/html/2408.07401v2/extracted/6029870/figures/error.png" + }, + "6(c)": { + "figure_path": "2408.07401v2_figure_6(c).png", + "caption": "(c) ncNet\nFigure 6: Visualization formats of DV Query generated by various text-to-vis models", + "url": "http://arxiv.org/html/2408.07401v2/extracted/6029870/figures/error.png" + }, + "6(d)": { + "figure_path": "2408.07401v2_figure_6(d).png", + "caption": "(d) RGVisNet\nFigure 6: Visualization formats of DV Query generated by various text-to-vis models", + "url": "http://arxiv.org/html/2408.07401v2/x6.png" + }, + "6(e)": { + "figure_path": "2408.07401v2_figure_6(e).png", + "caption": "(e) CodeT5+\nFigure 6: Visualization formats of DV Query generated by various text-to-vis models", + "url": "http://arxiv.org/html/2408.07401v2/x7.png" + }, + "6(f)": { + "figure_path": "2408.07401v2_figure_6(f).png", + "caption": "(f) Ours\nFigure 6: Visualization formats of DV Query generated by various text-to-vis models", + "url": "http://arxiv.org/html/2408.07401v2/x8.png" + }, + "7": { + "figure_path": "2408.07401v2_figure_7.png", + "caption": "Figure 7: Visualization Chart", + "url": "http://arxiv.org/html/2408.07401v2/x9.png" + }, + "8(a)": { + "figure_path": "2408.07401v2_figure_8(a).png", + "caption": "(a) Visualization Chart\nFigure 8: Visualization formats of DV Knowledge used in FeVisQA case study", + 
"url": "http://arxiv.org/html/2408.07401v2/x10.png" + }, + "8(b)": { + "figure_path": "2408.07401v2_figure_8(b).png", + "caption": "(b) Table\nFigure 8: Visualization formats of DV Knowledge used in FeVisQA case study", + "url": "http://arxiv.org/html/2408.07401v2/extracted/6029870/figures/QA_table.png" + }, + "9": { + "figure_path": "2408.07401v2_figure_9.png", + "caption": "Figure 9: Table used in table-to-text case study", + "url": "http://arxiv.org/html/2408.07401v2/extracted/6029870/figures/table2text_table.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2408.07401v2" +} \ No newline at end of file