diff --git "a/intro_28K/test_introduction_long_2405.04674v1.json" "b/intro_28K/test_introduction_long_2405.04674v1.json" new file mode 100644--- /dev/null +++ "b/intro_28K/test_introduction_long_2405.04674v1.json" @@ -0,0 +1,105 @@ +{ + "url": "http://arxiv.org/abs/2405.04674v1", + "title": "Towards Accurate and Efficient Document Analytics with Large Language Models", + "abstract": "Unstructured data formats account for over 80% of the data currently stored,\nand extracting value from such formats remains a considerable challenge. In\nparticular, current approaches for managing unstructured documents do not\nsupport ad-hoc analytical queries on document collections. Moreover, Large\nLanguage Models (LLMs) directly applied to the documents themselves, or on\nportions of documents through a process of Retrieval-Augmented Generation\n(RAG), fail to provide high accuracy query results, and in the LLM-only case,\nadditionally incur high costs. Since many unstructured documents in a\ncollection often follow similar templates that impart a common semantic\nstructure, we introduce ZenDB, a document analytics system that leverages this\nsemantic structure, coupled with LLMs, to answer ad-hoc SQL queries on document\ncollections. ZenDB efficiently extracts semantic hierarchical structures from\nsuch templatized documents, and introduces a novel query engine that leverages\nthese structures for accurate and cost-effective query execution. Users can\nimpose a schema on their documents, and query it, all via SQL. Extensive\nexperiments on three real-world document collections demonstrate ZenDB's\nbenefits, achieving up to 30% cost savings compared to LLM-based baselines,\nwhile maintaining or improving accuracy, and surpassing RAG-based baselines by\nup to 61% in precision and 80% in recall, at a marginally higher cost.", + "authors": "Yiming Lin, Madelon Hulsebos, Ruiying Ma, Shreya Shankar, Sepanta Zeigham, Aditya G. Parameswaran, Eugene Wu", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "The vast majority\u2014over 80%\u2014of data today exists in unstructured formats such as text, PDF, video, and audio, and is continuing to grow at the rate of over 50% annually [2, 8]. In fact, an overwhelming 95% of businesses have recognized management of this unstruc- tured data as a significant problem [1]. Consider unstructured text documents, such as Word or PDF documents, with a rich treasure trove of untapped information. Due to the inherently free-form na- ture of natural language, coupled with visual formatting, real-world unstructured documents pose a particularly difficult challenge for data management. Is there any hope for successfully querying or extracting value from unstructured documents? Example 1.1 (Civic Agenda Report: Vanilla LLMs and RAG). Our journalism collaborators at Big Local News at Stanford have col- lected large tranches of civic meeting agenda PDF reports for var- ious US counties as part of their agenda watch project, as in Fig- ure 1-a, and want to analyze these reports. One such query could be to count the number of construction projects of a certain type, across meetings. To do so, one could use Large Language Mod- els (LLMs). 
However, even advanced LLMs, such as GPT-4, struggle with queries issued on such reports (e.g., Q1 in Figure 1-d), especially when these queries involve aggregations and/or multiple filters on long documents.
[Figure 1: Civic Agenda Document and Semantic Structures: (a) Civic Project Agenda Report; (b) Semantic Structure, with portions A1, A2, A3, subportions B1, B2, and paragraph C1; (c) Semantic Hierarchical Tree rooted at R; (d) Natural Language Query Q1, \u201cWhat is the number of Capital Improvement projects that start after 2022\u201d, and the corresponding SQL query Q2: SELECT COUNT(Projects.name) FROM Projects WHERE Projects.type = \u2018Capital Improvement\u2019 AND Projects.begin_time > \u20182022\u2019.]
The error-prone nature of LLMs is not surprising given that LLMs can\u2019t effectively handle large contexts [19, 45], or complex data processing tasks [48, 49]. The costs of processing all documents in a collection via LLMs (e.g., through OpenAI APIs) are also high. Another strategy, Retrieval-Augmented Generation (RAG) [39, 41], identifies one or more text segments within each document that are most relevant (e.g., via embedding distance) to the given query, incorporating these segments into prompts, reducing the cost. However, RAG struggles to identify the appropriate text segments, even for simple queries. Suppose we want to identify the capital improvement projects. RAG retrieves the segments that most closely match \"capital improvement projects\" within the document, such as the red box in Figure 1-a. However, it fails to capture over 20 additional projects in subsequent pages, such as the \"PCH Median Improvement Project\" (B2 in Figure 1-b) belonging to \"Capital Improvement Projects\" (A1). Overall, both the vanilla LLM approach and RAG are unsuitable: both have low accuracy, while the LLM approach additionally has high cost. Leveraging Semantic Structure Helps. The reason RAG didn\u2019t perform well above was because the text segment provided to the LLM did not leverage the semantic structure underlying the document. Instead, if we are aware of this semantic structure, we can identify the capital improvement projects (A1 in Figure 1-b) by checking all of the subportions (e.g., B1, B2) under it, where each one corresponds to the description of such a project, and provide this to an LLM to interpret. By doing so, we provide all of the pertinent information to an LLM, unlike RAG, while also not overwhelming it with too much information. Indeed, when we leverage semantic structure for a group of sample queries on GPT-4-32k, as in our system ZenDB, described next, we surpass the vanilla LLM and RAG approaches by 25% and 48% in accuracy, while only having 7% of the cost of LLMs, as detailed in Figure 2.
[Figure 2: Understanding the differences between ZenDB, LLMs and RAG along two axes, cheap and accurate: ZenDB achieves 93%/97% lower cost and 25%/48% higher accuracy.]
[Figure 3: Templatized Documents: (a) Scientific Papers, (b) Notice of Violations, (c) Employee Job Descriptions.]
Templatized Documents Provide Semantic Structure. Given that semantic structure is helpful, how do we extract this semantic structure within unstructured documents? It turns out that, while unstructured documents vary considerably in format, many documents that are part of collections are created using templates, which we call templatized documents.
Templatized documents are observed across domains, including civic agenda reports, scientific papers, employee job descriptions, and notices of violations, as listed in Figure 1 and Figure 3. For instance, two scientific papers from the same venue use similar templates, just as civic documents for the same purpose from the same local county often adhere to a uniform template. Templatized documents often exhibit consistent visual patterns in headers (e.g., font size and type) when describing content corresponding to the same semantic \u201clevel\u201d (e.g., section headers in a paper often follow the same visual pattern). We highlight the \u201ctemplates\u201d using blue boxes in Figure 3. Thus, templatized documents often have a discernible hierarchical structure that reflects different semantic levels within the document. For example, a 9-page complex civic agenda report (such as Figure 1-a) can be broken down into portions (e.g., A1, A2, A3 in Figure 1-b) and further into subportions (e.g., B2), indicating a possible semantic hierarchy, such as Figure 1-c, across the documents following the same template. Leveraging Semantic Structure: Challenges. Unfortunately, the semantic structure of the templates isn\u2019t known\u2014and neither do we expect these templates to be rigidly adhered to, nor do we expect there to just be one template across the collection of documents from a specific domain. Uncovering possible common semantic structures across documents is a challenge. In addition, to support queries over unstructured data where there isn\u2019t a predefined schema, it\u2019s not entirely clear what the data model or query interface should look like. Furthermore, using LLMs for query evaluation incurs high monetary costs and latencies; it\u2019s not obvious how we can leverage the semantic structures across documents to enable accurate query execution with low cost and latency. Addressing Challenges in ZenDB. We introduce ZenDB, a document analytics system that supports ad-hoc advanced SQL queries on templatized document collections, and address the aforementioned challenges. First, we introduce the notion of Semantic Hierarchical Trees (SHTs) that represent the semantic structure for a given document, and effectively act as an index to retrieve only portions of the document that are pertinent to a given query. We build SHTs across documents by leveraging the uniform visual patterns in the document templates. We cluster the visual patterns found across documents to extract and detect various template instantiations, coupled with minimal LLM calls for this purpose. We show that if documents obey a property we term well-formattedness, then our procedure correctly recovers their semantic structure. Second, we introduce an extension to SQL to query unstructured documents (e.g., Q1 in Figure 1 could be expressed as a SQL query Q2). Users can easily impose a schema on a collection of documents by simply listing a table name as well as a description for the entities in the table, without listing the attributes, which can then be lazily defined and populated in response to queries. Finally, we introduce a novel tree search algorithm that leverages SHTs to minimize cost and latency while answering queries without compromising on quality. Specifically, we propose a summarization technique to create summary sketches for each node within the tree.
ZenDB can navigate through the tree, identifying the appropriate node to answer a given query by examining these sketches, akin to how a person might use a table of contents to find the right chapter for a specific task. Other Related Work. Supporting queries on non-relational data isn\u2019t new. For unstructured data, the field of Information Retrieval (IR) [37, 53] investigates the retrieval of documents via keyword search queries, but doesn\u2019t consider advanced analytical queries. For semi-structured data [15, 16, 47], query languages like XQuery or XPath, as well as extensions to relational databases for querying XML and JSON, help query hierarchically organized data, as in our SHTs, but there, the hierarchy is explicit rather than implicit as in our setting. Recent efforts have sought to bridge the gap between structured queries, like SQL, and unstructured documents. One line of work [58, 61] has explored the upfront transformation of text documents into tables. Doing this ETL process with Large Language Models (LLMs) like GPT-4 on entire documents is expensive and error-prone relative to approaches that focus the LLM\u2019s attention on specific semantic portions, as we saw above.
[Figure 4: User Workflow with ZenDB: unstructured documents flow through Document Ingestion (Section 3), Schema & Query Specification (Section 4), Data Population (Section 5), and Query Execution (Section 6) to produce query results.]
Others [24, 55, 56] have explored writing SQL queries directly on text data, as part of multi-modal databases. Most work there boils down to applying LLMs to the entire document, and only works well on simple, small documents. However, using these methods on the complex, large documents we saw above leads to high costs and reduced accuracy. None of the approaches above have explored the use of semantic structure to reduce cost and improve accuracy when querying documents. We cover this and other related work in Section 8. We make the following contributions in this paper, as part of building ZenDB, our document analytics system. \u2022 We identify that we can leverage templates within document collections to support ad-hoc analytical queries. \u2022 We introduce the notion of Semantic Hierarchical Trees (SHTs), each of which represents a concrete instantiation of a template for a specific document, as well as novel methods to efficiently extract SHTs from an array of templatized documents. \u2022 We develop a simple extension to SQL to declare a schema, specify attributes on-demand, and perform analytical queries. \u2022 We design a query engine that leverages SHTs, facilitating query execution in a cost-effective, efficient, and accurate manner. \u2022 We implement all of these techniques within ZenDB and evaluate its performance on three real-world datasets, demonstrating substantial benefits over other techniques.", "main_content": "CREATE TABLE Projects WITH DESCRIPTION \"The projects table contains the description for a set of civic agenda projects.\" ALTER TABLE Projects ADD name TEXT WITH DESCRIPTION \"Name of Project\", ADD type TEXT WITH DESCRIPTION \"Type of Project\", ADD begin_time DATE WITH DESCRIPTION \"Begin time of Project\"; [Figure 5: Creating the Projects Table and Adding Attributes.]
ZenDB can reuse a previously identified semantic structure in the form of a template, synthesized from a concrete SHT, or extract a new one (when there are multiple templates in a collection), all without using LLMs (Section 3.3). (2) Schema/Query Specification and Table Population. Given one SHT per document, ZenDB then enables users to specify a schema across documents in a collection, followed by issuing queries on that schema. Schema definition happens via an extension of standard SQL DDL: users (typically database administrators) provide a name and description for each table\u2014which we call document tables\u2014along with names, types, and descriptions for any attributes; the attributes can be lazily added at any point after the table is created (Section 4.1). For example, Figure 5 shows the query used to create a \"Projects\" table along with attributes (e.g., name). Subsequently, other users can write queries that reference such tables and attributes (e.g., Q2 in Figure 1), as in standard SQL (Section 4.2); these queries could also be generated by applications (including form-based or GUI-based applications), or by translating natural language queries into SQL. We still concretize the query in SQL to provide well-defined semantics. While attributes are added lazily and attribute values are computed or materialized in response to queries, we proactively identify mappings between tuples and documents during schema specification (Section 5). Specifically, we identify the SHT node that represents the portion of the document that captures all of the relevant tuples in a given user-specified table, as well as the mapping between tuples and individual SHT nodes, if they exist, using a combination of minimal LLM invocations and automated rules. These are then stored in our data model as hidden system-defined attributes, such as the span of the text that corresponds to the given tuple, leveraging nodes in the SHTs built earlier. These system-defined attributes allow LLMs to extract the user-defined attribute values per tuple as needed, reducing costs while also leveraging the shared semantic structure across documents. (3) Query Execution. Finally, ZenDB executes the user-specified SQL queries using the pre-constructed SHTs per document, while minimizing cost and latency, and maximizing accuracy. Unlike traditional relational databases, where I/O and sometimes computation are often the bottleneck, here, the LLM calls invoked by ZenDB become both a cost and latency bottleneck. Therefore, ZenDB aims to minimize such calls, while still extracting attribute values as needed to answer queries, by using a combination of predicate pushdown and projection pull-up. We additionally develop a cost model for ZenDB, focusing on monetary cost (Section 6.1). Our cost model design is flexible and can be adapted to optimize for latency instead, e.g., if we instead use an open-source LLM on-prem. Furthermore, we design novel physical implementations that leverage SHTs (Section 6.2). In particular, we maintain a sketch for each node in each SHT, and leverage this sketch as part of a tree search to identify the appropriate text span to evaluate a given query, akin to how a person would use a table of contents to find the right chapter; we sketch this idea below.
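To make the tree-search idea concrete, here is a minimal sketch (not ZenDB's actual implementation): it assumes an SHT node type carrying a summary sketch, and a hypothetical llm callable that, given a query and candidate child summaries, returns the index of the most relevant child (or -1 if none apply).

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    summary: str                  # summary sketch of the node's text span
    text_span: str                # full text covered by the node
    children: List["Node"] = field(default_factory=list)

def sketch_search(root: Node, query: str, llm: Callable[[str], int]) -> str:
    # Descend the SHT like a table of contents, guided by summary sketches.
    node = root
    while node.children:
        prompt = f"Query: {query}\n" + "\n".join(
            f"[{i}] {c.summary}" for i, c in enumerate(node.children))
        choice = llm(prompt)      # index of the most relevant child, or -1
        if choice < 0:            # no child looks relevant: answer from here
            break
        node = node.children[choice]
    return node.text_span         # the span handed to the LLM for evaluation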
Finally, we maintain provenance (i.e., the specific document text span) for query answers, ensuring that users can verify the source of the information, and ensuring trust in the system outputs.
[Figure 6: SHT Construction in Civic Agenda Report: (a) Phrase Clustering Based on Visual Patterns p1, ..., p5; (b) the resulting SHT, with phrase indexes in parentheses, e.g., R (1), A1 (10), B1 (11), B2 (22), A2 (63), E1 (64), E2 (75).]
3 SEMANTIC HIERARCHICAL TREE In this section, we describe our process for recovering structure from documents in the form of Semantic Hierarchical Trees (SHTs), which then act as an index for subsequent querying. We start by formalizing the notion of SHTs and templates, and then describe how to extract an SHT for a single document, followed by extracting them across collections by leveraging shared templates. 3.1 Preliminaries We focus on rich text documents, such as PDF and Word documents, that include visual formatting information (e.g., multiple font types and sizes), as shown in Figure 3. Documents, Words, and Phrases. Consider a set of documents D = {D_1, D_2, ..., D_l}. For each document D in D, which may be a PDF or Word document, we often instead operate on a plain text serialized representation, extracted as a preprocessing step. To generate this representation for a document D, we use an extraction tool such as pdfplumber [13], which generates a sequence of words W_D = [w_1, ..., w_m], each with formatting and location features (e.g., font name/size/bounding boxes). For simplicity, we ignore images, but they can be treated as a special word. For any two consecutive words w_i and w_{i+1}, if they have the same formatting features\u2014font size, name (e.g., Times New Roman), and type (e.g., bold or underline)\u2014we group them into a phrase s. We let S_D = [s_1, ..., s_n] be the sequence of phrases corresponding to D\u2014we often operate on S_D instead of the document directly.
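As a rough sketch of this preprocessing step (an assumption-laden illustration, not ZenDB's code): pdfplumber's extract_words can attach fontname and size to each word via extra_attrs, and we treat a change in either as a phrase boundary; features like bold are often encoded in the font name itself (e.g., a "-Bold" suffix).

import pdfplumber

def extract_phrases(path: str):
    # Group consecutive words with identical formatting into phrases (S_D).
    phrases, cur_words, cur_fmt = [], [], None
    with pdfplumber.open(path) as pdf:
        for page in pdf.pages:
            for w in page.extract_words(extra_attrs=["fontname", "size"]):
                fmt = (w["fontname"], round(w["size"], 1))
                if cur_words and fmt != cur_fmt:
                    phrases.append((" ".join(cur_words), cur_fmt))
                    cur_words = []
                cur_fmt = fmt
                cur_words.append(w["text"])
    if cur_words:
        phrases.append((" ".join(cur_words), cur_fmt))
    return phrases  # list of (phrase text, formatting features) pairs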
Visual Patterns. For each phrase s ∈ S_D, we further define a visual pattern, p(s), as a vector of visual formatting features; we currently use p(s) = [size, name, type, all_cap, num_st, alpha_st, center], but other features may be included. Here, the first three features correspond to the font, as in the word-level features we had previously, and the remaining four are phrase-level features: all_cap is a Boolean value that denotes whether the phrase s is capitalized; num_st and alpha_st indicate whether the phrase starts with a number (e.g., 1) or a letter (e.g., A); and center indicates whether the phrase is in the center of a line. Candidate SHTs. We are now in a position to define SHTs. We define a candidate SHT for a document D to be a single-rooted, ordered, fully connected, directed tree T = (V, E), where each v ∈ V corresponds to a single distinct phrase s_i ∈ S_D, denoted ind(v) = i, the phrase index for v, satisfying (1) ind(v) < ind(v') for any child v' of v, and (2) ind(v) < ind(v') for any right sibling v' of v. These two properties together imply that a pre-order traversal of T visits nodes in increasing phrase index order. A candidate SHT for Figure 1a is shown in Figure 6b. Node A1 represents the phrase (and section header) \u201cCapital Improvement and Disaster Recovery Projects (Design)\u201d, while B2 represents the phrase (and subsection header) \u201cPCH Median Improvement Project\u201d. The phrase index for each node is shown in parentheses, e.g., ind(A1) = 10; i.e., A1 corresponds to s_10; ignore the p_i (in red) for now. The SHT obeys the two conditions listed, e.g., A1 (with phrase index 10) has children (11 and 22) and a sibling (63) with larger phrase indexes. Note, however, that not all phrases in S_D are found in the SHT; this is by design: the SHT simply represents the phrases corresponding to the headers of the document, while those that correspond to the content are omitted. For example, Figure 6b omits phrases s_2, ..., s_9. However, in certain cases, it may be convenient to refer to headers and content together. For this, we define a text span, ts, to be a sequence of phrases s_i, ..., s_{i+k} ∈ S_D, or equivalently [i, i+k]. We define next(v) for a given node v to be the phrase index corresponding to its sibling to the immediate right, if available, or, if not, the sibling to the immediate right of the closest ancestor that has one. If none of the ancestors of v have a right sibling, next(v) = n, where n is the total number of phrases in S_D.
To illustrate, next(A1) = next(B2) = 63 (i.e., A2), while next(A2) = next(R) = 100, assuming s_100 is the final phrase in our document. A given node v ∈ V has a text span ts(v) = [ind(v), next(v) − 1], i.e., v \u201ccovers\u201d all of the phrases until the next node with phrase index next(v). Thus, ts(R) is [1, 100], while ts(B2) is [22, 62]. That is, B2 \u201ccovers\u201d both the header, s_22, as well as the content s_23, ..., s_62, until the next header, A2. In the following, we refer interchangeably to a node v, its header phrase s_ind(v) (i.e., the header corresponding to v), or its text span ts(v) (i.e., the header and content contained within v). We finally introduce the notion of the granularity or height of a node v, which is simply the depth of v in the SHT; in our example, the granularity of R is 1, and that of A1 is 2. 3.2 SHT Construction on a Single Document Given a document D with phrases S_D, there are exponentially many candidate SHTs; our goal is to identify the true SHT that correctly reflects the semantic structure of the document. To do so, our procedure, oracle_gen(D), first identifies which phrases are header phrases (and therefore correspond to SHT nodes). We then assemble these phrases into a tree, ensuring that it is a candidate SHT. Header Phrase Identification. To identify if a phrase s ∈ S_D is a header phrase, we make use of visual patterns p(s). We cluster the phrases in S_D based on their visual patterns. For our running example, the clusters that emerge are shown in Figure 6a, each labeled with its visual pattern (in red). Here, the majority of the phrases end up in the cluster with pattern p5\u2014this corresponds to the content phrases in the document (e.g., C1 in Figure 1-a is a paragraph). To remove clusters whose phrases do not correspond to header phrases, we use LLMs as an oracle. We randomly sample min(|C|, k) phrases in each cluster C (k is a predefined threshold). For each sampled phrase s ∈ C, we construct the LLM prompt \u201cIs the phrase [s] a header in the document?\u201d. If over half of the sampled phrases in C are non-headers, then C is pruned (e.g., the cluster containing C1 is dropped, since C1 is a paragraph). To verify that GPT-4 is effective at disambiguating headers from non-headers, we carefully examined over 200 documents from 16 datasets, covering six diverse domains. In our testing, when k = 10, GPT-4 effectively removes non-header clusters on 97% of the documents, at a total cost of $0.37. A minimal sketch of this pruning step appears below.
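This sketch makes simplifying assumptions: phrases are the (text, visual pattern) pairs from the earlier sketch, and is_header is a hypothetical boolean LLM call wrapping the prompt above.

import random
from collections import defaultdict

def prune_non_header_clusters(phrases, is_header, k=10):
    # Cluster phrases by visual pattern, then drop any cluster in which
    # over half of <= k sampled phrases are judged to be non-headers.
    clusters = defaultdict(list)
    for text, pattern in phrases:
        clusters[pattern].append(text)
    header_clusters = {}
    for pattern, texts in clusters.items():
        sample = random.sample(texts, min(len(texts), k))
        header_votes = sum(1 for t in sample if is_header(t))
        if 2 * header_votes >= len(sample):   # keep unless mostly non-headers
            header_clusters[pattern] = texts
    return header_clusters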
Still, since this cost is non-zero, we would want to minimize it when working on a large collection of documents; as we illustrate in the next section, we only invoke LLMs for a small subset of documents, each corresponding to a different template. Tree Construction. Given the header phrases across the remaining clusters, we assemble the corresponding nodes into a tree. We proceed top-down, operating on one cluster at a time, adding the entire cluster to the partially constructed SHT. At each step, we pick the cluster C that contains the phrase with the lowest index. For each phrase s_i in this cluster C, we create a corresponding node v_i and add it to the partially constructed SHT, in increasing phrase index order. For each such node v_i, we examine the text spans of all existing nodes in the partially constructed SHT, and pick its parent to be the ancestor v_j such that ind(v_i) ∈ ts(v_j) and there is no other v_k > v_j such that ind(v_i) ∈ ts(v_k). This condition ensures that v_i is added under the most specific node v_j that can accommodate it. Once we\u2019ve identified the appropriate parents for each node in the cluster, we then add all of these nodes together. The root (usually corresponding to s_1) merits special treatment: if there is no cluster that contains s_1, we create a node corresponding to s_1; else we start with the cluster that contains s_1. Usually this cluster just has s_1; if it contains other phrases, we create an artificial root node corresponding to an empty phrase s_0, and deem it to be the root. We then process the cluster that contains s_1 along with other phrases. Returning to our example, the cluster corresponding to visual pattern p1 with phrase s_1 is processed first, allowing R to be added to the tree. Then, the cluster corresponding to p2 is processed next, as it has the lowest phrase index number, 10, with A1 and A2 added to the tree together, both with R as parent. Then, the cluster corresponding to p3 is processed, with B1 and B2 being added as children of A1, and so on. A minimal sketch of this procedure appears below.
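This compressed sketch simplifies the root handling described above (it always creates an artificial root with index 0), and exploits the fact that, at insertion time, the most specific existing node covering index i is the existing node with the largest phrase index smaller than i.

def build_sht(header_clusters):
    # header_clusters: one list of phrase indexes per surviving cluster.
    root = {"ind": 0, "children": []}
    nodes = [root]
    for cluster in sorted(header_clusters, key=min):  # lowest index first
        existing = list(nodes)   # parents are picked before the cluster's
        added = []               # own nodes are added (they are siblings)
        for ind in sorted(cluster):
            node = {"ind": ind, "children": []}
            parent = max((n for n in existing if n["ind"] < ind),
                         key=lambda n: n["ind"])
            parent["children"].append(node)
            added.append(node)
        nodes.extend(added)
    return root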
Correctness for Well-Formatted SHTs. Next, we show that if the true SHT for a document has a property that we call well-formattedness, then oracle_gen(D) correctly outputs the true SHT. Given an SHT T, the visual prefix vispre(v) for a node v is defined to be the sequence of visual patterns from the root to v. In our example, vispre(B1) = p1p2. We extend the definition to a set in the natural way, e.g., vispre({B2, A1}) = {p1, p1p2}. Let pset(p) be a function that accepts a visual pattern and returns all the nodes that obey that pattern. For example, pset(p2) = {A1, A2}. Then, an SHT T = (V, E) is said to be well-formatted if (1) for any two siblings v_i, v_j, p(v_i) = p(v_j); and (2) for all visual patterns p, vispre(pset(p)) is unique. The first condition mandates that sibling nodes, such as B1 and B2, must share the same visual pattern. However, it does not require that all nodes at the same depth, like B2 and E1, must have identical visual patterns. In our agenda watch dataset, subsection headers within a section often have similar formatting, but this need not hold across sections, i.e., different sections may use different formatting. The second condition states that nodes sharing the same visual pattern must have identical visual prefixes. For example, B1 and B2 have the visual prefix p1p2. Thus, the visual pattern signifies a certain \u201csemantic level\u201d within the SHT, following a consistent path to the root. Theorem 3.1. If the true SHT for a document D is well-formatted, and if an LLM can correctly identify non-headers, then oracle_gen(D) outputs the true SHT. Proof. Let T and GT be the SHT returned by oracle_gen and the ground-truth SHT, respectively. We prove T = GT when GT is a well-formatted SHT, by induction. Let v_i be the i-th node added by oracle_gen, and N_{i-1} = {v_1, v_2, ..., v_{i-1}} be the first (i-1) nodes added by oracle_gen. Let T_{i-1} and GT_{i-1} be the subgraphs of T and GT induced by the node set N_{i-1}. By induction, we assume that oracle_gen returns the correct SHT, i.e., T_{i-1} = GT_{i-1}, after adding the first (i-1) nodes, and we further prove that, upon adding v_i, T_i = GT_i. Let v_j and v'_j be the parent node of v_i in T_i and GT_i, respectively. We prove that v_j = v'_j by considering two cases: one where there exists a node v_k ∈ T_{i-1} (and v_k ∈ GT_{i-1}, since T_{i-1} = GT_{i-1}) that shares the same visual pattern as v_i, i.e., p(v_k) = p(v_i), and one where there does not. Let g(v) be the granularity (i.e., height of node) of v in T_i. Let path(v_i) be the sequence of nodes from the root to v_i in T_i.
Case 1: Assume ∃ v_k ∈ T_{i-1} and v_k ∈ GT_{i-1}, s.t. p(v_k) = p(v_i). We have g(v_j) = g(v_i) + 1, since v_j is the parent node of v_i in T_i. By definition, ∀ v ∈ path(v_i), v ≠ v_i, we have ind(v) < ind(v_i) and ind(v_i) ∈ ts(v). We call each v ∈ path(v_i), v ≠ v_i, a candidate parent node of v_i, since adding an edge from v to v_i will make T_i a valid candidate SHT. Thus v'_j ∈ path(v_i), since GT must at least be a valid SHT and there is no other v_m > v_j such that ind(v_i) ∈ ts(v_m). If g(v'_j) ≠ g(v_j), there exists at least one node v_l ∈ path(v_i), where v_l is a child node of v'_j, s.t. p(v_l) = p(v_i), since GT_i is a well-formatted SHT and the sibling nodes v_l and v_i, belonging to the same parent v'_j, must have the same visual pattern. By p(v_k) = p(v_i), we have p(v_l) = p(v_k), and thus g(v_l) = g(v_k), since vispre(v_l) = vispre(v_k). But g(v_l) = g(v_k) implies g(v'_j) = g(v_j), which contradicts g(v'_j) ≠ g(v_j). By contradiction, we have g(v'_j) = g(v_j), and further v_j = v'_j, since both v_j and v'_j are in path(v_i). Case 2: Assume ∄ v_k ∈ T_{i-1} (equivalently, v_k ∈ GT_{i-1}) s.t. p(v_k) = p(v_i).
Similarly, we show v_j = v'_j by contradiction in this case. Assuming v_j ≠ v'_j, there exists at least one node v_l ∈ path(v_i), v_l ≠ v_i, where v_l is a child of v'_j, s.t. p(v_l) = p(v_i), since v'_j ∈ path(v_i) and GT_i is a well-formatted SHT. However, this contradicts the assumption that no node v_k satisfies p(v_k) = p(v_i). By contradiction, we have v_j = v'_j, which concludes the proof. □ 3.3 SHT Construction across Documents Given a set of documents D = {D_1, ..., D_l}, applying oracle_gen(D_i) to each D_i can be costly when l is large. Here, we leverage the fact that, in addition to being well-formatted, the documents share common templates. We define the notion of a template below. We process each document D_i in turn, attempting to match it against one of the existing templates tp ∈ TP via a function template_gen(tp, D_i); if a match is successful, an SHT for D_i is returned\u2014without any LLM calls. Otherwise, we call oracle_gen(D_i), and the template tp corresponding to the returned SHT is added to TP. If there are multiple successful matches in TP, we return the largest SHT of them all; the rationale here is that we want to capture as much of the header information as possible as part of the SHT. Template. We now define the notion of a template associated with an SHT. The template for an SHT T, tp = {g : {p}}, is a sorted dictionary that captures the mapping between the granularities g of nodes and the set {p} of visual patterns found at that granularity. This dictionary is sorted by granularity in increasing order.
[Figure 7: SHT construction by Pattern Matching: (a) SHT1 and its template tp(SHT1) = {1: {p1}, 2: {p2}, 3: {p3, p4}}; (b) SHT2; (c) SHT3; (d) SHT4. The documents represented by (b) and (c) are matches to tp(SHT1), but not (d).]
tp(SHT1) is the template of SHT1, shown in Figure 7-a. We let tp.g and tp.p be the granularities and visual patterns in tp. For SHT1 in Figure 7-a, tp.g = {1, 2, 3} and tp.p = {p1, p2, p3, p4}. Let tp.g(p) be the granularity of a visual pattern p in tp, e.g., tp.g(p1) = 1 for SHT1. (This value is unique by construction from Section 3.2.) If p ∉ tp.p, tp.g(p) = -1.
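Concretely, the template for SHT1 might be represented as follows (a sketch, not ZenDB's internal representation):

tp_sht1 = {1: {"p1"}, 2: {"p2"}, 3: {"p3", "p4"}}  # granularity -> patterns

def granularity_of(tp: dict, pattern: str) -> int:
    # tp.g(p): the granularity of pattern p in tp, or -1 if p is absent.
    for g, patterns in sorted(tp.items()):
        if pattern in patterns:
            return g
    return -1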
Template Matching and Generation. We say a document D matches a template tp if the visual patterns contained amongst the phrases S_D cover each granularity 1...i, for some i, which is a prefix of tp. For instance, document D_1, with true SHT SHT1, has the corresponding template tp(SHT1) in Figure 7-a, and document D_2 has the true SHT SHT2 in Figure 7-b. Since D_2 includes patterns {p1, p2, p3}, it covers every granularity of the template of D_1, and therefore matches the template. Additionally, document D_3, with true SHT SHT3 in Figure 7-c, which includes patterns {p1, p2}, also matches the template, since it covers a prefix of the granularities in the template (namely 1 and 2), even though it lacks patterns {p3, p4}. On the other hand, document D_4, with true SHT SHT4 in Figure 7-d, does not contain a match for p1, thereby not meeting the prefix constraint, and not being a match for the template. Our rationale for admitting prefix matches is the observation that as the granularity of a header becomes more fine-grained, its visual pattern tends to be more varied. For example, for two scientific papers obeying the same template, the visual patterns of sections remain consistent, but within each section the visual patterns used may vary depending on individual preferences. Note here that in our implementation, we allow for any non-zero prefix for a match; for more constrained document collections, a user may set a prefix threshold, e.g., at least three levels of the template must be covered. Armed with templates and matches to a template, we can now describe our template_gen(tp, D) procedure, listed in Algorithm 1. We proceed in two phases: we first identify all of the phrases s ∈ S_D that match those in tp.p, and add these phrases as nodes to V for our yet-to-be-constructed SHT (Lines 3-5). Given these phrases, we check if there is a match for the template tp, where a match is defined as above to be a prefix of the template. If no match is found, an empty result is returned (Lines 6-7); else we assemble the nodes in V into an SHT, using a similar tree construction procedure as in the previous section, operating on the phrases found in the first step, clustered based on visual pattern (Lines 8-10).
Algorithm 1: template_gen(tp, D)
1: SHT_D = (V, E), V = ∅, E = ∅
2: G = {}
3: for s_i ∈ S_D do
4:   if p(s_i) ∈ tp.p then
5:     V = V ∪ {s_i}; G = G ∪ {tp.g(p(s_i))}
6: if G = ∅ or ∃ i ∈ G, i > 1, (i - 1) ∉ G then
7:   return {}
8: for v_i ∈ V, v_j ∈ V do
9:   if ind(v_j) ∈ ts(v_i) and ∄ v_k ∈ V, ind(v_k) > ind(v_i), s.t. ind(v_j) ∈ ts(v_k) then
10:    E = E ∪ {(v_i, v_j)}
11: return SHT_D
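A Python rendering of Algorithm 1, reusing build_sht from the earlier sketch (again a sketch: phrases are (index, pattern) pairs, and a cluster here is the set of matched phrase indexes sharing a visual pattern):

def template_gen(tp, phrases):
    pattern_g = {p: g for g, ps in tp.items() for p in ps}
    matched = [(i, p) for i, p in phrases if p in pattern_g]      # Lines 3-5
    covered = {pattern_g[p] for _, p in matched}
    if not covered or any(g > 1 and g - 1 not in covered for g in covered):
        return None                                               # Lines 6-7
    clusters = {}
    for i, p in matched:                                          # Lines 8-10:
        clusters.setdefault(p, []).append(i)                      # cluster by pattern,
    return build_sht(clusters.values())                           # then assemble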
4 DATA MODEL AND QUERY LANGUAGE In the previous section, we described how we can extract SHTs for each document in a collection as part of document ingestion. Here, we define the data model used by ZenDB to represent the SHTs as well as other system-specific information, along with user-defined tables that we call DTables, short for Document Tables. 4.1 Data Model Definition In addition to traditional relational tables that we call base tables, ZenDB supports three new types of tables that respectively (i) represent the SHTs per document collection; (ii) let users specify one or more structured relations over the documents, called DTables, to be used within queries; and (iii) maintain system metadata associated with the user-defined tables. We describe each one in turn. 4.1.1 SHT Table. The SHT table, shown in Figure 8-c, is a system-defined and maintained table that represents the SHTs in a document collection. Each row captures information about an SHT node, and is populated as described subsequently in Section 5. Its main attributes are: \u2022 doc_id, node_id identify the node in a given document. \u2022 name represents the header phrase s corresponding to the node. \u2022 granularity represents the depth of the node in the tree. \u2022 context, summary, size correspond to the entire sequence of phrases in the text span, a short summary of the text span, and the number of tokens in the text span. \u2022 st_page and ed_page list the start and end pages for the text span. \u2022 child_ids and ancestor_ids list the IDs of the children and the entire sequence of ancestors. We note that summary, size, st/ed_page, and ancestor_ids can be derived from the other attributes, but we store them explicitly for convenience. These attributes are all used during query processing.
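A minimal record type mirroring these attributes might look like the following (field names follow Figure 8-c; the concrete types are our assumptions):

from dataclasses import dataclass, field
from typing import List

@dataclass
class SHTRow:
    # One row of the SHT table: one node of one document's SHT.
    doc_id: int
    node_id: str
    name: str                 # header phrase of the node
    granularity: int          # depth of the node in the tree
    context: str              # entire text span covered by the node
    summary: str              # short summary of the text span
    size: int                 # number of tokens in the text span
    st_page: int
    ed_page: int
    child_ids: List[str] = field(default_factory=list)
    ancestor_ids: List[str] = field(default_factory=list)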
4.1.2 User-defined DTables. Users can use SQL to define DTables, with those tables being used in subsequent queries (Figure 5). We use a special keyword DESCRIPTION both to designate that this is not an ordinary table, and to allow natural language to be provided that may be used in LLM prompts. To define such a table, the user can say: CREATE TABLE [name] (...) WITH DESCRIPTION [description] Here, the user provides a natural language description for the table. Attributes may be provided during table creation in parentheses (or omitted), and/or could be added afterwards, via the standard approach to alter schemas: ALTER TABLE [name] ADD [name] [type] WITH DESCRIPTION [description], ... ; Again, natural language descriptions for the attributes are provided when they are added.
[Figure 8: Data Model: User-Defined Tables and System-Defined Tables (* marks system-defined attributes): (a) Projects (partial), with attributes doc_id*, name, type, begin_time, text_span*, node*; (b) Agenda Meeting (partial), with attributes doc_id*, meeting_date, subject, text_span*, node*; (c) SHT Table (partial); (d) Table Catalog (partial), with attributes table_name, table_node, table_description, table_text_span, t_range, doc_id, multi_tuple; (e) Attribute Catalog (partial), with attributes table_name, attr_name, type, attr_description.]
As we will discuss in Section 5, when the user creates a DTable, ZenDB populates it offline with rows that correspond to tuples. Each tuple represents one entity that can be found in a document. User-defined attributes for these tuples are populated with NULL, and are filled in on-demand at query time, as shown in Figure 8a. Here, the Projects DTable contains user-defined attributes name, type, and begin_time. ZenDB also maintains three hidden system-defined attributes per DTable\u2014the document id, the text span used to extract the tuple, and the SHT nodes used in the derivation. These attributes track how each tuple was derived, to provide context when extracting tuple attributes later on, and for debugging and provenance purposes. For instance, B1 corresponds to the \u201cMarie Canyon Green Street\u201d project tuple, and the tuple\u2019s text span may be the same as B1\u2019s or a subset (Figure 8c). The user-defined attributes represent the result of a read operation over each attribute. In addition, every expression implicitly defines additional attributes in this table. For instance, if a query evaluates Projects.name = \u201cCapital Improvement\u201d directly using an LLM call, then the attribute [Projects.name|eq|Capital Improvement] is instantiated and populated with the LLM response. Note that we chose to represent these user-specified DTables as regular tables as opposed to views or materialized views; but they could also be represented as such. 4.1.3 System-Defined Tables. In addition to the SHT table, ZenDB maintains two system-defined tables: Table Catalog and Attribute Catalog, which store metadata related to tables and attributes, respectively (Figure 8d,e).
In addition to names and descriptions, Table Catalog tracks the text span and SHT node(s) used to identify the contents of the table (since a table may correspond to a small portion of the document), used to localize search when extracting tuples\u2014thereby reducing cost during query processing. The attribute t_range refers to the min/max granularities of the nodes used to extract tuples in the table. For example, all Projects tuples extracted so far have granularity 3, thus t_range = [3,3]; this is the setting where tuples correspond to nodes (of some granularity) within the SHT. Finally, to handle the special case where the table is extracted from a leaf node in the SHT, i.e., there are multiple tuples corresponding to a single node that has no finer granularity node below it, we mark this by setting multi_tuple to True. For instance, consider the scenario where users want to create a table called \u201cReferences\u201d and each tuple corresponds to a reference in a published paper. 4.2 Query Language ZenDB currently supports a subset of SQL, corresponding to simple non-nested queries on one or more DTables with optional aggregation, as represented by the following template: SELECT [attr] | agg(attr) FROM [ST]+ WHERE [predicate] GROUP BY [attr] where [..] denotes a list of elements, attr refers to an expression over an attribute, ST refers to one or more DTables, and agg() includes SUM, COUNT, AVG, MAX, MIN (text attributes only support COUNT; date attributes only support COUNT, MAX, and MIN). A predicate has the form attr op operand, where op includes >|≥|<|≤|=|LIKE|IN, and operand is one or more constants. LIKE is used for fuzzy matching, where either string similarity or semantic similarity could be used (in ZenDB, we use Jaccard similarity with a 0.9 threshold by default). We add a restriction that if multiple DTables are listed in the FROM clause, then the WHERE clause must include a predicate specifying that the tuples are equi-joined on doc_id. We add this restriction for now to allow only within-document joins, but we plan to relax this in future work. Figure 9 shows a query where, for each document whose meeting time is before \u201c2023 October\u201d, we count the \u201cCapital Improvement\u201d projects starting after \u201c2022-06-01\u201d; here, we make use of the within-document join across two tables, and a plausible rendering of this query is sketched below. The query semantics are defined as fully populating the user-defined DTables with the LLM results of all attribute reads and expressions, and then executing the SQL query as normal. We follow these semantics because they keep attribute reads and expressions consistent during query evaluation. Specifically, under an oracle LLM that always returns complete and correct responses, the contents of the attribute reads and expressions will always be consistent (e.g., type is [\u2019A\u2019, \u2019B\u2019], and type = \u2019A\u2019 is true). However, modern LLMs are imperfect and sensitive to the input prompt and context formulation, so the extracted attribute values and the expressions over the attributes may be inconsistent (e.g., the extracted type is \u2019B\u2019, but type=\u2019A\u2019 evaluates to true). Better understanding and reconciling these potential inconsistencies is outside the scope of this paper, and is important future work.
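Figure 9 itself is not reproduced here, but given the tables of Figure 8 (and assuming the identifier Agenda_Meeting for the \u201cAgenda Meeting\u201d table, plus ISO-style date literals), the query described above might be rendered as:

SELECT COUNT(Projects.name)
FROM Projects, Agenda_Meeting
WHERE Projects.doc_id = Agenda_Meeting.doc_id
  AND Agenda_Meeting.meeting_date < '2023-10'
  AND Projects.type = 'Capital Improvement'
  AND Projects.begin_time > '2022-06-01'
GROUP BY Projects.doc_id;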
5 TABLE POPULATION We next describe how we populate the system-defined tables and attributes described above. Populating the SHT table is straightforward and therefore omitted; we describe how the summary field is populated in Section 6. Populating Tables Overview. When a user defines a new DTable T, updating Attribute Catalog (Figure 8e) and table_name, table_descr in Table Catalog (Figure 8d) is easy. However, ZenDB must process the document collection D to fill in the system-defined attributes (SDAs) in Table Catalog and T, and populate T with tuples. While ZenDB proactively identifies tuples for T, it doesn\u2019t populate any user-defined attributes until query time. Consider a partitioning of D into groups D_i, where each D_i is a set of documents sharing the same template, as identified during SHT construction. For each D_i, ZenDB picks a document D ∈ D_i and uses an LLM to populate T with tuples and fill in the SDAs. ZenDB then uses a rule-based approach to extract tuples from the remaining documents D' ∈ D_i − {D} without invoking LLMs. We describe single-document and multi-document extraction next. Single Document Extraction. To populate the SDAs of a given DTable T for D, we first identify the node in the SHT for D that captures all of the entities for T; we call this the table node. We then identify nodes that correspond to tuples that lie underneath this node. We use two prompts, table_oracle and tuple_oracle, to identify if a given node corresponds to a table or a tuple, respectively.
table_oracle: If the following text describes [table_name], [table_descr], return true. Otherwise, return false. [node_context]
tuple_oracle: If the following text describes one [tuple_descr] in [table_name], [table_descr], return true. Otherwise, return false. [node_context]
In these prompts, [] is a placeholder. [table_name], [table_descr], and [tuple_descr] correspond to the table name and description, and the tuple description, in Table Catalog (e.g., Figure 8d). [node_context] provides the entire text span corresponding to the node from the SHT table (e.g., in Figure 8c). To identify the table node, ZenDB walks the SHT top-down and submits table_oracle to LLMs for each node. If the responses for all of a node v\u2019s children are true, then we add v as a candidate table node and stop descending into v\u2019s children. Finally, ZenDB fills in the Least Common Ancestor (LCA) of the candidate table nodes as table_node in Table Catalog. Once the table_node is found, ZenDB attempts to populate T with tuples. Once again, ZenDB performs a top-down traversal starting from table_node and evaluates tuple_oracle on each node. If a node v evaluates to true, it means the node corresponds to an entity. We insert a new tuple into T, assign its node and text span to those of v, and stop traversing v\u2019s descendants. If no nodes evaluate to true, it implies a leaf node contains multiple tuples, and so we flag multi_tuple as true in Table Catalog without populating T. We handle this case separately in Section 6. A simplified sketch of this traversal appears below.
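This sketch reuses the Node type from the earlier sketch and assumes a hypothetical tuple_oracle boolean LLM call wrapping the prompt above.

def populate_tuples(table_node, tuple_oracle, emit):
    # Top-down traversal from the table node: the first node on each path
    # accepted by the oracle becomes one tuple, and its subtree is skipped.
    found = False
    stack = list(table_node.children)
    while stack:
        node = stack.pop()
        if tuple_oracle(node.text_span):
            emit(node)               # record the tuple's node and text span
            found = True
        else:
            stack.extend(node.children)
    return found  # False: no tuple nodes found; flag multi_tuple instead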
Multi-document Extraction. Repeated LLM calls for extracting tuple boundaries for every document would be too expensive, so we use a rule-based approach to populate tuples (and other SDAs) for the rest of the documents that share the same template. Consider populating table_node for a document D′ ∈ D_i, D′ ≠ D, where tuples from D were populated as described previously. Let the table_node (i.e., the finest granularity node below which all the tuples are found) and t_range (i.e., the tuple granularity range) of the table T in document D (which has already been populated) be v_tn and [l, r], respectively. For D′, if there exists a node v in its SHT such that v's granularity matches that of v_tn and the textual similarity between v's phrase and that of v_tn is greater than a threshold, then we set v to be the table node for D′; if no such v exists, the root is set to be the table node. Now, to populate tuples, suppose that for the tuple range [l, r] in D, l = r = x. In this easy case, there is a well-defined granularity in the SHT at which tuples are found. Then, we add all nodes at granularity x from D′ as candidate tuples to T (assuming there is a non-zero number of them). If l ≠ r, or if the SHT for D′ has a maximum height < x, then we simply set multi_tuple to true; in this case, the granularity for tuples is ambiguous, and so we treat it similarly to the case where there may be multiple tuples at a given node.

Multi-document Extraction Rules. In more detail, we define the following two rules. For each node v in an SHT, we use v.attr to denote any attribute attr belonging to v in the SHT table (e.g., v.granularity). For the document D′ ∈ D_i, let V_D′ be the set of nodes corresponding to D′ in the SHT table, and D′.table_node be the table_node of T in document D′ in Table Catalog.

Rule 1: ∀ v_i ∈ V_D′, if v_i.granularity = v_tn.granularity and Sim(v_i.name, v_tn.name) > θ, then D′.table_node = v_i. Else, D′.table_node = root.

If the rule is unsatisfied, we set the table_node to be the root node of the SHT corresponding to D′. To populate the nodes corresponding to tuples, we first populate the granularity range of tuples, t_range.

Rule 2: If ∃ v_j ∈ D′.table_node.child_ids such that l ≤ v_j.granularity ≤ r, then D′.t_range = [l, r]. Else, multi_tuple = true.

If the granularities of tuples of T in document D′ are consistent, i.e., l = r in D′.t_range, then we create a set of nodes V, where for each v ∈ V, v.granularity = l and D′.table_node ∈ v.ancestor_ids. V is then converted to a set of tuples whose text_span = v.context and nodes = {v}. These tuples are inserted into the table T. If Rule 2 is violated, we set multi_tuple to true to denote that we do not have a one-to-one mapping between the set of nodes and tuples when populating the table for D′. Note that doing so might introduce false positives instead of false negatives. False positives are permissive since they do not lose the context of where the answers may be present, and in Section 6 we will discuss how to reduce false positives during query execution. When multi_tuple in D is true, we don't populate t_range but set multi_tuple to true for D′.
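A sketch of Rules 1 and 2 in code. The node fields mirror the SHT table attributes used above, while sim() and the threshold theta stand in for the unspecified textual-similarity function and its value.

def apply_rule_1(nodes_d_prime, v_tn, root, sim, theta):
    # Rule 1: reuse D's table node if some node in D' matches its granularity
    # and its header phrase closely enough; otherwise fall back to the root.
    for v in nodes_d_prime:
        if v.granularity == v_tn.granularity and sim(v.name, v_tn.name) > theta:
            return v
    return root

def apply_rule_2(table_node, l, r):
    # Rule 2: keep t_range = [l, r] if some child of the table node falls
    # inside the range; otherwise flag multi_tuple.
    if any(l <= c.granularity <= r for c in table_node.children):
        return (l, r), False   # (t_range, multi_tuple)
    return None, True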
Overall, when the number of distinct templates (i.e., the number of clusters D_i) in D is small, the cost incurred by LLMs to populate the SDAs is minimal, since we only invoke LLMs on a single document for each cluster.

6 QUERY ENGINE

We discuss how ZenDB generates a query plan for a given query Q in Section 6.1, and then describe our physical operator implementations that leverage SHTs in Section 6.2.

SELECT Agenda_Meeting.doc_id, COUNT(Projects.name)
FROM Projects, Agenda_Meeting
WHERE Projects.type = 'Capital Improvement'
  AND Projects.begin_time > '2022-06-01'
  AND Agenda_Meeting.meeting_time < '2023 October'
  AND Projects.doc_id = Agenda_Meeting.doc_id
GROUP BY Agenda_Meeting.doc_id

Figure 9: A Query on Civic Agenda Documents.

Figure 10: A Query Plan for the Query in Figure 9.

6.1 Logical Query Plan

Unlike traditional settings where I/O and computation costs dominate, here, LLM invocations add to monetary cost³ and/or latency, and thus must be minimized where possible. Keeping this guideline in mind, when generating a logical query plan for a given query Q, ZenDB first parses the SQL query into a parse tree of relational operators. Subsequently, predicates are pushed down to reduce intermediate result sizes and thereby downstream LLM invocations, while also taking into account the fact that predicate evaluations that rely on LLMs can be expensive. ZenDB relies on the standard approach from prior work [32] for expensive-predicate reordering that takes into account both selectivity and cost. Specifically, we define a metric f(o) for each selection operator o. Let s_o be the selectivity of o, computed as s_o = |T_s| / |T_c|, where T_c (T_s) is the set of tuples processed by (satisfying) the predicate associated with o.

³This is common for several commercial LLMs, like OpenAI's models, Claude-3 [7], and Google Gemini [3].
Let e_o be the average cost of evaluating a tuple using operator o, which is estimated adaptively during query execution as more tuples are processed by o. The goodness of a selection operator o is then defined as f(o) = e_o × s_o. Intuitively, if an operator o has lower cost e_o and lower selectivity s_o, then o is preferred to be executed early. ZenDB sorts the set of selection operators on the same table in increasing order of f(o). Projections, on the other hand, are pulled up, to avoid having to populate attributes through LLM calls for tuples that may get discarded. Until a selection or projection is encountered that requires a specific attribute of a tuple, that attribute stays uninterpreted, and therefore NULL.

From a join-order standpoint, ZenDB adopts a greedy algorithm to generate a left-deep tree, in an approach akin to standard relational query optimization techniques. Here, instead of optimizing for reducing the sizes of intermediate results, we focus on reducing the LLM invocation cost. Let E(T) be the cost (in terms of dollars or latency) of evaluating all of the predicates in Q corresponding only to table T on all of the tuples of T. ZenDB ranks the tables in Q as T_1, T_2, ... based on their E(T_i) in increasing order, forming a left-deep tree with T_1 as the driving table, followed by T_2 to form T_1 ⋈ T_2, with the remaining tables being selected based on E(·).

Algorithm 2: tree_evaluate(SHT, tuple, e)
 1 CurrentNodes = {tuple.node}
 2 Ans = ∅
 3 T = getTree(SHT, tuple.node)
 4 /* Refine candidate nodes */
 5 while stop_condition(T) = False do
 6   CNs = ∅
 7   for n ∈ CurrentNodes do
 8     if search_oracle(n, e) = True then
 9       CNs = CNs ∪ {n}
10   CurrentNodes = CNs.child_ids
11 if e.type = predicate then
12   /* Evaluating a predicate */
13   for node ∈ CNs do
14     if evaluate_oracle(node.summary, e) = True then
15       Ans = Ans ∪ {node}
16   return Ans
17 if e.type = attribute then
18   /* Extracting attribute values */
19   for node ∈ CNs do
20     Ans = Ans ∪ extract_oracle(node.summary, e)
21   return Ans

When multi_tuple is false, implying that we have pre-populated potential tuples in table T and therefore have a more precise estimate, E(T) = |T| × e is estimated at query time, where |T| is the number of tuples in T and e denotes the average cost of evaluating a single tuple. Initially, E(T) is set to |T| to prioritize evaluating the table with the smaller number of tuples, and e is estimated adaptively as more tuples are processed during query execution. One logical plan for the query in Figure 9 is shown in Figure 10, where Agenda_Meeting has only one tuple compared to the Projects table with more than 40 tuples, and is thus evaluated first. The estimation of E(T) when multi_tuple is true will be described in Section 6.2.

6.2 Physical Query Plan

During query execution, each tuple in the user-defined DTables has attribute values that begin as NULL, as in Figure 8a, but some attributes will get populated through selections or projections. When multi_tuple is true, ZenDB leverages LLMs to create a set of tuples satisfying the corresponding predicates, with the attributes listed in the projections populated, as will be discussed shortly. We now discuss our implementations of the various operators.

Scan. As part of our scan operator, ZenDB executes the query document by document (which explains the restriction of joining on doc_id in Section 4.2). This operator first retrieves the tuples in the first document as a batch, followed by the tuples in the second document; thus only one SHT is processed at a time.

Selections and Projections. Consider a predicate pred or a projection proj on table T; a similar procedure is followed in either case.
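A sketch of the two ordering decisions above. The f(o) = e_o × s_o and E(T) = |T| × e formulas come from the text; the selectivities and costs below are made-up illustrative numbers.

def order_selections(ops):
    # ops: (predicate_name, avg_cost_e, selectivity_s); cheaper and more
    # selective predicates are evaluated first, per f(o) = e_o * s_o
    return sorted(ops, key=lambda o: o[1] * o[2])

def order_tables(tables):
    # tables: (table_name, num_tuples, avg_cost_e); left-deep join order by
    # increasing E(T) = |T| * e (initially e is unknown, so E(T) ~ |T|)
    return sorted(tables, key=lambda t: t[1] * t[2])

sels = [("type = 'Capital Improvement'", 0.002, 0.3),
        ("begin_time > '2022-06-01'", 0.004, 0.7)]
print([name for name, _, _ in order_selections(sels)])
# Agenda_Meeting (1 tuple) is ordered before Projects (40+ tuples), as in Figure 10
print(order_tables([("Projects", 40, 1.0), ("Agenda_Meeting", 1, 1.0)]))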
Say multi_tuple is false, so each row in T corresponds to a single potential tuple. ZenDB then calls the function tree_evaluate(SHT, tuple, e), listed in Algorithm 2, with e set to pred (respectively, proj), to evaluate whether tuple satisfies pred, returning it if so (respectively, to return the value of the attribute in proj). This function implements a tree search on the SHTs, leveraging summaries for each node, as defined in Section 4.1. We next describe how we populate this summary per node in the SHT table (Figure 8c).

Summary Creation. Given the SHT for a document D and the expression e, S(v), the summary for a node v, comprises the following: (1) the phrase(s) corresponding to both v and its ancestors; (2) an extractive summary of the text span of v, which is a set of important sentences determined using standard (non-LLM) NLP tools like NLTK [10]; and (3) the top-1 sentence in the text span of v with the highest semantic similarity (e.g., cosine similarity) to e. Parts (1) and (2) are prepared offline when the SHT is built. Part (3) is added during query processing. Including the phrases (i.e., headers) of ancestors in (1) often helps enhance accuracy by providing additional background for interpreting v's text span. For example, in Figure 1, the summary of node B2 contains the header phrase of its parent, "Capital Improvement Projects (Design)", helping us identify B2 as a candidate node when evaluating a predicate such as type = 'Capital Improvement'.

Tree Search Algorithm. Given a document D with its SHT, a tuple node node, and an expression e (either a predicate or a projection), Algorithm 2 first identifies the sub-tree T of the SHT with node as the root (Line 3), and searches T top-down. For each node n in one layer, it calls search_oracle(n, e) to check whether n's summary contains the right information to evaluate expression e. It then adds all the nodes that pass search_oracle into a candidate set CNs, and recursively searches their children until a stopping condition is met (Lines 5-10). This condition holds when either (1) a leaf node is reached, or (2) the number of tokens in the summary of the node is larger than that of its context (i.e., its text span).

search_oracle(node, e): If the following text contains the information that describes [e.descr], return True; otherwise, return False. The context is [node.summary].
Example: [e.descr] = 'the type of project is Capital Improvement'
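A sketch of per-node summary construction under stated assumptions: the paper names NLTK for the non-LLM extractive step, but its importance scoring and embedding model are unspecified, so sentence length and the embed()/cos() stubs below are our stand-ins.

import nltk  # assumes the 'punkt' sentence tokenizer data is installed

def node_summary(phrase, ancestor_phrases, text_span, expr, embed, cos, k=3):
    sents = nltk.sent_tokenize(text_span)
    # (1) header phrases of the node and its ancestors (built offline)
    headers = ancestor_phrases + [phrase]
    # (2) extractive summary (built offline); importance scoring stubbed
    # here as "k longest sentences"
    extractive = sorted(sents, key=len, reverse=True)[:k]
    # (3) top-1 sentence most similar to the query expression (query time)
    q = embed(expr)
    dynamic = max(sents, key=lambda s: cos(embed(s), q)) if sents else ""
    return {"headers": headers, "extractive": extractive, "dynamic": dynamic}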
For each candidate node n ∈ CNs, if the expression e is a predicate, then a call to an LLM with the prompt evaluate_oracle(node.summary, e) is issued to evaluate whether the summary of the node satisfies the predicate. This step stops early as soon as one node passes evaluate_oracle(n, e) (Lines 11-16). When e is a projected attribute, extract_oracle(node.summary, e) is instead used to extract the value of the projected attribute (Lines 17-21).

evaluate_oracle(context, e): Return True if [e.descr] based on the following context [context]. Otherwise, return False.
Example: [e.descr] = 'type of project is Capital Improvement'

extract_oracle(context, e): Return [e.descr] based on the following context [context].
Example: [e.descr] = 'name of project'

Each selection operator o returns the set of tuples in table T satisfying the predicate associated with o to downstream operators. We handle the case where multi_tuple is true for table T in Section 6.3. Even though executing a tree-search procedure by exposing node summaries to LLMs incurs additional cost, it is minimal in practice, since the height of the tree is often small (and thus the number of iterations is small), and the size of the summary is small and controllable. In Section 7 we show that the benefit introduced by summaries, namely better accuracy at lower cost, dominates this additional cost.

Other Operators. We use nested loops as our join algorithm. As mentioned earlier, even if we consider latency to be the primary optimization criterion, the evaluation of predicates and projections through LLM invocations dominates overall latency, and the number of intermediate tuples to be processed during query execution is often not large. If we instead treat monetary cost as the primary criterion, then joins are effectively free. Thus, a simple nested loop join suffices. Similarly, other operators like aggregation and group-by use simple relational variants.

Provenance of Query Answers. ZenDB maintains provenance in the form of the corresponding text span(s) for the returned query answers, in a manner analogous to classical relational provenance [30]. During query processing, we keep track of the sequence of text spans consulted to populate attributes or verify predicates as an additional metadata attribute, per tuple. These text spans are combined into an array during joins. While we could apply the same idea to aggregations and capture the provenance of contributing tuples in an array, this representation is unwieldy. Determining how best to show all of this provenance to end-users to ensure trust in query answers is an important topic for future work.
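A minimal sketch of this per-tuple provenance bookkeeping; the record and field names below are illustrative, not ZenDB's actual representation.

from dataclasses import dataclass, field

@dataclass
class PTuple:
    values: dict
    provenance: list = field(default_factory=list)  # consulted text spans

def record_read(t, attr, span, value):
    # every attribute read or predicate check logs the span it consulted
    t.values[attr] = value
    t.provenance.append((attr, span))

def join(left, right):
    # joins concatenate the provenance arrays of the two sides
    return PTuple({**left.values, **right.values},
                  left.provenance + right.provenance)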
6.3 Operators for the Multiple Tuple Case

When multi_tuple is true for table T, there are no tuples in T after the population step of Section 5, and the context of table_node may contain multiple tuples. Let pred(T) and proj(T) be the set of predicates and projected attributes associated with table T in a given query Q. In this case, ZenDB searches the text span corresponding to the table_node of T, and creates a set of tuples satisfying pred(T), with proj(T) populated by LLMs. When table_node is a leaf node in its SHT, ZenDB submits the prompt multi_tuple_oracle(table_node, pred(T), proj(T)) to LLMs to extract the projected values for the tuples that satisfy the given predicates pred(T).

multi_tuple_oracle(node, pred(T), proj(T)): The following text describes one or more [tuple_descr]. For each [tuple_descr], if [pred(T)], then return [proj(T)] based on the following context [node.context].
Example: [tuple_descr] = 'paper'
[pred(T)] = 'publication year is greater than 2009 and conference is VLDB'
[proj(T)] = 'name of paper, authors of paper'

As an example, consider a publication document D, where users want to create a table called Reference, with schema {name, year}, whose text span corresponds to the references section of a paper. Assume that in the SHT of D, the references section is a leaf node. In this case, ZenDB will not further parse the references section into individual references, but will call multi_tuple_oracle() directly over the references section to extract the paper name and authors for each reference from VLDB whose publication year is later than 2009.

When table_node is not a leaf node in its SHT, consider a document D′ sharing the same template as D, whose system-defined attributes were populated via D in Section 5. Let stop_granularity be the granularity at which the search in Algorithm 3 stops, and set stop_granularity = D.t_range.l, i.e., the smallest granularity of tuples in D.
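A sketch of assembling the multi_tuple_oracle prompt from the template above; the exact wording ZenDB sends to the LLM may differ.

def multi_tuple_oracle_prompt(tuple_descr, pred, proj, context):
    return (f"The following text describes one or more {tuple_descr}. "
            f"For each {tuple_descr}, if {pred}, then return {proj} "
            f"based on the following context: {context}")

print(multi_tuple_oracle_prompt(
    "paper",
    "publication year is greater than 2009 and conference is VLDB",
    "name of paper, authors of paper",
    "[text span of the references section]"))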
Note that this may introduce false positives (one node might correspond to multiple tuples), but it avoids false negatives (there will not exist nodes that correspond to portions of a tuple).

Algorithm 3: tree_evaluate_multi_tuple
Input: SHT, table_node, pred(T), proj(T), stop_granularity
 1 CurrentNodes = {table_node}
 2 granularity = table_node.granularity
 3 T = getTree(SHT, table_node)
 4 /* Refine candidate nodes */
 5 while granularity ≤ stop_granularity do
 6   CNs = ∅
 7   for n ∈ CurrentNodes do
 8     if search_oracle(n, e) = True then
 9       CNs = CNs ∪ {n}
10   granularity = granularity + 1
11   CurrentNodes = CNs.child_ids
12 Ans = ∅
13 for n ∈ CNs do
14   Ans = Ans ∪ multi_tuple_oracle(n, pred(T), proj(T))
15 return Ans

ZenDB executes tree_evaluate_multi_tuple, shown in Algorithm 3. ZenDB starts searching the subtree of the SHT with table_node as the root (Line 3). We use the same summary-based search as in tree_evaluate (Algorithm 2) to refine the nodes that are related to the given query top-down, layer by layer, and stop the search when the granularity of the current layer exceeds stop_granularity (Lines 5-11). For each node n ∈ CNs that is related to the query and might contain multiple tuples, we call multi_tuple_oracle to extract the corresponding tuples (Lines 12-15).

7 EVALUATION

In this section, we evaluate ZenDB on three real document collections in terms of accuracy, latency, and cost.

7.1 Methodology

7.1.1 Data & Query Sets. We collected three real-world datasets (i.e., document collections): scientific publications, civic agenda reports, and notices of violations; details are displayed in Table 1.

Datasets     | # of Documents | Avg # Pages | Avg # Tokens
Publication  | 100            | 11.5        | 13230
Civic Agenda | 41             | 8.7         | 3185
Notice       | 80             | 7.1         | 3719
Table 1: Characteristics of Datasets.

Scientific Publications. This dataset was collected from a systematic review study that examined research questions in the field of personal data management at UC Irvine [11]. The study analyzed over 500 publications; we randomly selected 100 papers for our dataset. The study explored 20 research questions, with human-labeled answers for all of the publications.

Civic Agenda Reports. This dataset, from our collaborators at Big Local News, comprises 41 civic agenda reports from 2022 to 2024 in the City of Malibu [14]. Each report details a series of government projects, including their status, updates, decisions, and timelines for beginning, ending, and expected construction.

Notice of Violations. This dataset, also from Big Local News, comprises 80 documents describing notices of violations issued by the US Dept. of Transportation from 2023 to 2024 [12]. Each document concerns potential violations detailed by the Hazardous Materials Safety Administration, including detailed violation orders and descriptions, penalty decisions, and proposed compliance orders.

Query Workload. For each dataset, we devise a query workload comprising 9 SQL queries, informed by the needs of our collaborators. These 9 queries are divided into three groups of three, QG1, QG2, and QG3, varying in the number of predicates, from one to three respectively. To generate these queries, we first define tables along with a set of attributes per dataset. Then, we randomly select i attributes to create i predicates for the queries in group QGi, and in the SELECT clause, we additionally include one attribute that is not used in the predicates, as well as doc_id.
When we end up sampling attributes across multiple relations, we list both in the FROM clause, and additionally add an equi-join condition on doc_id. So, overall, our queries include selections, projections, and joins. We omit aggregations from our workload since we use relational implementations of those operators, evaluated after the corresponding attribute values are extracted; thus, the performance on such queries would be similar to that on queries without them.

7.1.2 Strategies Compared and Evaluation Metrics. We compare ZenDB with four baselines: GPT_single, GPT_merge, RAG_seq, and RAG_tree. The first two operate on an entire document at a time. GPT_single uses a separate LLM call per predicate and projection by constructing a corresponding prompt, appending the entire document as context. GPT_merge combines all of the predicates and projections into a single LLM call, alongside the entire document. RAG_seq and RAG_tree refer to RAG-based techniques in two variants implemented by LlamaIndex [9], a state-of-the-art open-source RAG framework: sequential chunking and tree-style chunking, respectively. In RAG_seq, we set the chunk size to 128 tokens and selected the top-k chunks, where k = max(1, 5% × doc_size / 128). That is, we retrieve at least one chunk, but no more than 5% of the document. RAG_tree constructs a hierarchical tree from the document without leveraging semantic structure. This tree is constructed by first chunking the leaves at a fixed granularity. Nodes higher up in the hierarchy are formed by recursively summarizing the nodes below. Subsequently, a path from the root to a leaf is retrieved, instead of just one leaf. GPT-4-32k is used to evaluate the queries for all strategies.

We use precision and recall to measure the quality of query answers. Given a query Q, let T_truth(Q) and T_pre(Q) be the sets of tuples in the ground truth and predicted by an approach, respectively. Precision is measured as |T_truth(Q) ∩ T_pre(Q)| / |T_pre(Q)|, and recall as |T_truth(Q) ∩ T_pre(Q)| / |T_truth(Q)|. We count the number of input and output tokens to measure the cost of LLM invocations [6]. Finally, we measure the latency of query execution by taking three runs and reporting the average.
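A sketch of the two formulas above: the precision/recall definitions, treating query answers as sets of tuples (the example tuples are made up for illustration), and RAG_seq's top-k rule.

def precision_recall(truth, pred):
    inter = truth & pred
    precision = len(inter) / len(pred) if pred else 0.0
    recall = len(inter) / len(truth) if truth else 0.0
    return precision, recall

def rag_seq_k(doc_size_tokens, chunk=128):
    # top-k chunks for RAG_seq: at least one, at most 5% of the document
    return max(1, int(0.05 * doc_size_tokens / chunk))

truth = {("doc1", "project A"), ("doc1", "project B")}
pred = {("doc1", "project A")}
print(precision_recall(truth, pred))  # (1.0, 0.5)
print(rag_seq_k(13230))               # 5, for an average-length publication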
7.2 Experimental Results

Experiment 1: ZenDB vs. GPT-only Strategies. We first compare ZenDB with GPT_single and GPT_merge, both operating on an entire document at a time. Table 2 reports our metrics of interest on the three datasets, while Table 3, Table 4, and Table 5 provide a breakdown per dataset.

           | Precision           | Recall              | Cost ($) / Tokens (×1000)         | Latency (s)
Strategy   | PUB   CIVIC  NOTICE | PUB   CIVIC  NOTICE | PUB        CIVIC      NOTICE      | PUB   CIVIC  NOTICE
GPT_single | 0.74  0.45   0.71   | 0.38  0.45   0.77   | 0.98/16.2  0.33/5.4   0.3/5.3     | 14.6  15.3   6.1
GPT_merge  | 0.63  0.34   0.66   | 0.4   0.45   0.72   | 0.8/13.2   0.2/3.2    0.2/3.7     | 12.9  7.4    5
RAG_seq    | 0.51  0.12   0.36   | 0.38  0.13   0.38   | 0.02/0.4   0.02/0.29  0.01/0.18   | 3.76  5.1    1.3
RAG_tree   | 0.51  0.2    0.2    | 0.38  0.04   0.17   | 0.07/1.2   0.04/0.66  0.02/0.35   | 10    8.9    1.3
ZenDB      | 0.72  0.73   0.73   | 0.53  0.84   0.74   | 0.03/0.56  0.03/0.53  0.02/0.25   | 4.8   7      1.7
Table 2: Average Precision, Recall, Cost / # of Tokens, and Latency of Strategies Per Query, Per Document, on the Publication (PUB), Civic Agenda (CIVIC), and Notice of Violation (NOTICE) Datasets. (GPT-4-32k is Used.)

           | Precision            | Recall               | Cost ($) / Tokens (×1000)               | Latency (s)
Strategy   | QG1  QG2  QG3  Avg   | QG1  QG2  QG3  Avg   | QG1       QG2       QG3       Avg       | QG1  QG2  QG3  Avg
GPT_single | 0.94 0.66 0.62 0.74  | 0.65 0.16 0.32 0.38  | 0.8/13.2  1/16.6    1.1/18.9  0.98/16.2 | 12.8 14.1 16.9 14.6
GPT_merge  | 0.94 0.41 0.63 0.63  | 0.65 0.13 0.41 0.4   | 0.8/13.2  0.8/13.2  0.8/13.2  0.8/13.2  | 12.8 12.9 13.1 12.9
RAG_seq    | 0.73 0.4  0.39 0.51  | 0.6  0.23 0.31 0.38  | 0.01/0.23 0.02/0.38 0.03/0.59 0.02/0.4  | 2.7  3.9  4.5  3.76
RAG_tree   | 0.79 0.33 0.42 0.51  | 0.68 0.19 0.27 0.38  | 0.05/0.82 0.08/1.3  0.1/1.6   0.07/1.2  | 8.4  10.4 11.2 10
ZenDB      | 0.93 0.64 0.6  0.72  | 0.7  0.54 0.34 0.53  | 0.02/0.41 0.03/0.56 0.04/0.71 0.03/0.56 | 3.9  5.2  5.4  4.8
Table 3: Average Precision, Recall, Cost / # of Tokens, and Latency of Strategies Per Query, Per Document, on the Publication Dataset. (GPT-4-32k is Used.)

           | Precision            | Recall               | Cost ($) / Tokens (×1000)               | Latency (s)
Strategy   | QG1  QG2  QG3  Avg   | QG1  QG2  QG3  Avg   | QG1       QG2       QG3       Avg       | QG1  QG2  QG3  Avg
GPT_single | 0.64 0.36 0.36 0.45  | 0.73 0.37 0.24 0.45  | 0.2/3.2   0.33/5.4  0.47/7.6  0.33/5.4  | 7.5  15.3 23.1 15.3
GPT_merge  | 0.64 0.22 0.16 0.34  | 0.73 0.32 0.29 0.45  | 0.2/3.2   0.2/3.2   0.2/3.2   0.2/3.2   | 7.3  6.9  7.5  7.4
RAG_seq    | 0.25 0.11 0    0.12  | 0.36 0.04 0    0.13  | 0.01/0.14 0.02/0.3  0.03/0.43 0.02/0.29 | 3.3  5.2  6.9  5.1
RAG_tree   | 0.36 0.23 0    0.2   | 0.12 0.01 0    0.04  | 0.03/0.49 0.04/0.6  0.05/0.88 0.04/0.66 | 5.9  8.9  12.3 8.9
ZenDB      | 0.89 0.72 0.61 0.73  | 0.86 0.79 0.83 0.84  | 0.02/0.43 0.04/0.59 0.04/0.68 0.03/0.53 | 5.1  7.2  8.8  7
Table 4: Average Precision, Recall, Cost / # of Tokens, and Latency of Strategies Per Query, Per Document, on the Civic Dataset. (GPT-4-32k is Used.)

           | Precision            | Recall               | Cost ($) / Tokens (×1000)               | Latency (s)
Strategy   | QG1  QG2  QG3  Avg   | QG1  QG2  QG3  Avg   | QG1       QG2       QG3       Avg       | QG1  QG2  QG3  Avg
GPT_single | 0.71 0.65 0.76 0.71  | 0.9  0.67 0.75 0.77  | 0.2/3.7   0.31/5.2  0.43/7.1  0.3/5.3   | 4.9  6.2  7.3  6.1
GPT_merge  | 0.7  0.56 0.62 0.66  | 0.8  0.6  0.77 0.72  | 0.2/3.7   0.2/3.7   0.2/3.7   0.2/3.7   | 4.8  5    5.1  5
RAG_seq    | 0.61 0.31 0.17 0.36  | 0.67 0.22 0.26 0.38  | 0.01/0.12 0.01/0.19 0.01/0.23 0.01/0.18 | 0.9  1.3  1.7  1.3
RAG_tree   | 0.58 0.36 0.24 0.2   | 0.39 0.5  0.17 0.17  | 0.02/0.25 0.02/0.38 0.03/0.41 0.02/0.35 | 2.1  2.7  3.1  2.6
ZenDB      | 0.79 0.67 0.72 0.73  | 0.87 0.62 0.73 0.74  | 0.01/0.19 0.02/0.26 0.02/0.3  0.02/0.25 | 1.4  1.7  2.1  1.7
Table 5: Average Precision, Recall, Cost / # of Tokens, and Latency of Strategies Per Query, Per Document, on the Notice of Violation Dataset. (GPT-4-32k is Used.)
We first note that ZenDB achieves precision and recall comparable to GPT_single on the publication and notice datasets. Notably, ZenDB surpasses GPT_single on the civic dataset, improving precision by 28% and recall by 39%, due to this dataset's complex semantic structure, which poses challenges for GPT_single in generating high-quality responses. ZenDB's approach of querying based on SHTs focuses LLM attention on portions of documents at a time, thereby enhancing performance. We also observe that combining multiple predicates into a single prompt makes it more difficult for the LLM to provide the correct answer, resulting in performance degradation. On the cost and latency front, ZenDB significantly reduces both relative to GPT_single and GPT_merge. Specifically, ZenDB achieves cost savings of approximately 29×, 10×, and 4× on the publication, civic, and notice datasets, respectively. It is noteworthy that ZenDB's cost savings increase with document size, as the number of tokens it uses is somewhat independent of document size; instead, it depends on the size of the summary and the number of levels of the SHTs explored during execution, which are controllable factors. Accordingly, we observe varying levels of latency savings with ZenDB, up to a 4× reduction across datasets.

Experiment 2: ZenDB vs. RAG-only Strategies. When compared with RAG_seq and RAG_tree, we observe that RAG_seq achieves significant cost and latency savings compared to GPT-only strategies. However, relying solely on retrieving physical chunks based on embedding similarity, as in RAG, fails to accurately identify the appropriate text spans related to the queries, leading to a substantial degradation in precision and recall. While ZenDB incurs a slightly higher cost, it offers substantial advantages over RAG-based approaches thanks to its use of semantic structure, with increases in precision of up to 61% and recall of up to 80%. RAG_tree generally shows slight improvements in precision and recall over RAG_seq, but it similarly falls short of ZenDB, for a similar reason: its use of tree-style physical chunking often fails to accurately identify the appropriate text spans. Moreover, the exhaustive summary construction and usage in RAG_tree results in higher cost and latency compared to ZenDB.

Figure 11: The Effect of Summary Construction on the Performance of ZenDB on Real Datasets. (Panels: (a) Average Precision, (b) Average Recall, (c) Average # of Tokens (×1000), (d) Average Latency (Seconds); variants: ZenDB, no-ES, no-node-name, no-DS.)

Datasets (# of Docs) | # of Nodes | # of Layers | Cost ($) / Tokens | Latency
Publication (100)    | 13.4       | 2.8         | 0.05 / 1.8k       | 6 min
Civic (41)           | 32.1       | 2.9         | 0.01 / 0.36k      | 1 min
Violation (80)       | 8.9        | 2.2         | 0.01 / 0.32k      | 1 min
Table 6: SHT Construction. (GPT-4 is Used.)

Datasets (# of Docs) | Cost ($) / Tokens | FP   | FN | Latency
Publication (100)    | 0.048 / 95.4k     | 0    | 0  | 7 min
Civic (41)           | 0.005 / 10.1k     | 0.08 | 0  | 3 min
Violation (80)       | 0.005 / 8.9k      | 0.04 | 0  | 2 min
Table 7: Table Population. (GPT-3.5-Turbo is Used.)
Figure 12: ZenDB vs. ZenDB-light: Precision and Recall. (Panels: (a) Precision, (b) Recall, on the Publication, Civic Agenda, and Violation datasets.)

Experiment 3: Data Preparation. Next, we examine the two phases within ZenDB that happen prior to queries, SHT construction and table population, and compare their costs to those of online queries.

Experiment 3.1: SHT Construction. We present the average number of nodes and layers per SHT, along with the total cost, number of tokens, and latency on the three datasets, in Table 6. SHT construction is an offline process, making latency at the level of minutes unproblematic. The cost is affected by the number of distinct templates in the datasets. ZenDB uses LLMs to verify headers for SHT generation for one document per template, with the remaining SHTs created through visual pattern matching. The cost is further reduced by sampling the phrase clusters. In the publication dataset, the publications originate from 6 conferences, whereas the other two datasets follow a consistent template. Therefore, the publication dataset has a higher cost than the others, although all costs are minimal.

Experiment 3.2: Table Population. When users define a DTable, ZenDB populates the system-defined attributes using LLM-based and rule-based approaches. Table 7 presents the total cost and number of tokens (we use GPT-3.5-Turbo)⁴, along with latency and quality results.

⁴When the context size of a node exceeds the token limit (e.g., the root node in the publication dataset), we use NLTK [10] to summarize the context and adjust the summary size to approximately match the token limit of a prompt.

Figure 13: # of Queries on 1 Document for $1 (ZenDB-light), on the Publication, Civic Agenda, and Violation datasets.

Figure 14: ZenDB vs. ZenDB-light: Latency.

In particular, to show the quality of table population, let ts_g(e) and ts_p(e) be the text span of an entity e (a table or a tuple) in the ground truth and as predicted by ZenDB, respectively. We label ts_g(e) ⊂ ts_p(e) as a false positive (FP), indicating that the predicted text span contains the true text span but is larger; this is acceptable since it does not miss the correct answers, and the span will be refined by the tree-search algorithm. In contrast, (ts_p(e) ⊂ ts_g(e)) ∨ (ts_p(e) ∩ ts_g(e) = ∅) is considered a false negative (FN), because the predicted text span does not encompass all of the true text span, potentially resulting in missed answers. Notably, ZenDB demonstrates a low FP rate on the violation and civic agenda datasets, showcasing the effectiveness of the approach. The cost incurred in this step is minimal, thanks to the use of the affordable LLM GPT-3.5-Turbo (around 100× cheaper than GPT-4).
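A sketch of the FP/FN labeling above, representing a text span as a (start, end) character interval; the interval encoding is our assumption.

def label_extraction(ts_g, ts_p):
    # ts_g: ground-truth span; ts_p: predicted span; each a (start, end) pair
    def contains(a, b):  # a contains b and is strictly larger
        return a[0] <= b[0] and b[1] <= a[1] and a != b
    def disjoint(a, b):
        return a[1] <= b[0] or b[1] <= a[0]
    if contains(ts_p, ts_g):
        return "FP"  # larger than the truth, but does not miss it
    if contains(ts_g, ts_p) or disjoint(ts_g, ts_p):
        return "FN"  # may miss answers
    return "exact-or-overlapping"

print(label_extraction((100, 200), (80, 260)))  # FP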
End-to-end cost comparison: ZenDB vs. Others. Although ZenDB incurs costs to construct SHTs and populate tables before a query arrives, these costs are minimal, totaling 0.1, 0.015, and 0.015 dollars for the publication, civic agenda, and notice of violations datasets, respectively. Even if we ran just a single query subsequently, we would have lower end-to-end costs for ZenDB compared to GPT_single and GPT_merge, with the loading costs amortized across queries.

Experiment 4: The Effect of Summary Construction in ZenDB. We examine the effect of summary construction on ZenDB's performance in Figure 11. Recall that in Section 6.2, the summary of each node v in an SHT consists of three components: an extractive summary (ES), the phrases of v and its ancestors (node-name), and the top-1 sentence related to a given query predicate or projection within the text span of v (DS, i.e., Dynamic Summary). We explored three variations of ZenDB by removing one component at a time from the summary: no-ES, no-node-name, and no-DS (e.g., no-ES refers to the strategy that excludes the extractive summary from the summary of the node). We observe that the extractive summary impacts the quality of query answers (i.e., precision and recall) the least, while both dynamic summaries and node names (i.e., the header phrases) affect performance more significantly. Node names provide useful metadata that adds context for the LLM, helping refine the search space. The dynamic summary plays a critical role in summary construction by not only identifying the relevant nodes but also retrieving the text span most related to the given query. We also note that storing node names has a minimal impact on cost and latency due to their compact size. In contrast, both extractive and dynamic summaries are larger, though they still represent a relatively small portion of the overall cost and latency.

Experiment 5: ZenDB Driven by a Cheaper LLM: GPT-3.5-Turbo. We next study the impact of replacing the more expensive LLM used in ZenDB, GPT-4-32k, with an almost 100× cheaper LLM, GPT-3.5-Turbo, when evaluating queries. We denote this version as ZenDB-light. In Figure 12, ZenDB-light exhibits approximately a 7% decrease in precision and a 3% decrease in recall compared to ZenDB, at 100× lower cost. This demonstrates that by refining the text span used for evaluating queries, as opposed to using the entire complex document, ZenDB is able to provide a much simpler and more precise context for LLMs. This makes it easier for less-advanced but cheaper models like GPT-3.5-Turbo to not just process the entire text span, but also answer the query accurately. We report the average number of SQL queries that can be executed on a single document by spending 1 dollar using ZenDB-light in Figure 13. ZenDB-light can run approximately 3.5k, 3.7k, and 8k SQL queries (with 2 predicates and one projection on average) on one document within this budget for the publication, civic agenda reports, and notices of violations datasets, respectively, demonstrating the practicality of ZenDB-light.

8 RELATED WORK

We now survey related work on querying unstructured data.

Text-to-Table Extraction. One approach to querying unstructured data is to simply extract unstructured data into tables, following which they are queried as usual. This approach is followed by Google DocumentAI [4] and Azure Document Intelligence [5], as well as approaches such as text-to-table [61].
Using an LLM to populate entire tables upfront can be expensive and error-prone on large and complex document collections such as ours. Evaporate [18] uses an LLM to infer a schema, and then populates tables, using synthesized rules where possible. Simple extraction rules, such as the ones generated by Evaporate, are not applicable in our setting.

Retrieval-Augmented Generation (RAG). RAG techniques [20, 34, 41, 60] help identify smaller text portions that are most relevant to a given query, in order to fit into finite context windows, reduce cost, and in some cases improve accuracy. Most techniques use fixed-granularity chunking policies and don't account for semantic structure, while recent extensions rely on potentially expensive recursive summarization to build a hierarchy [9, 52]. We showed that this RAG_tree approach suffers from the same issues as vanilla RAG: the leaf nodes still use fixed-size chunks that are divorced from semantics, and thus fail to find relevant text segments. In comparison, ZenDB leverages semantic structure to boost precision and recall by up to 61% and 80%, respectively.

Multi-Modal Databases. Recent work creates multi-modal databases [24, 35, 55, 56, 58] that support SQL-like interfaces over text, images, and/or video. However, they all apply LLMs or other pre-trained models to entire documents at a time, and are thus limited to simple, small documents. This is equivalent to our vanilla LLM approach, which is expensive and not very accurate. Other work [31] has used interactive query processing to improve query results through user feedback. None of these approaches have explored the use of semantic structure to reduce cost and improve accuracy.

Natural Language Interfaces to Data. Supporting natural language querying over structured data is a long-standing question in the database community; a recent survey is one by Quamar et al. [50]. While the database community has been working on this problem for over a decade, e.g., [40], LLMs have dominated recent benchmarks [21, 42]. In our work, we instead focus on the inverse problem of structured (SQL) queries over unstructured data, but this line of work could aid the first step of SQL query construction.

LLMs meet Data Management. LLMs have the potential to disrupt the field of data management [29], but the first step is to actually understand tables. Recent work [25, 28, 62] explores how well LLMs understand tabular data, as well as how to represent knowledge learned by the LLM as structured data [51, 59]. Many data management problems have been revisited, including query rewriting [44], database tuning [57], data preprocessing [63], data and join discovery [26, 27, 36], data profiling [33], and data wrangling [23, 43, 48]. Some recent work has also explored how well LLMs can generate tables [54]. ZenDB also uses LLMs, but in a new setting: document analytics.

Structured Extraction. Structured extraction from web pages, PDFs, and images has a long history. For instance, Snowball [17] proposed structured extraction over the open web; related efforts leverage common techniques such as wrapper induction [38, 46], which exploits the hierarchical structure of HTML documents and headings. In contrast, ZenDB takes as input PDFs, which are often not hierarchically encoded. Other works, such as Shreddr [22], extract from images of forms where the templates are identical, and focus on the efficient use of crowd workers. These are also relevant due to the similarities between LLMs and crowdsourcing [49].
9 CONCLUSION We presented ZenDB, a document analytics system that leverages the templatized structure present in documents in a collection to support cost-efficient and accurate query processing. During ingest, ZenDB extracts structure from documents in the form of SHTs, guaranteeing that the results are correct for well-formatted documents. Then, during table creation, ZenDB maps tuples to nodes in the SHT, with attribute values to be populated during querying. ZenDB supports SQL queries on user-defined document tables, applying predicate reordering and pushdown, and projection pull-up techniques, coupled with a summary-based tree-search approach to optimize query processing. Across multiple domains, ZenDB provides a compelling trade-off point relative to LLM-only or RAG-based approaches. In future work, we plan to study the setting where there are no templates or where the templates are very noisy, as well as expand the space of SQL queries supported. In addition, we envision a rich design space for user interfaces to allow users to explore the results of ZenDB queries alongside their provenance.", + "additional_info": [ + { + "url": "http://arxiv.org/abs/2307.05915v2", + "title": "Prompt Generate Train (PGT): Few-shot Domain Adaption of Retrieval Augmented Generation Models for Open Book Question-Answering", + "abstract": "We propose a framework - Prompt, Generate, Train (PGT) - to efficiently\ndevelop a generative question-answering model for open-book question-answering\nover a proprietary collection of text documents. The framework adapts a\nretriever augmented generation (RAG) model to the target domain using\nsupervised fine-tuning and reinforcement learning with synthetic feedback in a\nfew-shot setting. This, we hypothesize, will yield an aligned, uncertainty\ncalibrated model that is competitive with GPT-4 based in-context retrieval\naugmented generation in generating relevant answers at lower serving costs. The\nframework's synthetic generation pipeline will generate synthetic training data\ncomprising question-answer tuples using an open-source LLM and a\nnovel consistency filtering scheme. The pipeline will be designed to generate\nboth abstractive and extractive questions that span the entire corpus. The\nframework proposes to fine-tune a smaller RAG model comprising a dense\nretriever (ColBERTv2) and a smaller sized LLM on the synthetic dataset. In\nparallel, the framework will train a Reward model to score domain grounded\nanswers higher than hallucinated answers using an a priori relevance ordering\nof synthetically assembled samples. In the next phase, the framework will align\nthe RAG model with the target domain using reinforcement learning (Proximal\nPolicy Optimization). This step may improve the RAG model's ability to generate\ngrounded answers and ignore out of domain questions. In the final phase, the\nframework will calibrate the model's uncertainty for extractive\nquestion-answers.", + "authors": "C. S. Krishna", + "published": "2023-07-12", + "updated": "2023-07-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "A common use-case within enterprise settings is to expose a question-answering service over a proprietary corpus comprising a collection of text documents.
The application, in response to the client's question, must be able to generate a factually grounded, attributed response by condensing information from a set of relevant documents in the underlying corpus. If the answer cannot be synthesized from these documents, the system should respond with a "cannot answer" rather than generate a misleading or factually incorrect response (referred to as hallucination in the literature). An additional requirement is to economize development and serving costs by using smaller LLM architectures (<10 Bn parameters). It has been shown that specialized task- and target-domain-specific pre-training and/or fine-tuning enables smaller-sized LLMs to outperform in-context learning with LSLMs (Izacard et al., 2022; Hsieh et al., 2023). However, smaller LLMs have inherent weaknesses. (Gudibande et al., 2023) demonstrated that certain properties such as chain-of-thought reasoning (Wei et al., 2022) only emerge at higher scales. A smaller LLM will therefore struggle to outperform LSLMs such as GPT-4 on questions that require reasoning ability. To address this limitation, the PGT framework needs to include a procedure for model uncertainty calibration. Informally speaking, the model must "know when it knows the answer" and "know when it doesn't know the answer". We make the notions of "knowing when the model knows" and "knowing when it doesn't know" more precise in the section on uncertainty calibration. This is a desirable feature since it enables easier integration into cascading systems such as FrugalGPT (Chen et al., 2023), where the RAG model's answer can be surfaced only when the model is confident of its answer; otherwise, the client's question can be passed on to a human or an alternate model. Further, smaller LLMs are more prone to hallucination [ref?]. The framework design, therefore, needs to address hallucination mitigation. Given this perspective, we list the following design goals for the PGT framework:

1.1 Design Goals

• Few-shot Adaption: The framework needs to adapt the RAG model to the target domain in a few-shot setting, i.e., it must do so without access to a large volume of manually annotated question-answer tuples on which the model can be fine-tuned.

• Serving Cost Economization: PGT models must be cheaper to serve and more accurate than systems based on GPT-3.5/4-powered in-context retrieval augmented generation. Since serving cost is a function of model size, we choose models with fewer than 10 Bn parameters.

• Hallucination Mitigation: The framework should yield a model that only generates answers based on the underlying corpus. For out-of-domain questions, or questions which cannot be answered based on the underlying corpus, the model should be able to generate "cannot answer this question based on given information" rather than hallucinate a baseless answer.

• Uncertainty Calibration: The framework should yield models calibrated for uncertainty, at least for certain classes of questions, such as extractive or yes/no questions.
The application layer can then rely on the model's confidence in its generated answer to decide whether to surface the answer to the end-user or take a pass on the question.", + "main_content": "The dominant framework to develop models for open-book question answering is based on in-context retrieval-augmented generation (Ram et al., 2023) (e.g., Bing Search): in response to the user question, a dense or a sparse retriever fetches relevant documents. A prompt is populated with the user's question and the set of fetched documents. This prompt is then fed to an LSLM such as GPT-4 to generate the response. This framework does not require domain adaptation since the LSLM has been extensively pre-trained on public-domain corpora such as Wikipedia, Common Crawl, etc. Forcing the LLM to rely on the fetched documents also constrains the LLM to generate grounded answers to a certain extent, although LLMs such as GPT-4 can still be factually wrong in subtle ways [refs?]. While such a framework is suitable for open-domain question services such as search engines, where the service is expected to answer questions from any domain, we argue that it is overkill for closed-domain question answering. In the latter setting, the model is only expected to generate answers for in-domain questions. Hence, it stands to reason that the LLM's knowledge base merely needs to cover the target domain, for which a smaller LLM can suffice, motivating the feasibility of adapting a smaller LLM for the target domain. A popular approach to domain adaptation for question-answering is to train a RAG architecture (Guu et al., 2020; Lewis et al., 2020) comprising a retriever and generator on annotated training data in the form of question-answer tuples from the target domain. Shi et al. (2023), on the other hand, only train the retriever component of the RAG architecture. Izacard et al. (2022) showed that jointly fine-tuning the RAG model on as few as 1024 samples from the target domain improves performance relative to GPT-3-based in-context retrieval-augmented generation. They also empirically demonstrated that performance increases when the model is fine-tuned on larger datasets. In most real-world settings, such manually crafted data will not be available in sufficient volume or quality. An emerging body of work has adapted dense neural retrievers to the target domain by training on synthetically generated training data comprising question-passage pairs (Bergum, 2023; Dai et al., 2022; Saad-Falcon et al., 2023; Abonizio et al., 2023). We extend this line of work to domain adaptation of full-fledged RAG models in a few-shot/zero-shot setting. 3 Methodology The PGT framework utilizes two LSLMs (GPT-4 and Flan-T5 XXL) to generate question-answer pairs over all documents in the corpus. GPT-4 is used to generate a seed set of Y samples for extractive and abstractive question-answering formats. The Flan-T5 XXL instance leverages the seed set to generate more samples. This design economizes the cost of generating the training dataset. The RAG model is then trained on this dataset, using both supervised fine-tuning and reinforcement learning. To generate the synthetic dataset with the LSLMs, we split documents into document segments such that the document segment size is bounded by the permissible context window size for the LSLM (1024-2048 tokens). We create an index I1 over these document segments to facilitate random retrieval of a segment.
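As a minimal sketch of this ingestion step (the whitespace-token splitter, the token budget, and all function names below are illustrative assumptions, not the paper's implementation), an I1-style index can simply be a flat list of bounded segments that supports random sampling:

```python
import random

def split_into_segments(text, max_tokens=2048):
    # Naive splitter: contiguous runs of at most max_tokens whitespace
    # tokens, standing in for the LSLM's permissible context window.
    # A faithful implementation would prefer paragraph boundaries.
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]

def build_I1(corpus, max_tokens=2048):
    # I1-style index: a flat list of document segments supporting
    # random retrieval of a segment during synthetic data generation.
    segments = []
    for doc in corpus:
        segments.extend(split_into_segments(doc, max_tokens))
    return segments

def sample_segment(I1):
    return random.choice(I1)
```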
Similarly, we maintain another index I2 over document segments where the document segment size is upper-bounded by the context window size of the RAG model's generator. 3.1 PGT Steps The PGT framework consists of the following steps: 1. Phase 1a (optional): Adapt the retriever component of the RAG model to the domain by training on the Inverse Cloze Task (Khattab et al., 2021; Lee et al., 2019). 2. Phase 1b: Synthetic Training Data Generation pipeline: In this phase, we generate two sets of data to fine-tune the RAG model and the Reward model. 3. Phase 2: This comprises two training procedures that can be executed in parallel: (a) RAG model supervised fine-tuning, and (b) Reward model training. 4. Phase 3: Use of reinforcement learning (PPO) to align the RAG model to the target domain, with the Reward model generating a reward score for relevance. 5. Further fine-tuning of the RAG model for uncertainty calibration for certain classes of questions. We can repeat steps 2, 3, and 4 for a few iterations. 4 PGT Components 4.1 RAG-Retriever We choose ColBERTv2 (Khattab and Zaharia, 2020) as the dense retriever in the RAG model. Dense retrievers work best with smaller documents of not more than 400 tokens. On the other hand, feeding the generator with chunks rather than whole documents can lead to context fragmentation and degrade the generated answer's quality. To resolve this trade-off, we modify the retrieval procedure as follows: we maintain a third index I3 in which each document segment from I2 is split into smaller, mostly disjoint chunks of roughly 300 tokens. The retriever component of the RAG model indexes into I3 to fetch relevant chunks. For a given chunk c of a document and the question q, we can compute the similarity score S(q, c) using the ColBERTv2 model instance. Let a document segment d ∈ I2 correspond to chunks (c1, ..., cn) indexed in I3. We define the probability of fetching d given a question q to be proportional to the maximum similarity score over all the document chunks: P_η(d|q) ∝ exp(max(S(q, c1), ..., S(q, cn))) (1) Here, η refers to the tunable parameters of the retriever. This design and retrieval mechanism, we hypothesize, (a) ensures tighter coupling between the retriever and the generator, and (b) mitigates context fragmentation, thereby improving retrieval and generation quality. We can optionally add a lexical similarity signal based on BM25 (Ma et al., 2020) to the similarity score. 4.2 RAG Generator We choose a pre-trained instance of the Flan-T5 encoder-decoder architecture for answer generation. The encoder encodes the question and passages fetched by the retriever. The decoder, conditioned on the encoding, autoregressively computes the likelihood of generating the answer as P_φ(a|q, d) = ∏_i P_φ(a_i|q, d, a_{<i}). (2) 5 Phase 1b: Synthetic Training Data Generation 5.1 Generation with GPT-4 We use GPT-4 to generate a seed set of Y ⟨passage, question, answer⟩ tuples across the following question formats: extractive (EX) and abstractive (AB) (Khashabi et al., 2020). We randomly sample document segments using I1. We try to preserve document boundaries as much as possible so that the LLM has a coherent passage as the basis for generating a question-answer pair. We append a q-a format-specific prompt (Appendix-I) for each of the q-a formats. This prompt along with the passage is fed to GPT-4 to generate the corresponding answer. We use the seed set of Y pairs in turn to prompt Flan-T5 XXL to generate Z question-answer pairs across both formats.
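A compact sketch of this seeding step follows (the prompt wording, the generic `call_llm` client, and its assumed parsing into a question-answer pair are all illustrative assumptions; `sample_segment` is the sampler sketched earlier):

```python
# Hypothetical helper: call_llm(prompt) is any chat-completion client that
# is assumed to return a parsed (question, answer) pair.
FORMAT_PROMPTS = {
    "EX": "Write an extractive question about this passage, answered verbatim from the text.",
    "AB": "Write an abstractive question about this passage, answered in your own words.",
}

def generate_seed_tuple(I1, call_llm, qa_format="EX"):
    passage = sample_segment(I1)
    prompt = f"{FORMAT_PROMPTS[qa_format]}\n\nPassage:\n{passage}"
    question, answer = call_llm(prompt)
    return passage, question, answer  # one <passage, question, answer> tuple

def generate_seed_set(I1, call_llm, y):
    # Alternate EX/AB so both question formats span the corpus.
    return [generate_seed_tuple(I1, call_llm, ["EX", "AB"][i % 2])
            for i in range(y)]
```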
5.2 Generate non-matching question-answer pairs It is important to train the model to generate a "can't answer based on given references" response as well if the fetched passages cannot yield the required answer. To this end, we also fetch the top K′ matching chunks using the retriever, remove the chunks that were used to generate the answer, and assemble a non-matching passage, p′_c. We then prompt the LSLM to generate a rationale a′_c for why the given question q_c cannot be answered given context p′_c: a′_c ← LSLM(p′_c, q_c). We assemble a non-matching question-answer pair (p′_c, q_c, a′_c), which we refer to as a non-matching tuple, for every matching question-answer pair. During training, the ratio of the number of non-matching to matching question-answer pairs is a design parameter that can be set based on downstream requirements: intuitively, if we want the model to err on the side of caution, we should include as many non-matching question-answer pairs as matching question-answer pairs. 5.3 Generation with Flan-T5 XXL In this phase, we tap Flan-T5 XXL to generate more training data, using the seed set from the previous phase to sample demonstration exemplars. 1. Start with the synthetic training dataset T populated via GPT-4. 2. Concatenate a prefix prompt P consisting of N passage-question-answer tuples sampled without replacement from T along with the meta prompt: P = [(p_1, q_1, a_1), ..., (p_N, q_N, a_N)]. 3. Select a candidate passage p_c at random (alternatively, based on ideas presented in (?)) from I1 and prompt Flan-T5 XXL to generate a candidate question-answer pair: (q_c, a_c) = FlanT5XXL([P; p_c]). 5.4 Consistency Filtering We add (p_c, q_c, a_c) to T if it meets the following consistency filtering criteria: 1. For the generated question q_c, the domain-adapted retriever's top K fetched documents should span the passage p_c. If not, drop this tuple; else proceed to the next step. 2. We again prompt Flan-T5 XXL/GPT-4 to generate an answer a′_c to the question q_c based on the top K fetched chunks from the previous step. Retain this sample only if there is a high semantic overlap between a_c and a′_c. 3. Confidence-based threshold (Abonizio et al., 2023): the normalized log-probability of generating a question-answer pair for the candidate passage exceeds a threshold. 4. Uncertainty-based threshold (optional): Use a suitable measure of the uncertainty (Lin et al., 2023) of the generated sample. Accept the sample only if the uncertainty measure is less than a threshold. 5. If the sample passes the consistency filtering steps, add it to T. 6 Phase 2a: RAG Model Supervised Fine-tuning We present a new log-likelihood function, the in-context RAG-token model likelihood, that combines the RAG-token model likelihood (Lewis et al., 2020) with in-context RALM learning (Ram et al., 2023): P_{η,φ}(a | q; K, L, S) = ∏_{i=0}^{n_S−1} ∏_{j=1}^{S} Σ_{k∈topK} P_η(d_{i,k} | [q; a^L_{Si}]) · P_φ(a_{Si+j} | [q; d_{i,k}; a_{<(Si+j)}]) (3) The design parameter S is the stride size, which determines after how many steps we refresh the context by fetching relevant documents conditioned on the question and a subset of the answer prefix. The design parameter L decides how many of the most recently generated tokens in the prefix to consider for context augmentation. a^L_{Si} refers to the L most recent tokens in the answer from the Si'th position going backwards.
K decides how many documents to marginalize over in generating the next token at each step. d_{i,k} refers to the k'th document segment fetched by the retriever via I2 in the i'th stride. n_S = n/S determines the number of fetches during retrieval, where n is the token length of the answer. Note that when S = 1 and L = 0, this function degenerates into the RAG-token model likelihood function. We intuit that the in-context RAG-token model likelihood provides the flexibility for the retriever to fetch the right document segments conditioned on the evolving answer. This in turn improves the quality of the conditioned generation. By marginalizing over the top K documents during training at every generation step, we improve the ability of the retriever to discern relevant from irrelevant documents. 6.1 Generation Procedure The transition probability associated with generating the next token in the answer is given by: P_{η,φ}(a_{Si+j} | [q; a_{<(Si+j)}]; K, L, S) = Σ_{k∈topK} P_η(d_{i,k} | [q; a^L_{Si}]) · P_φ(a_{Si+j} | [q; d_{i,k}; a_{<(Si+j)}]) (4) This can be plugged into a standard beam decoder to generate the answer. 7 RAG Alignment Training Reinforcement learning with human feedback (RLHF) training (Ouyang et al., 2022; Kadavath et al., 2022) further adjusts model parameters so as to generate answers that are aligned with human preferences. However, this procedure requires human preference feedback on answers, which may not always be available. Our goal is limited to aligning the RAG model such that its answers are grounded in the underlying corpus. Towards this end, we adapt RLHF, but without recourse to human preference feedback, to design a new technique: Reinforcement Learning with Synthetic Feedback (RLSF). 7.1 Phase 2b: Reward Model Training Let S((p_c, q_c, a_c)) ∈ R be a measure of the relevance of the response, given the context and the question. The relevance score should be low if the answer is hallucinated, factually incorrect, or not grounded in the underlying context, and high otherwise. We want to train a Reward model that can estimate the relevance of a model's generated response, given the context and question. We set up the training dataset for Reward model training as follows. For every pair of matching and non-matching tuples, (p_c, q_c, a_c) and (p′_c, q_c, a′_c), we can form two additional tuples, (p′_c, q_c, a_c) and (p_c, q_c, a′_c), and combine all four into a composite tuple: τ = ((p_c, q_c, a_c), (p′_c, q_c, a′_c), (p′_c, q_c, a_c), (p_c, q_c, a′_c)) We assemble an alternate dataset of such tuples T* = {τ^1, ..., τ^N} for training the Reward model.
For a given τ, we fix the following orderings based on the relevance of the answer, given the question and context: S((p_c, q_c, a_c)) > S((p′_c, q_c, a_c)); S((p_c, q_c, a_c)) > S((p_c, q_c, a′_c)); S((p′_c, q_c, a′_c)) > S((p′_c, q_c, a_c)); S((p′_c, q_c, a′_c)) > S((p_c, q_c, a_c)). (5) We train the Reward model by minimizing the following contrastive loss function: loss(θ) = −(1/4) E_{τ∼T*}[ log σ(RM_θ(τ_1) − RM_θ(τ_3)) + log σ(RM_θ(τ_1) − RM_θ(τ_4)) + log σ(RM_θ(τ_2) − RM_θ(τ_3)) + log σ(RM_θ(τ_2) − RM_θ(τ_1)) ] (6) 7.2 Phase 3: Alignment using Reinforcement Learning (PPO) We use proximal policy optimization (PPO) (Schulman et al., 2017) to further fine-tune the student LLM, with the Reward model providing the reward signal for the relevance of the answer, as follows: We sample a passage-question pair using the question-generation pipeline from Phase 1, or optionally sample a tuple from T, to yield a passage-question tuple (p, q). The RAG model then generates the answer conditioned on the passage, using Equation 4 with K = 1. The Reward model scores the answer for relevance. We then fine-tune the RAG model w.r.t. the parameters φ of the generator by minimizing the PPO objective: loss(φ) = RM_θ((p, q, a)) − β log( P_{η,φ}(a|q, p) / P_{η,φ′}(a|q, p) ) (7) 8 Phase 4: Uncertainty Calibration We want the student LLM to be calibrated for uncertainty. Informally, the model should "know when it knows the answer" and "know when it doesn't know the answer". We make this precise based on the definition of calibration outlined in (Guo et al., 2017). Let p_M(a|q, p) be the probability assigned by the model that the answer it generated, given the question and context, is the correct response. Then, the model is perfectly calibrated if: P(a is correct | p_M = p) = p, ∀p ∈ [0, 1]. To calibrate the model, we train the model to predict whether the answer it generated, given the question and supporting evidence, is correct or wrong. We do so by maximizing the "indirect logit" (Lin et al., 2022), the log-probability associated with the model predicting "correct" or "wrong" for its own answer, given the question and supporting evidence from the corpus. We present the recipe below for uncertainty calibration: We use the RAG model to generate an answer to the question using a beam generator and the transition probabilities in Equation 4: a ← P_{η,φ}(a|q; K, L, S). We then use the retriever component of the RAG model to fetch the top M (∼3) document segments that were used to generate the answer: (d_1, ..., d_M) ← P_η(· | [q; a]). We fine-tune the generator on a new instruction task: predicting whether the answer is correct or wrong, given the question and document set. We first compute the log-probability associated with predicting "correct" or "wrong", using the RAG generator: y ← P_φ(· | a, q, (d_1, ..., d_M)). We then minimize the cross-entropy loss based on the ground-truth label and the model's assessment of the answer's veracity: loss(φ) = CE(y, ŷ). (8)" + }, + { + "url": "http://arxiv.org/abs/2310.06225v2", + "title": "GPT-4 as an Agronomist Assistant?
Answering Agriculture Exams Using Large Language Models", + "abstract": "Large language models (LLMs) have demonstrated remarkable capabilities in\nnatural language understanding across various domains, including healthcare and\nfinance. For some tasks, LLMs achieve similar or better performance than\ntrained human beings, therefore it is reasonable to employ human exams (e.g.,\ncertification tests) to assess the performance of LLMs. We present a\ncomprehensive evaluation of popular LLMs, such as Llama 2 and GPT, on their\nability to answer agriculture-related questions. In our evaluation, we also\nemploy RAG (Retrieval-Augmented Generation) and ER (Ensemble Refinement)\ntechniques, which combine information retrieval, generation capabilities, and\nprompting strategies to improve the LLMs' performance. To demonstrate the\ncapabilities of LLMs, we selected agriculture exams and benchmark datasets from\nthree of the largest agriculture producer countries: Brazil, India, and the\nUSA. Our analysis highlights GPT-4's ability to achieve a passing score on\nexams to earn credits for renewing agronomist certifications, answering 93% of\nthe questions correctly and outperforming earlier general-purpose models, which\nachieved 88% accuracy. On one of our experiments, GPT-4 obtained the highest\nperformance when compared to human subjects. This performance suggests that\nGPT-4 could potentially pass on major graduate education admission tests or\neven earn credits for renewing agronomy certificates. We also explore the\nmodels' capacity to address general agriculture-related questions and generate\ncrop management guidelines for Brazilian and Indian farmers, utilizing robust\ndatasets from the Brazilian Agency of Agriculture (Embrapa) and graduate\nprogram exams from India. The results suggest that GPT-4, ER, and RAG can\ncontribute meaningfully to agricultural education, assessment, and crop\nmanagement practice, offering valuable insights to farmers and agricultural\nprofessionals.", + "authors": "Bruno Silva, Leonardo Nunes, Roberto Estev\u00e3o, Vijay Aski, Ranveer Chandra", + "published": "2023-10-10", + "updated": "2023-10-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "Large Language Models (LLMs), such as GPT-4 and Llama 2, have made significant strides in showcasing remarkable capabilities across a wide range of domains and tasks (Touvron et al., 2023). These models exhibit a level of intelligence that surpasses prior Artificial Intelligence (AI) models, excelling in various fields like coding, medicine, law, agriculture, and psychology, often without requiring special prompts. Their performance is impressively close to human expertise, positioning them as potential early versions of artificial general intelligence (AGI) systems (Bubeck et al., 2023). The technology behind LLMs, which typically involves advanced deep learning techniques, large-scale transformers, and massive amounts of data, has fueled their rapid progress. As research on LLMs continues, it is vital to uncover their limitations and address the challenges in achieving more comprehensive AGI systems. Furthermore, the machine learning community must evolve beyond traditional benchmarking methods, moving towards evaluating LLMs in a manner that closely resembles the assessment of human cognitive abilities. 
In recent years, significant advancements in LLMs based on the transformer architecture (Vaswani et al., 2017) and trained in a self-supervised manner have led to substantial improvements (Nori et al., 2023; Devlin et al., 2019; Radford et al., 2018) in various natural language processing tasks. These models, trained on massive cross-disciplinary corpora, have the potential to offer valuable insights and assistance in specialized domains like agriculture, crop management, and advising. Most prior work on AI in agriculture has focused on extracting intelligence from satellite imagery and sensor data. In our research at Microsoft, through Project FarmBeats (Vasisht et al., 2017; Chandra et al., 2022) and Project FarmVibes (Microsoft, 2021), we have focused on artificial intelligence techniques to drive the adoption of data-driven agriculture. We use AI to clean data sources, for example, by seeing through clouds (Zhao et al., 2023b), to improve data collection by predicting sampling locations, and to make better predictions, for example of yields, microclimates (Kumar et al., 2021), and carbon sequestration (Sharma et al., 2022). While these advances have helped improve the state of the art in agricultural AI, the tech-savviness of farmers and farm workers has limited the adoption of technology solutions. In this paper, we look at the ability of LLMs to bridge the technology adoption gap in agriculture. Our first consideration is that of an agronomist's assistant (a virtual agronomist). The vision is that an agronomist can ask questions to the virtual agronomist to remove ambiguity in the response. This can help better customize the answers for the farmer. Such a virtual agronomist can help an agronomist stay up to date with the latest information, can help remove paperwork, and can help them be more effective in their consultation with the farmer. We build a virtual agronomist leveraging the advances in LLMs. Our goal is to leverage the LLM to pass an exam that an agronomist needs to pass, for example, the Certified Crop Adviser-related exams in the USA. We first utilized diverse datasets to estimate the performance of Large Language Models (LLMs) in the field of agriculture. First, the Brazilian dataset from Embrapa's database comprises a variety of complex, open agriculture-related questions and challenges (Embrapa, 2022). Second, we used questions from the American Society of Agronomy related to the Certified Crop Adviser (CCA), designed to evaluate an individual's ability to provide competent advice on crop management, soil, and water conservation (Adviser, 2022). Lastly, the Indian dataset from AgriExam offers a collection of previous years' questions from various agriculture-related exams for graduate programs, covering topics such as agronomy, horticulture, and soil science (Exam, 2022). These resources, featuring multiple-choice and short-answer questions, enable us to assess the capabilities of LLMs in answering difficult agriculture-related questions and explore their potential applications in providing expert advice and support within the agricultural sector. By using out-of-the-box models, this study assesses how these LLMs can be applied to provide valuable insights and assistance in the specialized domain of agriculture.
These findings contribute to the understanding of LLMs' potential in the agriculture domain and pave the way for their responsible and efficient use in addressing agricultural challenges and promoting sustainable practices. In this paper, we make several noteworthy contributions to the understanding of LLMs in the agriculture domain. These contributions can be itemized as follows: • We present a comprehensive evaluation of LLMs, such as Llama2-13B, Llama2-70B, GPT-3.5, and GPT-4, in answering agriculture-related questions using benchmark datasets from major agriculture producer countries. • We investigate the impact of retrieval techniques and prompting ensemble methods on the performance of these models, providing valuable insights into their capabilities and limitations. • We discuss the implications of these findings in terms of potential uses of LLMs in agricultural education, assessment, and practice, emphasizing the importance of addressing challenges related to accuracy and safety while harnessing the power of LLMs effectively and responsibly in the agriculture domain.", + "main_content": "In this section, we review the relevant literature and studies that have explored the application of artificial intelligence, particularly large language models, in agriculture and related domains, as well as in educational settings and assessment methodologies. We present a brief introduction to each paper, discussing the contributions and potential improvements in each work, and comparing them to the proposed paper. Nori et al. (2023) evaluated GPT-4 on medical competency examinations and benchmark datasets, such as the USMLE and the MultiMedQA suite. Despite not being specialized for medical problems, GPT-4 demonstrated impressive capabilities in natural language understanding and generation. The study explored various aspects of model performance, including text and images in test questions, memorization during training, and probability calibration. Results showed GPT-4 outperforming earlier models and models fine-tuned on medical knowledge, with improved calibration. The authors discuss GPT-4's ability to explain medical reasoning, personalize explanations, and craft counterfactual scenarios, considering implications for medical education and practice, while acknowledging accuracy and safety challenges. Nunes et al. (2023) explored the capabilities of language models (LMs), specifically GPT-3.5 and GPT-4, in tackling high-stakes multiple-choice tests using the National High School Exam (Exame Nacional do Ensino Médio, ENEM), a multidisciplinary entrance examination widely adopted by Brazilian universities. The ENEM challenges LLMs because its questions span multiple fields of knowledge and require understanding of information from diverse domains. The authors analyzed responses generated by the GPT-3.5 and GPT-4 models for questions presented in the 2009-2017 exams, as well as for questions of the 2022 exam, which were made public after the training of the models was completed. Different prompt strategies were tested, including the use of Chain-of-Thought (CoT) prompts to generate explanations for answers. The best-performing model, GPT-4 with CoT, achieved an accuracy of 87% on the 2022 edition, surpassing GPT-3.5 by 11 points. In the context of applying Large Language Models (LLMs) for question-answering tasks, a notable study is the MedPaLM 2 paper by Singhal et al. (2023).
This model leverages base LLM improvements, medical domain fine-tuning, and prompting strategies, including a novel ensemble refinement (ER) approach that is also used in this research. The model has achieved remarkable results, scoring up to 86.5% on the MedQA dataset, while also demonstrating high performance on clinical topics datasets. The paper by Dikli (2006), titled "An overview of automated scoring of essays," discusses the state of research in automated essay grading (AEG) and its potential applications in educational settings. While it provides a comprehensive review of AEG methodologies and their implications for education, it focuses on a specific assessment method. "Show your work: Improved reporting of experimental results" by Dodge et al. (2019) emphasizes the importance of improved reporting of experimental results. The work contributes to more transparent and reproducible research practices in the field of natural language processing. In "Using Latent Semantic Analysis to evaluate the contributions of students in AutoTutor," Graesser et al. (2000) investigated the use of Latent Semantic Analysis (LSA) for evaluating students' contributions in an intelligent tutoring system. The work focuses on AI-driven educational systems and their evaluation but is limited in scope, as it is centered around the AutoTutor system and LSA. Retrieval-Augmented Generation (RAG) techniques have been gaining attention for their ability to enhance the performance of large language models in various domains (Lewis et al., 2020). RAG techniques combine information retrieval and generation capabilities, allowing models to access and utilize relevant external knowledge to generate more accurate and informed responses. Specifically, RAG techniques employ a two-step process: first, retrieving relevant documents or passages from a knowledge source, and second, conditioning the generation process on the retrieved information. This approach has demonstrated success in improving the performance of language models in tasks such as question-answering and summarization, where access to external knowledge can be crucial for generating correct and coherent outputs. Kamilaris and Prenafeta-Boldú (2018) conducted a survey on "Deep learning in agriculture," outlining the use of deep learning techniques in various agricultural tasks. The paper provides a comprehensive review of deep learning applications in agriculture. "Crop yield prediction using deep neural networks" by Khaki and Wang (2019) explores the use of deep learning techniques to predict crop yields with high accuracy. Liakos et al. (2018) presented a review titled "Machine learning in agriculture," discussing various machine learning techniques for addressing agricultural challenges. The paper provides a comprehensive review of machine learning applications in agriculture but does not focus on large language models like GPT. Zhao et al. (2023a) explored ChatGPT's potential in agricultural text classification, resulting in a solution called ChatAgri. Evaluations on multilingual datasets show competitive performance compared to existing PLM-based fine-tuning approaches. ChatAgri exhibits strengths in zero-shot learning, domain transferability, and minimal hardware requirements, suggesting its suitability as a low-cost AI technique for smart agriculture applications. The study provides a comprehensive analysis of the findings, contributing to future sustainable smart agriculture development.
The paper \"WINOGRANDE: An Adversarial Winograd Schema Challenge at Scale\" by Sakaguchi et al. (2021) introduces a large-scale dataset for evaluating the common sense reasoning capabilities of AI models. This work contributes to the creation of a challenging benchmark for AI systems, fostering research in natural language understanding and common sense reasoning. \"XLNet: Generalized Autoregressive Pretraining for Language Understanding,\" proposed by Yang et al. (2019), presents a novel pretraining method for natural language understanding tasks. This work develops a competitive language model that outperforms other methods on several benchmarks. In summary, while the existing literature has demonstrated the potential of artificial intelligence, particularly LLMs, in various domains and applications, the specific context of agriculture has been less explored. Our proposed paper distinguishes itself from the reviewed works by focusing on employing large language models, such as GPT-4, within the agriculture domain. We aim to evaluate how these models can enhance human capabilities in the context of agriculture, including answering agriculture-related questions and providing guidelines for crop management. By bridging the gap between the advancements in large language models and the agricultural sector, our study offers a unique perspective and contributes to the understanding of AI\u2019s potential impact on agriculture by providing a baseline for future benchmarks about the use of large language models to solve agricultural problems. 3 GPT-4 as an Agronomist Assistant? Answering Agriculture Exams Using Large Language Models 3 Datasets This study assesses advanced language models in addressing agriculture-related questions by utilizing three distinct datasets with different formats, each originating from three major crop producer countries. 3.1 Certified Crop Adviser (CCA) Exam During which stage of plant growth is an above-normal rainfall conducive to pod and stem blight as well as Phomopsis seed decay? a) Flowering b) Pod filling c) Early vegetative d) Mid vegetative Answer: b) Pod filling Listing 1: Adapted CEU question example, requiring specialized knowledge about Pod and Stem Blight and Phomopsis Seed Decay of Soybean. The Certified Crop Adviser (CCA) Exam is a comprehensive evaluation program designed to assess the competency of agricultural professionals in agronomy and crop advising (Adviser, 2022). The CCA certification ensures that crop advisers possess the necessary knowledge and skills to provide accurate and reliable advice to farmers and other stakeholders in the agricultural sector. The certification process involves passing two exams: the International CCA Exam, which focuses on general agronomic principles, and a Local Board Exam tailored to specific regional conditions and regulations. To maintain their CCA certification, crop advisers are required to participate in continuing education activities and earn Continuing Education Units (CEUs) credits. These CEUs demonstrate that the advisers are staying up to date with the latest developments in agronomy and are committed to lifelong learning. A specific number of CEUs must be earned within a designated time frame to maintain the certification. The questions used for this evaluation are based on Continuing Education Unit (CEU) materials and not the actual Certified Crop Adviser (CCA) certification tests. 
The purpose of using CEU questions in this study is solely to assess the GPT models' ability to provide valuable insights and assistance in the specialized domain of agriculture, and not to replicate or undermine the CCA certification process. The Crop Protection Network (CPN) is an excellent resource for CCAs seeking to earn CEUs. The CPN offers a variety of online quizzes for continuing education credits (https://cropprotectionnetwork.org/cca-ceu-quizzes). These quizzes cover a wide range of topics in agronomy and crop management, such as disease management, pest control, nutrient management, and environmental stewardship. By participating in these quizzes and earning CEUs, CCAs can ensure that they remain knowledgeable and competent in their field, providing the best possible guidance to farmers and contributing to the overall sustainability and productivity of the agricultural sector. The adapted CEU question example provided in Listing 1 highlights the specialized knowledge required to answer questions related to specific crop diseases, such as pod and stem blight and Phomopsis seed decay of soybean. 3.2 EMBRAPA Dataset In this study, we use an extensive dataset, "500 Perguntas 500 Respostas Embrapa/SCT" (500 Questions 500 Answers Embrapa/SCT), provided by the Brazilian Agricultural Research Corporation (Embrapa) (Embrapa, 2022). This dataset consists of a series of books containing questions and answers related to various aspects of crop cultivation and management in Brazil. These questions have been formulated by a diverse group of stakeholders, including producers, farmers, and farming associations, and are accompanied by responses from Embrapa specialists. Unlike CCA exams, the questions in this dataset are not presented in a multiple-choice format but as general text-based inquiries, requiring a deeper understanding of the subject matter to generate accurate and relevant answers. The use of this robust dataset presents a unique opportunity to examine the effectiveness of advanced language models in answering domain-specific questions accurately and contextually. By rigorously evaluating LLM performance on the Embrapa dataset, our goal is to explore the potential of artificial intelligence in providing reliable and efficient solutions to real-world agricultural challenges. Through this analysis, we aim to bridge the gap between state-of-the-art technology and sustainable agricultural practices, ultimately contributing to the improvement of agricultural productivity and resource management on a global scale.

What are the symptoms and damages caused by the gall nematode in citrus plants?
Answer: The symptoms of the gall nematode in citrus plants include dysfunction in the roots, leading to smaller leaf size and marked chlorosis. This results in a reduction in production. The growth of the main root may be paralyzed, causing a proliferation of lateral roots. In severe infections, the plant may die before the nematode completes its cycle. However, not all citrus species are susceptible to this nematode, so it is essential to identify the species and monitor it in the soil and roots.
Listing 2: Example of an adapted question from the Embrapa dataset, focusing on the symptoms and damages caused by the gall nematode in citrus plants.
Listing 2 illustrates an example of an adapted question from the Embrapa dataset, demonstrating the challenges of verifying the correctness of answers in this context. Since the dataset's questions are not multiple-choice and often have no single correct answer, assessing the accuracy of responses can be subjective. This highlights the importance of understanding the nuances of agricultural topics and the expertise required to evaluate the relevance and correctness of AI-generated answers. The Embrapa dataset, translated as "500 Questions 500 Answers," not only targets agronomists but also serves as a valuable resource for farmers. It encompasses a wide range of topics essential to both groups, contributing to the overall knowledge base and fostering sustainable agricultural practices. 3.3 AgriExams Questions

Which directorate/institute of ICAR coordinates/organizes Parthenium Awareness Week in India during the month of August each year?
a) ICAR-DWR, Jabalpur
b) ICAR-IARI, New Delhi
c) ICAR-CSSRI, Karnal
d) ICAR-IIFSR Modipuram
Answer: a) ICAR-DWR, Jabalpur
Listing 3: Adapted AgriExam question example, highlighting the multiple-choice style and focus on Indian agriculture.

The AgriExam dataset features a wide range of questions from various agriculture-related exams, primarily focusing on Indian agriculture (Exam, 2022). These questions are collected from different entrance exams, including those for prestigious institutions like JNKVV (Jawaharlal Nehru Krishi Vishwa Vidyalaya) and RVSKVV (Rajmata Vijayaraje Scindia Krishi Vishwa Vidyalaya). The questions are designed to assess candidates' knowledge and understanding of diverse agricultural topics, ranging from crop production and soil science to agricultural economics and extension, with a specific focus on India's farming practices and conditions. Listing 3 shows an example of a multiple-choice question style adapted from the AgriExam dataset. Note that this is not the exact question from the source but serves as a representation of the question style. The questions are often structured as multiple-choice or descriptive questions, requiring test-takers to demonstrate a deep understanding of the subject matter. In this paper, we evaluated only the AgriExam multiple-choice questions. Additionally, the questions may focus on the application of scientific principles to real-world agricultural challenges, particularly those relevant to Indian agriculture, thereby encouraging problem-solving and critical thinking skills. 4 Method Overview This section outlines the method that was employed to execute experiments using LLMs to answer agriculture-related questions (Figure 1); its six stages are described in turn below. Figure 1: Answering and Evaluation Scheme. 1.
Data Collection: The first step in the process involved collecting the data required for the experiments. This was done through manual extraction of questions and answers from various sources, such as books, articles, or online resources. Alternatively, custom scripts were developed to automate the data collection process, particularly when dealing with large datasets or multiple sources. 2. Data Handling: Once the data was collected, it needed to be transformed from text into structured data. This was done by converting the questions into either a multiple-choice format using JSON or an open-question format, depending on the nature of the questions and the desired output of the experiment. Structuring the data in this way allowed for easier integration with the LLMs and facilitated subsequent analysis of the results. 3. Data Cleaning: The next step was data cleaning, which involved removing questions that were not answerable without visual capabilities, such as questions based on images. To clean the data, both LLMs and manual inspection were used. This step ensured that the remaining questions were suitable for the LLMs and would yield meaningful results in the context of agriculture-related inquiries. 4. Prompting Creation: With the cleaned and structured data in hand, the next step was to create prompts for the LLMs to generate answers to the questions. The prompts were designed to include contextual information, such as background knowledge about the subject matter, to improve the accuracy and relevance of the generated responses. This step was crucial in guiding the LLMs to produce high-quality answers that addressed the specific needs of the agriculture-related questions. 5. LLM Call: In this step, the LLMs were called and could use RAG and/or ER, depending on the contextual data availability and the computational resource budget. These models generated responses to the prompts created in the previous step. 6. Checking Answers: Finally, after the LLMs generated answers to the questions, it was essential to check and evaluate the responses for accuracy and relevance. This step involved comparing the generated answers to the original answers provided by experts, as well as assessing the overall quality of the responses in terms of their applicability to the agriculture domain. In this step, LLMs were utilized to help check the correctness of answers, especially for open questions. 4.1 Prompting Strategy Figure 2: Prompting Scheme.

template_question = """
{preamble}
Answer the following question and provide the correct answer.
The question is a multiple-choice question with a unique correct answer.
{question}
Answer the question by providing the correct alternative (example: the correct option is b). Do not provide an empty answer.
"""
Listing 4: GPT question answering prompt.

In this research, we employ two distinct LLM agents to perform separate tasks. The first agent (Answer Agent) is responsible for answering the questions provided. The second agent (Evaluation Agent) is assigned the task of evaluating the correctness of the answers generated by the first agent; a minimal sketch of this loop follows.
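As a hedged illustration of the Orchestrator loop in Figure 2 (the function names and the generic `llm` callable are assumptions; the two templates are those of Listings 4 and 5):

```python
def answer_agent(llm, template_question, preamble, question):
    # Fill the Listing 4 template and ask the LLM for an answer.
    return llm(template_question.format(preamble=preamble, question=question))

def evaluation_agent(llm, template_evaluation, question, ground_truth, model_answer):
    # Fill the Listing 5 template; the prompt asks for a bare 0 or 1.
    prompt = template_evaluation.format(query=question, answer=ground_truth,
                                        result=model_answer)
    return int(llm(prompt).strip())

def orchestrate(llm, qa_pairs, template_question, template_evaluation, preamble=""):
    # The Orchestrator passes each raw question to the Answer Agent, then
    # forwards the question, model answer, and ground truth to the
    # Evaluation Agent, and aggregates the 0/1 scores into an accuracy.
    scores = [evaluation_agent(llm, template_evaluation, q, gt,
                               answer_agent(llm, template_question, preamble, q))
              for q, gt in qa_pairs]
    return sum(scores) / len(scores)
```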
This dual-agent setup allows for a more thorough examination of the AI models' performance, as one agent focuses on generating responses while the other ensures the accuracy and relevance of the answers produced. We also use the Orchestrator to manage the message passing between the models. Figure 2 shows the prompting process. Initially, the User submits raw questions to the Orchestrator, which then presents a raw question to the Answer Agent. The Answer Agent creates a question prompt for the language model, and the LLM processes the prompt, generating an answer sent back to the Orchestrator. Listing 4 shows the question prompt used by the Answer Agent and passed to the LLM. In the listing, the placeholder {preamble} is used to pass some contextual information (e.g., exam name and/or location) to the answering agent, while the placeholder {question} is replaced with the user's question. Next, the Orchestrator forwards the original question, the LLM's answer, and the ground-truth answer to the Evaluation Agent. Subsequently, the Evaluation Agent formulates an evaluation prompt and delivers it to the LLM. The evaluation prompt used by the Evaluation Agent is presented in Listing 5, where {query} is a placeholder for the question, {answer} is the ground-truth answer, and {result} is the answer provided by the Answer Agent. The LLM evaluates the prompt and returns a score (either 0 or 1) to the Orchestrator. Finally, the Orchestrator communicates the score to the User, concluding the process.

template_evaluation = """
You are an expert professor specialized in checking students' answers to questions.
You are checking the following question: {query}
Here is the real answer: {answer}
You are checking the following student's answer: {result}
What grade do you give, where 0 is incorrect and 1 is correct? Give me only 0 or 1 as response. If the student's answer is not related to the question, give me 0.
"""
Listing 5: GPT answer evaluation prompt.

4.2 Context Provisioning In this study, we used the Llama Index (Liu, 2022) to create context-based questions for LLMs when addressing questions related to agricultural applications, specifically for the USA and India datasets. As Embrapa's dataset is composed of open questions, the process of providing context is not applicable, as the answers to the questions would be directly presented in the background material. Recall that Embrapa's dataset is composed of direct questions and answers related to Brazilian agriculture. The Llama Index organized and indexed the same text material provided to certified agronomists when obtaining credits to renew their certificates. By incorporating the Llama Index, we aimed to improve the LLMs' performance on domain-specific tasks by aligning the context of a given question with the knowledge embedded in the models. The Llama Index organizes and indexes a large collection of domain-specific information, like agricultural texts, research articles, and expert opinions, into a structured and easily accessible format. This indexed information can then be used to create context-based questions for the LLMs, providing them with the necessary background knowledge to generate informed and accurate responses to questions in the agriculture domain.
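The paper relies on the Llama Index library for this step; as a schematic stand-in (deliberately not the library's actual API, and with `embed` an assumed text-embedding function), the retrieve-and-prepend pattern might look like this:

```python
import numpy as np

def build_context_index(passages, embed):
    # Embed the indexed agronomy material once; normalize rows so that a
    # dot product equals cosine similarity.
    vectors = np.stack([embed(p) for p in passages])
    vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
    return passages, vectors

def context_based_question(question, index, embed, k=3):
    # Fetch the k passages most similar to the question and prepend them,
    # giving the LLM the background knowledge it needs to answer.
    passages, vectors = index
    q = embed(question)
    q = q / np.linalg.norm(q)
    top = np.argsort(vectors @ q)[::-1][:k]
    context = "\n\n".join(passages[i] for i in top)
    return f"Use the following background material:\n{context}\n\nQuestion: {question}"
```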
To generate a context-based question using the Llama Index, we extract important contextual information from the identified resources, selecting key facts and concepts directly related to the questions. Finally, we craft context-based questions by combining the original questions with the extracted contextual information, ensuring that the GPT models have access to the necessary background knowledge for generating accurate responses. 4.3 Ensemble Refinement Method Ensemble Refinement (ER) is a prompting strategy proposed by Singhal et al. (2023) that builds on techniques like chain-of-thought and self-consistency (Wei et al., 2023; Wang et al., 2023), leveraging the idea of conditioning an LLM on its own generations before producing a final answer. ER is a two-stage process designed to improve the performance of the LLM when answering questions, particularly in the context of agriculture-related questions. In the first stage of ER, a (few-shot) chain-of-thought prompt and a question are given to the model. The model then generates multiple possible explanations and answers stochastically via temperature sampling. Each generation consists of an explanation and an answer for a question. In the second stage, the model is conditioned on the original prompt, the question, and the concatenated generations from the previous step. The model is then prompted to produce a refined explanation and answer. This can be viewed as a generalization of self-consistency, where the LLM aggregates answers from the first stage instead of conducting a simple vote. By considering the strengths and weaknesses of the generated explanations, the LLM can produce more refined answers. To further improve performance, the second stage is performed multiple times. The final answer is then determined by a plurality vote over the generated answers (a minimal sketch of this procedure appears below, after Table 1). In this paper, due to the resource cost associated with repeated samplings from a model, ER is only applied to multiple-choice evaluation. 5 Results This section presents results establishing a baseline for LLMs in agriculture, evaluating their performance on various datasets and question types. Our findings provide insights into LLM strengths and limitations, guiding future research and novel strategies for enhancing their capabilities. Table 1 presents the number of questions and the adopted preambles for each dataset. In our experiments, we used Azure OpenAI deployments for GPT-3.5 and GPT-4. Additionally, for inference, we employed Llama2 models, specifically 13B on a single Nvidia H100 and 70B on two Nvidia H100s.

Dataset | #Questions | Preamble
CCA Video-Based Questions | 89 | "The following question is related to agriculture and is part of the process of earning credits that are necessary for the renewal of the Certified Crop Adviser (CCA) certification. It is question related to the agricultural industry in the United States."
CCA Text-Based Questions | 312 | Same as previous preamble.
Embrapa | 18,955 | "The following question is related to Brazilian agriculture and is part of a series of questions and answers for Brazilian farmers distributed by Embrapa."
AgriExams | 1,723 | "The following question is related to Indian agriculture and is part of an exam to ingress in an important graduate program from an Indian university."
Table 1: Number of questions and preambles for each dataset (preamble wording reproduced verbatim, as used in the prompts).
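Returning to the Ensemble Refinement procedure of Section 4.3, here is a minimal sketch under stated assumptions: `llm(prompt, temperature)` is a generic callable returning an (explanation, answer) pair, and the 11/33 sampling counts match the experimental setup reported in Section 5.1 below.

```python
from collections import Counter

def ensemble_refinement(llm, cot_prompt, question, n_gen=11, n_refine=33, temp=0.7):
    # Stage 1: sample several explanation+answer generations stochastically.
    stage1 = [llm(f"{cot_prompt}\n{question}", temperature=temp)
              for _ in range(n_gen)]
    candidates = "\n".join(f"Explanation: {e}\nAnswer: {a}" for e, a in stage1)
    # Stage 2: condition on the original prompt, the question, and the
    # concatenated stage-1 generations; ask for a refined explanation and
    # answer, repeat several times, and take a plurality vote.
    refined = [llm(f"{cot_prompt}\n{question}\nCandidate generations:\n"
                   f"{candidates}\nProduce a refined explanation and answer.",
                   temperature=temp)
               for _ in range(n_refine)]
    return Counter(a for _, a in refined).most_common(1)[0][0]
```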
5.1 USA Dataset Results In our evaluation, we split the CCA-related questions into two categories: video-based questions and text-based questions. For the text-based questions, we assessed the performance of LLMs both with and without the use of RAG, ER, and a preamble. Following the original ER paper, we adopted 11 samplings for the first stage and 33 samplings for the second stage in all experiments. For video-based questions, we evaluated LLM performance without using retrieval techniques, as there is no textual information available in the videos to serve as context. This approach helps us understand how well LLMs can answer agriculture-related questions based solely on their pre-trained knowledge, without relying on external context provided by RAG techniques.

Scenario | ER | Preamble | Llama2-13B | Llama2-70B | GPT-3.5 | GPT-4
1 | | | 55% | 74% | 74% | 79%
2 | | ✓ | 58% | 72% | 74% | 83%
3 | ✓ | ✓ | - | - | 80% | 83%
Table 2: Video-based questions results.

The results presented in Table 2 demonstrate the performance of Llama2-13B, Llama2-70B, GPT-3.5, and GPT-4 on video-based questions under various conditions. In the baseline scenario (Scenario 1), GPT-4 exhibited the best performance, achieving a score of 79%, followed by GPT-3.5 and Llama2-70B at 74%, and Llama2-13B at 55%. When a preamble was introduced in Scenario 2, GPT-3.5's performance remained unchanged at 74%, while GPT-4's score increased to 83%, indicating the effectiveness of a preamble in boosting GPT-4's performance. In this scenario, Llama2-13B and Llama2-70B scored 58% and 72%, respectively. In our trials, GPT-3.5 and GPT-4 consistently exceeded the performance of Llama2-13B and Llama2-70B. Consequently, to reduce evaluation costs and focus our efforts on the most promising outcomes, we opted to employ ensemble refinement solely on GPT-3.5 and GPT-4 for all experiments in this paper. In Scenario 3, ER was applied to GPT-3.5 and GPT-4, and both models experienced improved scores, with GPT-3.5 achieving 80% and GPT-4 maintaining its 83% score. The findings suggest that GPT-4 consistently outperforms the other models across all scenarios, with further enhancements observed when incorporating a preamble or ER.

Scenario | RAG | ER | Preamble | Llama2-13B | Llama2-70B | GPT-3.5 | GPT-4
1 | | | | 47% | 55% | 64% | 79%
2 | | | ✓ | 48% | 60% | 66% | 82%
3 | | ✓ | ✓ | - | - | 71% | 84%
4 | ✓ | | | 70% | 81% | 82% | 93%
5 | ✓ | | ✓ | 71% | 81% | 88% | 93%
Table 3: Multiple-choice questions results (CCA-related exams, text-based).

The results from Table 3 demonstrate the performance of Llama2-13B, Llama2-70B, GPT-3.5, and GPT-4 in answering text-based multiple-choice questions from CCA-related exams under various scenarios. GPT-4 consistently outperforms all other models in every scenario, with a noticeable increase in performance when provided with assistance. For instance, when a preamble was introduced in Scenario 2, GPT-4's performance increased from 79% to 82%. The combination of ER and a preamble in Scenario 3 further enhanced GPT-4's score to 84%. Interestingly, the most significant improvement for all models was observed when employing the RAG (Retrieval-Augmented Generation) technique in Scenarios 4 and 5. GPT-4 achieved an impressive 93% score, while Llama2-13B and Llama2-70B both saw substantial improvements, reaching 70% and 81%, respectively.
GPT-3.5's performance also improved, reaching 82% in Scenario 4 and 88% in Scenario 5. While the contextual information provided in preambles is static, RAG retrieves information dynamically, allowing for better adaptation to the context of the questions and potentially resulting in more accurate responses. ER also improves results by harnessing hidden knowledge within LLMs using different temperatures for each GPT call, enabling the extraction of diverse and domain-specific insights for more accurate and contextually relevant answers. For some questions, LLMs give wrong answers within the context of agriculture. For instance, Listing 6 shows an example of how GPT-4 incorrectly answers a question.

Which biotic stress factors have received the least attention in terms of commercially released transgenes?
a) Insects
b) Diseases
c) Weeds
d) Mammals
Answer: b
GPT Answer: d) Mammals
Listing 6: Adapted example of an incorrect answer from the CCA dataset (GPT-4 with preamble and no RAG).

According to the previous results, both the RAG and ER methods can improve the quality of answers generated by LLMs. However, combining these methods to answer each question in the dataset increases the computational cost of each question-answering task. Therefore, to evaluate the impact of ER and RAG simultaneously on GPT-X, we sampled 100 multiple-choice questions and assessed the models' performance. We also used preambles for the sampled dataset in all scenarios.

Scenario | RAG | ER | GPT-3.5 | GPT-4
1 | | | 70% | 85%
2 | | ✓ | 70% | 92%
3 | ✓ | | 92% | 93%
4 | ✓ | ✓ | 92% | 97%
Table 4: RAG + ER evaluation (CCA-related exams, sampled text-based questions).

Table 4 shows the results of incorporating the Ensemble Refinement (ER) and Retrieval-Augmented Generation (RAG) methods on the sampled text-based CCA dataset. Our findings indicate that the combination of these techniques significantly improves the quality of answers generated by Large Language Models (LLMs). In particular, GPT-4 demonstrated the highest performance, reaching 97% accuracy when both RAG and ER were applied. Interestingly, while ER alone showed a substantial impact on GPT-4's performance, increasing accuracy from 85% to 92%, it did not have a noticeable effect on GPT-3.5's performance. In contrast, RAG alone greatly improved GPT-3.5's accuracy from 70% to 92%, while only slightly increasing GPT-4's accuracy from 85% to 93%. These results suggest that incorporating retrieval and ensemble refinement techniques can enhance LLMs' performance, with the combination of RAG and ER yielding the most significant improvements. While these techniques can significantly improve the quality of answers generated by LLMs, it is important to note that the combination of RAG and ER comes with increased computational costs for each question-answering task. ER, in particular, involves multiple samplings in its two-stage process, which can be resource-intensive and time-consuming. RAG, on the other hand, combines retrieval with generative models, which may require additional processing time and resources for accessing and integrating external knowledge. Despite the increased costs, the substantial improvements in model performance, as demonstrated by the results, may justify the use of these techniques in certain applications.
However, it is essential to carefully weigh the trade-offs between performance gains and resource requirements when deciding whether to employ RAG, ER, or both in a given context.

5.2 Embrapa Results

Table 5: Embrapa questions results.
Scenario | Preamble | GPT-3.5 | GPT-4
1        |          | 59%     | 72%
2        | ✓        | 78%     | 84%

Question: Some countries have used protectants known as safeners in sorghum cultivation. Can this technology be used to apply herbicides in sorghum crops?
Answer: Although the effect of seed protectants used to protect plants against the phytotoxic effect of herbicides is known, their use in sorghum is not allowed in Brazil, as it is not registered with MAPA.
GPT Answer: Yes, the use of safeners in sorghum crops can help protect the plants from potential damage caused by herbicides. Safeners are chemicals that selectively protect the crop from the harmful effects of herbicides, allowing for better weed control without harming the sorghum plants. This technology has been successfully used in some countries and can be a valuable tool for managing weeds in sorghum crops.
Listing 7: Adapted example of an incorrect answer from the Embrapa dataset (GPT-4 with preamble).

Table 5 highlights the performance of GPT-3.5 and GPT-4 on open-ended questions related to Brazilian agriculture. One challenge in assessing the correctness of the answers generated by the models for Embrapa questions lies in their open-ended nature, which allows for multiple valid responses. Moreover, the RAG technique was not applied in these experiments to ensure a fair comparison, given that the Embrapa dataset already contains question-answer pairs. General models like GPT-3.5 and GPT-4 may struggle to provide accurate responses tailored to the Brazilian context due to potential limitations in their training data concerning Brazil's unique agricultural conditions and practices (e.g., Listing 7). Despite these challenges, the results suggest that GPT-4 is more capable of understanding and answering open-ended questions related to Brazilian agriculture than GPT-3.5, even without using RAG and accounting for the dataset's specific characteristics. Additionally, when a preamble is added to the questions specifying that they relate to Brazilian agriculture, we see an improvement in accuracy for both models.

5.3 AgriExams Results

Table 6: AgriExams questions results.
Scenario | RAG | ER | Preamble | Llama2-13B | Llama2-70B | GPT-3.5 | GPT-4
1        |     |    |          | 47%        | 50%        | 54%     | 65%
2        |     |    | ✓        | 45%        | 48%        | 55%     | 66%
3        |     | ✓  | ✓        | –          | –          | 59%     | 70%
4        | ✓   |    |          | 48%        | 52%        | 68%     | 78%
5        | ✓   |    | ✓        | 48%        | 52%        | 68%     | 79%

The results from Table 6 show the performance of Llama2-13B, Llama2-70B, GPT-3.5, and GPT-4 in answering AgriExams questions under various scenarios. As observed in the previous experiments, GPT-4 consistently outperforms all other models in every scenario, with noticeable increases in performance when provided with different methods. For instance, when a preamble was introduced in Scenario 2, GPT-4's performance increased from 65% to 66%, while Llama2-13B's and Llama2-70B's scores slightly decreased. Llama2-13B and Llama2-70B may not be as effective at utilizing the contextual information provided by the preamble as GPT-3.5 and GPT-4. The latter models have been trained on larger datasets and possess more advanced architectures, which may enable them to better understand and integrate contextual information in their responses (a minimal sketch of the preamble prompt construction is shown below).
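# Sketch of preamble prompting; the preamble text is an illustrative
# assumption, not the exact wording used in our experiments.
PREAMBLE = (
    "You are an agronomist assistant. The following multiple-choice question "
    "concerns a specific regional agriculture context (e.g., Indian agriculture "
    "for AgriExams). Answer with the letter of the correct option."
)

def build_prompt(question: str, use_preamble: bool = True) -> str:
    # Scenario 1 corresponds to use_preamble=False, Scenario 2 to True.
    return f"{PREAMBLE}\n\n{question}" if use_preamble else question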
The combination of Ensemble Refinement (ER) and a preamble in Scenario 3 further enhanced GPT-4's score to 70%. Interestingly, the most significant improvement for GPT-3.5 and GPT-4 was observed when employing the RAG (Retrieval-Augmented Generation) technique in Scenarios 4 and 5, with GPT-4 achieving scores of 78% and 79%, respectively. Llama2-13B and Llama2-70B also experienced improvements in Scenarios 4 and 5, reaching scores of 48% and 52%, respectively. These findings underscore the importance of incorporating additional information and techniques, such as a preamble, ER, and RAG, to enhance the performance of large-scale language models in specialized question-answering tasks, such as those found in AgriExams. Considering that the AgriExams questions are tailored to India's agriculture, the models may face challenges when answering questions specific to the region. Additionally, the background material provided (https://www.agriexam.com/agriculture-pdf-notes) is not explicitly designed for these questions, unlike the USA dataset: it consists of general guidelines on Indian agriculture that may or may not help answer the questions, depending on how they are formulated. Listing 8 shows an example of a question that GPT-4 was unable to answer correctly.

Who is the advocate of the "Marthandam" Rural Development Programme?
a) Spencer Hatch b) Daniel Hamilton c) A.T. Mosher d) M.K. Gandhi
Answer: a
GPT Answer: d) M.K. Gandhi
Listing 8: Adapted example of an incorrect answer from the AgriExams dataset (GPT-4 without RAG and with preamble).

Just like the CCA-based text exams, we sampled 100 questions from the AgriExams dataset to evaluate GPT-X performance using ER and RAG (Table 7). In this case, we used the questions from the JNKVV & RVSKVV M.Sc Entrance exam (2022) (https://www.agriexam.com/jnkvv-rvskvv-m-sc-entrance-paper), which contains multiple-choice questions related to Indian agriculture, and focused only on GPT-X architectures. Besides the questions and answers, we also had access to the human results for this dataset (kvv, 2022). Figure 3 presents the human scores' cumulative distribution function (CDF) and the GPT results for versions 3.5 and 4, with and without RAG and ER. The results are also presented in Table 7 with the score, rank, and percentile for each scenario of the GPT-X evaluation. For each scenario, we compared the GPT-X results only to the available human scores. As in the previous results, GPT-4 consistently outperforms GPT-3.5 across all scenarios, with noticeable improvements when provided with additional methods. In particular, GPT-4 maintains the top rank when ER or RAG is introduced. Furthermore, the combination of RAG and ER in Scenario 4 led to the highest performance for both models, with GPT-3.5 reaching the 99.66th percentile and GPT-4 maintaining a top-1 rank.

Figure 3: GPT-X results compared to human scores for the JNKVV & RVSKVV M.Sc Entrance exam (2022); the plot shows the human scores' CDF (number of correct answers out of 100) together with the GPT-3.5 and GPT-4 results, with and without RAG and ER. Rank represents the hypothetical position of GPT-X on the exam. Percentile is the CDF percentile for the hypothetical GPT-X score.
Table 7: RAG + ER evaluation (JNKVV & RVSKVV M.Sc Entrance exam, 2022).
Scenario | RAG | ER | GPT-3.5 (Score / Rank / Percentile) | GPT-4 (Score / Rank / Percentile)
1        |     |    | 62% / 57 / 97.35%                   | 71% / 6 / 99.72%
2        |     | ✓  | 65% / 24 / 98.99%                   | 80% / 1 / 100%
3        | ✓   |    | 68% / 11 / 99.55%                   | 79% / 1 / 100%
4        | ✓   | ✓  | 69% / 8 / 99.66%                    | 83% / 1 / 100%

These findings underscore the value of utilizing multiple techniques, such as RAG and ER, to improve the performance of language models in domain-specific question-answering tasks like the JNKVV & RVSKVV M.Sc Entrance Exam (2022). In contrast, GPT-3.5 experienced more modest increases in accuracy across all scenarios. When both RAG and ER were applied, its accuracy reached 69%, only slightly higher than the 68% achieved when using RAG alone. These results highlight the potential benefits of employing RAG and ER techniques to enhance the performance of Large Language Models (LLMs) in specific contexts, such as the AgriExams sampled dataset. However, the effectiveness of each method may vary depending on the particular model and dataset being used.

6 Conclusion

This study aimed to establish a baseline for assessing the capabilities of large language models, such as Llama 2, GPT-3.5, and GPT-4, in addressing complex problems in agriculture. By evaluating their performance on various datasets and question types, the study provides valuable insights into the strengths and limitations of LLMs within the agricultural domain. It also highlights the importance of leveraging AI in a responsible and effective manner, addressing challenges related to accuracy, safety, and potential biases. The primary contributions of the paper include the establishment of performance baselines for LLMs on agriculture-related problems, enabling researchers and practitioners to compare their results with the current state of LLM performance on these problems. Furthermore, the exploration of the impact of the RAG and ER techniques on LLMs demonstrates their potential to improve the performance of LLMs on region-specific questions, although GPT-4 remains superior even without using RAG and ER. The results demonstrated that GPT-4 consistently outperforms other models across different datasets and question types, including video-based, text-based, and open-ended questions. This superior performance can be attributed to GPT-4's larger model size and more extensive training dataset, which enable it to capture complex patterns and relationships more effectively. The findings emphasize the potential of GPT-4, particularly when combined with retrieval techniques like RAG, in solving complex agriculture-related problems and answering questions accurately. By providing a starting point for further investigation, this study hopes to foster innovation and collaboration among researchers, ultimately leading to the effective application of LLMs in agriculture. This, in turn, can contribute to the development of more efficient and sustainable farming practices, addressing critical challenges in global food security and environmental conservation. As we move forward, it is crucial to continue refining our understanding of GPT-4's capabilities and explore richer prompting strategies, alignment techniques, and metacognitive capabilities. Additionally, future research should focus on mitigating the risks associated with erroneous generations and biases while maximizing the benefits of AI integration in agriculture.
Overall, this study serves as a stepping stone towards unlocking the potential of AI in agriculture, paving the way for innovative solutions, improved resource management, and more resilient agricultural systems capable of adapting to the challenges posed by climate change and growing global food demand." + }, + { + "url": "http://arxiv.org/abs/2404.10198v1", + "title": "How faithful are RAG models? Quantifying the tug-of-war between RAG and LLMs' internal prior", + "abstract": "Retrieval augmented generation (RAG) is often used to fix hallucinations and\nprovide up-to-date knowledge for large language models (LLMs). However, in\ncases when the LLM alone incorrectly answers a question, does providing the\ncorrect retrieved content always fix the error? Conversely, in cases where the\nretrieved content is incorrect, does the LLM know to ignore the wrong\ninformation, or does it recapitulate the error? To answer these questions, we\nsystematically analyze the tug-of-war between a LLM's internal knowledge (i.e.\nits prior) and the retrieved information in settings when they disagree. We\ntest GPT-4 and other LLMs on question-answering abilities across datasets with\nand without reference documents. As expected, providing the correct retrieved\ninformation fixes most model mistakes (94% accuracy). However, when the\nreference document is perturbed with increasing levels of wrong values, the LLM\nis more likely to recite the incorrect, modified information when its internal\nprior is weaker but is more resistant when its prior is stronger. Similarly, we\nalso find that the more the modified information deviates from the model's\nprior, the less likely the model is to prefer it. These results highlight an\nunderlying tension between a model's prior knowledge and the information\npresented in reference documents.", + "authors": "Kevin Wu, Eric Wu, James Zou", + "published": "2024-04-16", + "updated": "2024-04-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "Large language models (LLMs), though powerful, are prone to hallucination (Pal et al., 2023; Sun et al., 2024; Ahmad et al., 2023). Additionally, they are restricted to knowledge contained in their training corpus, and so are unable to answer queries about recent events or publicly restricted information. Retrieval augmented generation (RAG) is a commonly used framework that provides relevant retrieved content in the LLM prompt and can significantly improve model accuracy (Mao et al., 2020; Chen et al., 2024a; Lewis et al., 2020). Most commercial LLMs, like ChatGPT (OpenAI, 2023), Gemini (Gemini Team, 2023), and Perplexity.ai already employ some version of RAG in their Web interfaces. For example, ChatGPT employs a Bing search whereas Gemini accesses Google Search results. Though RAG has quickly become a default feature of user-facing LLM systems, most evaluations of LLM capabilities are still performed on the non-RAG counterparts (Zheng et al., 2023). This is problematic, as a model's default and RAG-enabled responses can drastically diverge depending on the quality and accuracy of the retrieved content. This problem is compounded when considering that web results constantly change, and can contain outdated, incorrect, or harmful information (Dash et al., 2023; Daws, 2020; Nastasi et al., 2023).
Thus, objective evaluations of RAG-enabled LLM behavior are as important as benchmarking their non-RAG counterparts, especially as RAG systems are increasingly relied upon to provide factual information in a myriad of domains.
∗Denotes equal contribution.

Figure 1: A schematic of generating modified documents for each dataset. A question is posed to the LLM with and without a reference document containing information relevant to the query. This document is then perturbed to contain modified information and given as context to the LLM. We then observe whether the LLM prefers the modified information or its own prior answer.

In this work, we aim to quantify the tension between LLMs' internal knowledge and the retrieved information presented in RAG settings. To tease apart these two competing forces, we query LLMs to answer questions and measure the token probabilities while introducing varying perturbations to reference documents. Our analyses reveal two key findings:
• The likelihood that the LLM adheres to the retrieved information presented in context (RAG preference rate) is inversely correlated with the model's confidence in its response without context (its prior probability).
• Similarly, LLMs increasingly revert to their priors when the original context is progressively modified with unrealistic values.
We find that these relationships hold across six different domain datasets spanning over 1,200 questions. We also find that the choice of prompting technique (e.g., strictly adhere, loosely adhere) can influence both the baseline and the strength of this relationship. These results highlight the inherent tension in LLMs between the model's pre-trained knowledge and the retrieved content provided in context. The issue of hallucination in LLMs has been explored in multiple contexts and models (Ji et al., 2023; Kaddour et al., 2023). As a response, RAG systems have been shown to reduce hallucination (Shuster et al., 2021; Kang et al., 2023). Previous works have explored automated RAG evaluation frameworks in various settings (Es et al., 2023a; Hoshi et al., 2023; Saad-Falcon et al., 2023a; Zhang et al., 2024). For example, some studies use LLMs to evaluate the faithfulness, answer relevance, and context relevance of RAG systems by using GPT-3.5 as an evaluator (Es et al., 2023b; Saad-Falcon et al., 2023b). In another study, the authors propose metrics such as noise robustness, negative rejection, information integration, and counterfactual robustness (Chen et al., 2024b). Multiple studies have shown that RAG can mislead LLMs in the presence of complex or misleading search results and that such models can still make mistakes even when given the correct response (Foulds et al., 2024; Shuster et al., 2021). In relation to understanding model priors, other works have used log probabilities to assess the LLM's confidence in responses (Mitchell et al., 2023; Zhao et al., 2024). However, so far there has not been a systematic exploration of a model's confidence (via logprobs) and the model's preference for RAG-provided information.

Figure 2: Across six QA datasets using GPT-4, we consistently observe an inverse relationship between the RAG preference rate (y-axis) and two characteristics (x-axes): 1. the model's prior response probability (lefthand plots), and 2. the amount of deviation from the prior (righthand plots).
RAG preference rate is defined as the proportion of responses that align with the information presented in the prompt as context. The model's prior response probability is computed from the average log probability of the response tokens queried without RAG. The left plot in each pair visualizes the prior probability (grouped into 10 bins) against the RAG preference rate, along with the best-fit trend line and slope. The right plot visualizes absolute deviation from the reference information (for numerical datasets (top), up to two log-fold changes (along with the trendline); for categorical datasets (bottom), a total of four modification categories) against RAG preference rate. Additionally, the upper and lower half percentiles are shown in the right plots to illustrate that lower probability prior responses have monotonically lower RAG preference rates than higher probability prior responses.", "main_content": "Our main analysis consists of evaluating the RAG question-answering capabilities of GPT-4 when introducing varying levels of perturbations on the RAG documents. For this study, our dataset consists of 1,294 total questions across 6 different domains. Wherever referenced, the GPT-4 model used is gpt-4-turbo-preview, accessed in March 2024. We additionally evaluate our dataset on two other models: GPT-3.5 (gpt-3.5-turbo-0125) and Mistral-7B (Mistral-7B-Instruct-v0.1). We chose these two LLMs as they are top-performing models that also allow access to the model's token probabilities (via the OpenAI and Huggingface APIs). All main figures and tables report results using GPT-4; analyses using GPT-3.5 and Mistral-7B are reported in the Appendix.

2.1 Dataset

We generate questions from six subject domains. To generate a large set of question-and-answer pairs, we extract a corpus of content webpages and then query GPT-4 to generate a question based on the text, along with the ground truth answer and the excerpt used to generate the question. For each dataset below, we provide the full prompts used to generate questions in the Appendix.

2.1.1 Drug Dosages

We initially randomly sampled 500 drug information pages from UpToDate.com, a medical reference website widely used by clinicians. To constrain the scope of questions, we specify in the prompt that the answer must be numerical and in milligrams. To filter out generated questions that did not meet the specified criteria (e.g., ambiguous question, incorrect units, etc.), we perform an additional quality control step, where we ask GPT-4 to verify that the generated question fulfills all criteria. After this step, we have 266 question-answer pairs.

2.1.2 Sports Statistics

We pulled Olympics records pages from Wikipedia.org across 9 sports: athletics, weightlifting, swimming, archery, track cycling, rowing, shooting, short track speed skating, and speed skating. Records are extracted in a table format, from which questions are generated for each record entry. In total, after filtering, we extracted 192 unique questions and answers.

2.1.3 News

Top headlines are pulled from the Associated Press RSS feed for dates ranging from 03/15/24 to 03/25/24. From an initial corpus of 1,486 news articles, we use GPT-4 to generate one question per article, instructing it to produce questions for which there is a clear numerical answer. We perform another GPT-4 quality control step, resulting in 249 unique question-answer pairs.
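The generation-plus-verification loop used across these datasets can be sketched as follows; the prompt wording, model string defaults, and JSON response convention are illustrative assumptions (the exact prompts are provided in the Appendix).

# Sketch of question generation with a GPT-4 quality-control pass; prompts
# and the JSON output convention are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

def generate_qa(page_text: str) -> dict | None:
    raw = ask(
        "From the text below, write one question with a clear numerical answer. "
        "Return JSON with keys question, answer, excerpt.\n\n" + page_text
    )
    qa = json.loads(raw)
    # Quality control: ask GPT-4 to verify the pair fulfills all criteria.
    verdict = ask(
        f"Question: {qa['question']}\nAnswer: {qa['answer']}\n"
        "Does this pair have a single unambiguous numerical answer? Reply yes or no."
    )
    return qa if verdict.strip().lower().startswith("yes") else None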
2.1.4 Dates, Names, and Cities

We begin with a random sample of 1,000 articles from Huggingface's Wikipedia dataset (20220301.en, (Foundation)). We use GPT-4 to generate questions related to each field (dates, names, and cities) and filter out responses where the excerpt is not exactly found in the context. To reduce ambiguity when matching ground-truth answers, we restrict the answers to fit certain formats. For dates, we require that the answer adheres to a four-digit year (YYYY). For names, we require a first and last name (e.g., George Washington). For cities, we remove any other identifiers (e.g., Seattle, not Seattle, WA). For each domain, among the remaining question-answer pairs that fit these criteria, we randomly sample 200 for our evaluation set.

2.2 Concordance

We measure concordance, or the agreement between the reference answer generated based on the article content, and the model's answer to the corresponding generated question. This is computed for the model's answers both with and without context.

2.3 Modifying the Retrieved Documents

We perform systematic perturbations on each question/answer pair (as visualized in Figure 1). In three datasets with numerical answers (Drug Dosages, Sports Records, Latest News), we produce ten modifications that act as multipliers on the original value: 0.1, 0.2, 0.4, 0.8, 1.2, 1.5, 2.0, 3.0, 5.0, 10.0. In the Wikipedia Years dataset, we perform ten absolute modifications in increments of 20 years for a range of [−100, 100]. For the Wikipedia Names and Locations datasets, the discrete categories required more hand-crafted levels of variation. For each, we performed three categorical perturbations via prompting: slight, significant, and comical. We provide the full prompts used in our study in the Appendix. For example, for a name like Bob Green, a slight modification implies a small tweak to another real name (Rob Greene), whereas a significant modification produces a similar but fictitious name (Bilgorn Grevalle), and a comical modification is an absurd variant (Blob Lawnface). For a city name like Miami, a slight modification changes the name to the most similar city (Fort Lauderdale), a significant modification produces a fictitious city name (Marisole), and a comical modification produces an absurd variant (Miameme). Because of differences in how each modified fact might appear in the retrieved text, we utilize GPT-4 to generate the perturbed excerpts for drug dosages and news. Each modified fact is replaced in the original retrieved text. Then, both the question and context are posed to GPT-4, from which the answers, along with the log probabilities of the output tokens, are collected.

2.4 RAG vs Model Prior Analyses

The main analysis we perform in this study is comparing the RAG preference of a model against its internal prior. The LLM is first queried with a question without context. This response and the average probability of the tokens (accessed via the log probs) are referred to as the model's prior response and the prior probability, respectively. The LLM is then queried again, this time with the retrieved content present in the prompt. The resulting response (the response with RAG) is then compared with the prior response: if the response is still the same as the prior response, then the model prefers its prior. On the other hand, if the model response aligns with the information present in the retrieved content, then the model prefers RAG.
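To make this concrete, the numeric perturbations and the prior-vs-RAG preference check can be sketched as follows; the answer normalization is an illustrative assumption.

# Sketch of the multiplier perturbations and the preference check.
MULTIPLIERS = [0.1, 0.2, 0.4, 0.8, 1.2, 1.5, 2.0, 3.0, 5.0, 10.0]

def perturb(true_value: float) -> list[float]:
    # Ten modified values to substitute into the retrieved document.
    return [true_value * m for m in MULTIPLIERS]

def normalize(answer: str) -> str:
    return answer.strip().lower()

def preference(prior_answer: str, rag_answer: str, context_value: str) -> str:
    if normalize(rag_answer) == normalize(prior_answer):
        return "prior"  # the model kept its original answer
    if normalize(rag_answer) == normalize(context_value):
        return "rag"    # the model adopted the (perturbed) context value
    return "other"      # neither the prior nor the context value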
For each dataset, the RAG preference rate is computed as the average across all RAG queries. The RAG preference rate is compared against two measurements: the prior probability and the deviation from the prior value. The former is computed by accessing the log probabilities from the OpenAI API call. As these are provided in log scale, we exponentiate them to produce linear probabilities when presenting the results. The latter is computed in several ways. For the Drug Dosages, Sports Statistics, and Latest News datasets, the absolute log fold change between the prior value and the modified value is computed; for the Wikipedia Dates dataset, the simple absolute year change is used; and for the Wikipedia Names and Locations datasets, each categorical change is presented in order of degree of modification.

2.4.1 Analyzing the Effects of Different Prompting Strategies

Additional analysis is performed on the prompting technique itself: for the examples above, we use a standard prompt template that is based on RAG prompts used in popular open-source LLM libraries with over 800k downloads as of March 2024 (LangChain and LlamaIndex). In addition to this template (called Standard), we introduce two more prompt modifications: Strict, which strongly enforces literal adherence to the retrieved context, and Loose, which encourages the model to reason over the retrieved context before responding.

3 Results

3.1 Concordance

In Table 1, we observe that the model's prior response agreed with the reference answer only 34.7% of the time on average. However, the RAG answers elevated the concordance to 94%. This result demonstrates that the RAG pipeline established in this work is highly effective at encouraging the model to adhere to its retrieved content. However, in the minority of cases where providing the retrieved content fails to correct the LLM, we find that the model simply responds with its original prior answer about 20% of the time.

Figure 3: Examples from three datasets demonstrating differential LLM responses across various types of context modifications. Responses in red indicate wrong responses (different than the answer); responses in green indicate correct responses.

3.2 RAG Preference Rate vs. Prior Probability

In Figure 2 (left side plots), we observe a consistent negative relationship between the token probability of the model's prior answer and the associated RAG preference rate for all six QA datasets. To visualize an even distribution across probabilities, we bin the probabilities into ten equidistant bins in the range of [0.0, 1.0]. We additionally present the slope from performing a linear regression on the binned probability values against the RAG preference rate in Table 1. The slope indicates the effect of stronger model confidence on the model's preference for the information presented in the retrieved context; we observe different slopes (ranging from -0.1 to -0.45), suggesting that the effectiveness of RAG in different QA domains can be characterized as being relatively susceptible (e.g., with Dates questions) or robust (e.g., with News questions) to the model's internal prior knowledge confidence. Specifically, a slope of -0.45, for instance, can be interpreted as expecting a 4.5% decrease in the likelihood of the LLM preferring the contextual information for every 10% increase in the probability of the model's prior response.
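The computation behind these numbers can be sketched as follows; the availability of per-token log probabilities (as in the OpenAI API) is assumed, and the binning mirrors the ten equidistant bins described above.

# Sketch of prior probability and the binned trend-line slope.
import numpy as np

def prior_probability(token_logprobs: list[float]) -> float:
    # Exponentiate the mean token log probability to get a linear-scale value.
    return float(np.exp(np.mean(token_logprobs)))

def binned_slope(prior_probs: np.ndarray, prefers_rag: np.ndarray) -> float:
    # Bin priors into 10 equidistant bins on [0, 1] and regress the per-bin
    # RAG preference rate on the bin centers.
    edges = np.linspace(0.0, 1.0, 11)
    idx = np.clip(np.digitize(prior_probs, edges) - 1, 0, 9)
    centers, rates = [], []
    for b in range(10):
        mask = idx == b
        if mask.any():
            centers.append((edges[b] + edges[b + 1]) / 2)
            rates.append(prefers_rag[mask].mean())
    slope, _ = np.polyfit(centers, rates, 1)
    return float(slope)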
3.2.1 RAG Preference Rate vs Deviation from Prior

We also consider the degree of deviation between the model's prior response and the value contained in the retrieved context (Figure 2, right side plots). A similar pattern emerges in this analysis: as the RAG value diverges from the model's prior, the model is less likely to adopt the RAG value over its own initial response. We additionally plot data split into the upper and lower half percentiles and observe that, across all six datasets, the lower probability prior responses have monotonically lower RAG preference rates than the higher probability prior responses. Thus, the correlation between deviation and RAG preference rate holds across both bands of probabilities.

3.2.2 Effect of prompting technique on RAG adherence

To assess the degree of influence that the specific prompting technique has on RAG adherence, we test two additional prompts ("strict" and "loose") on GPT-4. The strict prompt is intended to coerce the model to disregard its own prior response, while the loose prompt is intended for the model to arbitrate between its own prior and the contextual information provided.

Figure 4: Effect of different prompts using GPT-4 on RAG preference rate vs prior probability. The "Strict" prompt strongly enforces literal adherence to the retrieved context, while the "Loose" prompt encourages the model to make a reasonable judgment in light of the provided context. We observe lower and steeper drops in RAG adherence with the loose vs strict prompts, suggesting that prompt wording plays a significant factor in controlling RAG adherence. Full prompts are provided in the Appendix.

In Figure 4, the strict prompt has uniformly higher RAG adherence than the standard prompt. The loose prompt, on the other hand, results in much lower RAG adherence rates as prior probability increases. Interestingly, the slope is also steeper, indicating a larger per-unit decrease in RAG preference as the prior probability increases. The choice of prompt is thus an important mechanism for influencing the LLM's RAG preferences.

3.2.3 Differences in effects between GPT-4, GPT-3.5, and Mistral-7B

We report the same analyses when using GPT-3.5 and Mistral-7B in Table 2 and Figure 5. We observe significantly lower performance both in concordance of the prior and with RAG. However, as seen in Figure 5, we nonetheless observe the same inverse trends in these two models as seen in the results with GPT-4. Of note, some datasets (like Latest News) perform poorly without RAG (the model refused the vast majority of queries or provided invalid responses), and thus the prior token probabilities could not be analyzed. In the Mistral-7B results, we also observe that the model could not consistently provide valid responses even when using RAG.

4 Discussion

While RAG is becoming standard practice in commercially available LLMs, the reliability of such systems is still understudied. Our experiments uncover several mechanisms that modulate the degree to which LLMs adhere to RAG systems.
Specifically, we quantify a tug-of-war between the strength of the model's prior and the rate at which the model adheres to the RAG document's facts. This effect is at odds with claims that RAG alone can fix hallucinations, and occurs even when the model is prompted to adhere to RAG documents strictly.

Table 1: Concordance between the GPT response and the reference values for each dataset. Prior refers to GPT-4 responses without context, and "w/ RAG" refers to responses with the relevant retrieved context included in the prompt. Additionally, we include the slope of the relationship between prior probability and RAG preference rate. For instance, the average slope is -0.23, which means that for every 10% increase in the probability of the prior token, we observe a 2.3% decreased likelihood of RAG preference.
GPT-4               | Concordance (Prior) | Concordance (w/ RAG) | Slope
Drug Dosage         | 0.554               | 0.884                | -0.26
Sports Stats        | 0.240               | 0.943                | -0.18
Latest News         | 0.133               | 0.936                | -0.10
Wikipedia Dates     | 0.433               | 0.995                | -0.45
Wikipedia Names     | 0.350               | 0.965                | -0.13
Wikipedia Locations | 0.375               | 0.920                | -0.28
Average             | 0.347               | 0.940                | -0.23

RAG systems have a unique appeal over traditional search engines in that they can incorporate prior knowledge to fill in the gaps and extrapolate the retrieved information. We find that this comes with trade-offs – namely, that such priors can override information provided in documents. When perturbing RAG documents over a wide interval of values, the points at which models revert to their prior responses, or "tipping points", are latent and heterogeneous across different models and domains. While strong priors are not inherently problematic (and can often serve to safeguard models), the lack of explicit expectations around how models will mix reference documents with their priors can lead to downstream issues. For example, if RAG systems are used to extract nested financial data to be used in an algorithm, what will happen if there is a typo in the financial documents? Will the model notice the error and if so, what data will it provide in its place? Given that LLMs are soon to be widely deployed in many domains including medicine and law, users and developers alike should be cognizant of their unintended effects, especially if users have preconceptions that RAG-enabled systems are, by nature, always truthful. There are several key limitations in our analyses. First, RAG systems can be deployed to many more domains than can be covered by our analyses. However, we hope that our study across six domains paints an initial picture of the nature of RAG systems. Second, to make our experiments tractable, our question-generation process is strictly fact-based and does not require multi-step logic, document synthesis, or other higher-level reasoning. Third, the perturbations we produce are based on our priors for what would constitute a reasonable or unreasonable range of values. In a natural setting, we would imagine more discrete types of errors (e.g., typos, ambiguities, missing information, etc.) which are harder to simulate. We also perform evaluations on GPT-3.5 and GPT-4 because the OpenAI API allows for access to token-wise log probabilities along with the responses. As a consequence, we are limited against performing more comprehensive evaluations on models such as Gemini and Claude because the APIs for these models do not provide access to such information. LLMs are now commonly used as parts of larger, more complex systems.
It is crucial to understand how these models interact with information with varying degrees of trustworthiness, accuracy, and uniformity. Our analysis shows that further work is required to characterize the risks of using LLMs to answer questions given contextual information. In particular, we find that model behavior can be erratic and unpredictable when presented with information that exists at the margin of its prior beliefs." + }, + { + "url": "http://arxiv.org/abs/2401.07883v1", + "title": "The Chronicles of RAG: The Retriever, the Chunk and the Generator", + "abstract": "Retrieval Augmented Generation (RAG) has become one of the most popular\nparadigms for enabling LLMs to access external data, and also as a mechanism\nfor grounding to mitigate against hallucinations. When implementing RAG you can\nface several challenges like effective integration of retrieval models,\nefficient representation learning, data diversity, computational efficiency\noptimization, evaluation, and quality of text generation. Given all these\nchallenges, every day a new technique to improve RAG appears, making it\nunfeasible to experiment with all combinations for your problem. In this\ncontext, this paper presents good practices to implement, optimize, and\nevaluate RAG for the Brazilian Portuguese language, focusing on the\nestablishment of a simple pipeline for inference and experiments. We explored a\ndiverse set of methods to answer questions about the first Harry Potter book.\nTo generate the answers we used the OpenAI's gpt-4, gpt-4-1106-preview,\ngpt-3.5-turbo-1106, and Google's Gemini Pro. Focusing on the quality of the\nretriever, our approach achieved an improvement of MRR@10 by 35.4% compared to\nthe baseline. When optimizing the input size in the application, we observed\nthat it is possible to further enhance it by 2.4%. Finally, we present the\ncomplete architecture of the RAG with our recommendations. As a result, we moved\nfrom a baseline of 57.88% to a maximum relative score of 98.61%.", + "authors": "Paulo Finardi, Leonardo Avila, Rodrigo Castaldoni, Pedro Gengo, Celio Larcher, Marcos Piau, Pablo Costa, Vinicius Caridá", + "published": "2024-01-15", + "updated": "2024-01-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.IR" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "The rise of Large Language Models (LLMs) has changed the way we approach Artificial Intelligence (AI) applications. Their ability to answer different user queries in different domains allows these models to show notable performance in a wide range of tasks such as translation, summarization, question answering, and many others [1]. However, many open challenges remain for problems that require answers based on up-to-date information or external data that were not available in the training data. In order to overcome this challenge, a technique called Retrieval Augmented Generation (RAG) [2] was developed. This approach addresses the need for external data by fetching relevant information and incorporating it into the prompt. With this, the model can generate more cohesive answers about subjects and data not seen during training, decreasing the occurrence of hallucinations [3].
Nevertheless, this approach adds a new layer of challenges since it requires the development of a trustworthy retriever pipeline, given that the quality of the final answer can be highly affected if the retrieved text is not relevant to the user query [4]. The landscape of RAG is rapidly expanding, with a constant influx of new papers introducing diverse implementations [5]. Each of these variants proposes technical modifications or enhancements, such as different retrieval mechanisms, augmentation techniques, or fine-tuning methodologies. This proliferation, while a testament to the field's dynamism, presents a substantial challenge for AI practitioners. The task of methodically experimenting with, and critically evaluating, each variant's performance, scalability, and applicability becomes increasingly complex. In this paper, we present a comprehensive series of experiments focused on the application of RAG specifically tailored for Brazilian Portuguese. Our research delves into evaluating various retrieval techniques, including both sparse and dense retrievers. Additionally, we explore two chunking strategies (naive and sentence window) to optimize the integration of retrieved information into the generation process. We also investigate the impact of the positioning of documents within the prompt, analyzing how this influences the overall quality and relevance of the generated content. Finally, our experiments extend to comparing the performance of different LLMs, notably GPT-4 and Gemini, in their ability to effectively incorporate the retrieved information and produce coherent, contextually accurate responses. This paper aims to provide valuable insights and practical guidelines for implementing RAG in Brazilian Portuguese. Our main contributions are summarized as follows: 1) we propose a methodology to prepare a dataset in a format that allows quantifying the quality of the different steps in a RAG system; 2) we propose a metric (maximum relative score) that allows us to directly quantify the gap between each approach and a perfect RAG system; 3) we discuss and compare different implementations, showing good practices and optimizations that can be used when developing a RAG system.", "main_content": "The chosen dataset was the first Harry Potter book in its Brazilian Portuguese version. This choice is motivated by the fact that it is a well-known book, and both the Gemini Pro and OpenAI models can answer general questions on the subject. Additionally, applying the standard ChatGPT tokenizer (cl100k_base), we observed approximately 140,000 tokens in total, allowing the creation of prompts containing the entire book. Next, a dataset consisting of questions and corresponding answers was developed, with both question and answer generated by the gpt-4 model and based on a reference chunk.

Figure 1: From a large document (book), chunks were created, and for each chunk, a question and an answer were generated using gpt-4, where the answer is contained within the chunk.

Figure 1 shows the data preparation process. Initially, the book was broken into chunks of 1000 tokens each, without overlapping, resulting in 140 chunks.
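A minimal sketch of this chunking step with the cl100k_base tokenizer (via tiktoken) is shown below; function and variable names are illustrative.

# 1000-token, non-overlapping chunking with the cl100k_base tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def chunk_book(text: str, chunk_size: int = 1000) -> list[str]:
    tokens = enc.encode(text)
    return [
        enc.decode(tokens[i : i + chunk_size])
        for i in range(0, len(tokens), chunk_size)
    ]

# ~140,000 tokens / 1,000 tokens per chunk -> 140 chunks for the book.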
Then, using the prompt described in Appendix A, a {question, answer} pair was created for each chunk in the style of the SQuAD dataset [6], meaning that the answer to the question is present within the reference text (chunk).

3 How to Evaluate

The contextual comparison of two text samples is not a straightforward task. For instance, although sentence1 = "Brazil has won 5 FIFA World Cup titles." and sentence2 = "Brazil is the five-time champion of the FIFA World Cup." (both sentences translated into English for convenience) convey the same meaning, traditional metrics such as the BLEU [7] and ROUGE [8] scores may not be able to capture such similarity. Specifically, for the example cited:
• BLEU score [sentence1, sentence2] = 0.33
• ROUGE score [sentence1, sentence2] = 0.22
Therefore, an approach widely used in the literature is to employ gpt-4 to provide a score based on a given prompt, a concept similar to what was done in the G-Eval work [9]. In this work, a scoring system divided into 5 categories to compare two texts was devised, with scores defined as follows (translated into English for convenience):
• score 1: The answer has no relation to the reference.
• score 3: The answer has little relevance but is not aligned with the reference.
• score 5: The answer has moderate relevance but contains inaccuracies.
• score 7: The answer is aligned with the reference but has minor omissions.
• score 10: The answer is completely accurate and aligns perfectly with the reference.
The prompt used in the evaluation is shown in Appendix A. Our approach uses a one-shot technique for each scoring category. Although we believe that the evaluation could become more robust and deterministic with the addition of more few-shot examples for each scoring category, these possible variations were not explored in this work.

3.1 Relative Maximum Score

In order to assess performance variation for the following experiments, we created a metric called the relative maximum score, which corresponds to the score given by a model when evaluating the correct combination of question and chunk for all pairs of a given dataset. Through this approach, it is possible to obtain the maximum score that an evaluated LLM could reach for a RAG system. Table 1 presents results for the custom dataset created in Section 2, using different LLMs to generate the answers and the gpt-4 scoring system previously defined.

Table 1: Relative maximum score on the 140 questions of the created Harry Potter QA dataset.
Model              | Relative Maximum
gpt-4              | 7.55
gpt-4-1106-preview | 7.32
gpt-3.5-turbo-1106 | 7.35
Gemini Pro         | 7.52

Despite configuring all the seeds and reusing the prompts, the relative maximum on our data was approximately 7.4 points on average. This shows that, even with a perfect retriever strategy, the RAG system is not able to achieve a perfect score on this dataset using these LLMs. From now on, all experiments in this study are assessed in terms of both the relative maximum and the percentage degradation with respect to the relative maximum, as defined in Equation 1.

degradation score = 1 − (experiment score / relative maximum)    (1)

With this score, we are able to address the problems regarding the retriever system itself, instead of having only a vague idea of where the main gap in our pipeline is.

4 Introductory Experiments

In this section, we will establish a baseline for the metrics defined in Section 3.
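For reference, the two quantities defined in Section 3 can be computed as in the minimal sketch below, where the scores are the 1-10 judgments assigned by gpt-4 to each of the 140 questions.

# Sketch of the relative maximum and the degradation score (Equation 1).
def average_score(scores: list[float]) -> float:
    return sum(scores) / len(scores)

def degradation(experiment_scores: list[float], relative_max: float) -> float:
    # degradation score = 1 - experiment score / relative maximum
    return 1.0 - average_score(experiment_scores) / relative_max

# Example: a gpt-4 average of 5.35 against its relative maximum of 7.55
# yields 1 - 5.35/7.55 ≈ 0.291, i.e., the -29.1% reported in Table 2.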
Additionally, we will apply techniques of lower complexity and compare the results with the baseline. It is worth noting that we did not explore prompt engineering techniques, although we are aware that prompt engineering has a direct impact on performance, as demonstrated in [10].

4.1 Baseline: no context

We are aware that LLMs are trained on a massive dataset that covers virtually the entire web. This circumstance, coupled with the popularity of the Harry Potter universe, forms a robust hypothesis for testing questions in isolation on OpenAI models. Basic questions such as "Who is Harry Potter?", "Who killed Dumbledore?", and "Who are Harry Potter's main friends?" were answered correctly and precisely by ChatGPT. However, we observed that when dealing with more detailed questions, the performance was only reasonable. Below are two examples of detailed questions (translated into English for convenience):
• Query: What was Harry's strategy to stop Quirrell from focusing on the mirror and what did he try to do to discover the location of the Philosopher's Stone?
– Answer: What occurred to Harry was to keep Quirrell talking to stop him from focusing on the mirror.
• Query: What model of broom did Harry Potter receive and who mentioned the special circumstances to Professor Flitwick?
– Answer: The model of the broom that Harry Potter received is a Nimbus 2000, and it was Professor Minerva who mentioned the special circumstances to Professor Flitwick.
Table 2 shows the baseline results obtained using some known LLMs on the 140 questions built as described in Section 2 and evaluated as described in Section 3. For this task, no retrieved context was used, only the question.

Table 2: Performance of the external knowledge (no context) experiment.
Model              | Average Score | Degradation
gpt-4              | 5.35          | -29.1%
gpt-4-1106-preview | 5.06          | -30.9%
gpt-3.5-turbo-1106 | 4.91          | -32.8%
Gemini Pro         | 3.81          | -50.8%

4.2 Long Context

In comparison to the GPT-1 and GPT-2 models [11,12], which handle up to 1024 input tokens, the gpt-4-1106-preview model stands out for its remarkable ability to process up to 128k input tokens. This represents an approximately 128-fold increase in input capacity over just four years of model development. The specific architecture of gpt-4 has not been disclosed, but it is believed that this model has not been pre-trained with a 128k token input context [13]. Perhaps a post-pre-training technique could have been used, which would have made it possible to expand the number of input tokens [14,15]. However, it is essential to note that such a technique may show degradation as the expansion limit is reached [16]. Similar to Recurrent Neural Networks (RNNs), which theoretically have an infinite context, disregarding performance limitations and vanishing gradients, we are interested in evaluating the performance of gpt-4-1106-preview over its 128k tokens. To assess the impact of the gpt-4-1106-preview full context capacity on the model's response, we reproduce an analysis similar to "Lost in The Middle" [17] on our dataset. This analysis explores the model output for a given question while changing the position of the answer throughout the prompt. To conduct this experiment, the depth of the chunk containing the answer to the question was altered in increments of 10% of the total number of tokens in the context's prompt. Thus, on the y-axis, there are 11 variations of answer depth, represented by 0%, 10%, 20%, ..., 90%, 100%, and the x-axis represents the quantity of tokens used as input in the context, as shown in Figure 2. The colors represent the experiment score, where the greener the better.

Figure 2: Performance of gpt-4-1106-preview on the Harry Potter dataset; x-axis: spaced at every 1,000 tokens of input from the document; y-axis: the depth at which the answer is located in the document. The greener the better. Image based on Gregory repository [18].
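The construction of each cell of this analysis can be sketched as follows (chunking as in Section 2; names are illustrative), matching the worked example right after:

# Sketch of assembling a context of total_chunks chunks with the answer
# chunk at a given depth (fraction of the prompt); names are illustrative.
def build_context(chunks: list[str], answer_chunk: str, depth: float,
                  total_chunks: int) -> str:
    # Number of distractor chunks placed before the answer chunk.
    before = int(depth * (total_chunks - 1))
    fillers = [c for c in chunks if c != answer_chunk][: total_chunks - 1]
    ordered = fillers[:before] + [answer_chunk] + fillers[before:]
    return "\n".join(ordered)

# depth=0.4 with total_chunks=100 places 39 chunks before the answer chunk,
# matching the (x = 100,000 tokens, y = 40%) example below.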
Thus, on the y-axis, there are 11 variations of answer depth, represented by 0%, 10%, 20%, . . . , 90%, 100%, and the x-axis represents the quantity of tokens used as input in the context, as shown in Figure 2. The colors represents the experiment score, where the greener the better. Figure 2: Performance of gpt-4-1106-preview on the Harry Potter dataset, x-axis: spaced at every 1, 000 tokens of input from the document, y-axis: represents the depth at which the answer is located in the document. The greener the better. Image based on Gregory repository [18]. The Chronicles of RAG: The Retriever, the Chunk and the Generator PREPRINT For instance, for (x = 100, 000, y = 40%), there are (39 chunks, followed by the chunk containing the answer, then by the remaining 60 chunks, making up the 100, 000 tokens in the input context. Based on Figure 2, we can also see when increasing the input length, we see a strong degradation in the score. Besides that, the Figure 3 shows that answers located in the interval of (40%, 80%) exhibit the worst performance, as documented in the article \"Lost In The Middle\" [17]. Figure 3: Average performance analysis of gpt-4-1006-preview using 128k tokens context per answer depth. 4.3 RAG Naive Initially, a straightforward approach for RAG will be done using the llama-index [19], employing all default hyperparameters and using chunk retrieval by cosine similarity with the ADA-002 embedding. Figure 4 depicts the basic diagram of how the problem is addressed. Figure 4: 1. Pass the query to the embedding model to represent its semantics as an embedded query vector; 2. Transfer the embedded query vector to vector database or sparse index (BM25); 3. Fetch the top-k relevant chunks, determined by retriever algorithm; 4. Forward the query text and the chunks retrieved to Large Language Model (LLM); 5. Use the LLM to produce a response based on the prompt filled by the retrieved content. The Table 3 shows the average and degradation metrics for this approach using 2 retrieved chunks. Table 3: Performance of the RAG naive. Model Average Score Degradation gpt-4 6.04 -20% gpt-4-1106-preview 5.74 -21.6% gpt-3.5-turbo-1106 5.80 -21.0% The Chronicles of RAG: The Retriever, the Chunk and the Generator PREPRINT 5 Advanced Experiments The studies and experiments outlined in section 4 have shown unsatisfactory performance, marked by a degradation of at least 20% compared to the peak relative performance. Therefore, in this section, we explore various retrieval approaches for the RAG, recognizing that the quality of the retriever is a crucial factor in enhancing performance for this type of problem. We conducted an evaluation covering both sparse and dense search, a hybrid method, and even a multi-stage architecture using a reranker. In pursuit of code debugging flexibility and easier customization at each stage, we chose not to utilize an RAG framework (like LangChain or Llama-Index). For a comprehensive guide on debugging RAG and more details about retrieval systems, refer to [20] and [21]. 5.1 Retrievers When deploying retrieval systems, it is essential to achieve a balance between \u201ceffectiveness\u201d (How good are the results returned?) and \u201cefficiency\u201d (How much time it takes to return the results? or How much resources are used in terms of disk/RAM/GPU?). This balance ensures that latency, result quality, and computational budget remain within our application\u2019s required limits. 
This work focuses exclusively on effectiveness measures to quantify the quality of the retriever methods. In our retriever experiments, the evaluation strategy centers around assessing how well the retriever performs in retrieving relevant information for each given query q_i. To achieve this, we employ the concept of recall, defined as the fraction of the relevant documents for a given query q_i that are successfully retrieved in a ranked list R [21]. This metric is based on binary relevance judgments, assuming that documents are either relevant or not [21]. In this paper, each chunk is considered a document, and only the respective chunk d_i is considered relevant to the query q_i. While recall is easy to interpret, it does not consider the specific rank positions at which the relevant chunk appears in R. To overcome this limitation, we introduce Reciprocal Rank (RR) into our analysis. In this metric, the rank of the first document relevant to the query in R is used to compute the RR score [21]. Therefore, Reciprocal Rank offers a more nuanced evaluation by assigning a higher value when the relevant chunk is returned in the early positions of the ranked list for the respective query. Recall and Reciprocal Rank are evaluated at a specific cutoff, so the measures are presented as R@k and MRR@k. Each query's results are evaluated, and their mean serves as an aggregate measure of the effectiveness of a given retriever method. The retrievers are introduced below. In the category of sparse retrievers, we emphasize BM25, a technique grounded in statistical weighting that assesses the relevance between search terms and documents. BM25 employs a scoring function that takes into account term frequency and document length, offering an efficient approach for retrieving pertinent information, and it is typically used as a strong baseline. However, it is exact-match based and can be powerless when a query and a document are relevant to each other but share no common words. On the other hand, when exploring dense retrievers, we often encounter approaches based on the so-called bi-encoder design [22]. The bi-encoder independently encodes queries and documents, creating separate vector representations before calculating similarity. An advantage of this approach is that it can be initialized "offline": document embeddings can be precomputed, leaving only the query embedding to be calculated at search time, reducing latency. The hybrid search technique aims to leverage the best of both sparse and dense search approaches. Given a question, both searches are conducted in parallel, generating two lists of candidate documents to answer it. The challenge then lies in combining the two results in the best possible way, ensuring that the final hybrid list surpasses the individual searches. Essentially, we can conceptualize it as a voting system, where each searcher casts a vote on the relevance of a document to a given query, and in the end, the opinions are combined to produce a better result. The multi-stage search architecture is based on the retrieve-and-rerank pipeline. In the first stage, a retriever with good recall is typically used to perform an initial filtering of the documents to be returned. From this narrowed-down list, the candidate documents are then sent to a second stage, which involves higher computational complexity, to rerank them and enhance the final effectiveness of the system. Next, we provide more details about each retriever used.
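Before turning to the individual retrievers, these effectiveness measures can be sketched for our setting, where exactly one chunk is relevant per query:

# R@k and RR@k with exactly one relevant chunk per query; averaging the
# per-query values over all queries yields R@k and MRR@k, respectively.
def recall_at_k(ranked_ids: list[int], relevant_id: int, k: int) -> float:
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def rr_at_k(ranked_ids: list[int], relevant_id: int, k: int) -> float:
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0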
5.1.1 BM25

Due to the user-friendly nature of BM25, its inclusion as a retriever method is always a welcome addition in RAG evaluations. A study that aligns with the same reasoning, albeit for a different application, can be found in [23], which illustrates the benefits of employing this algorithm to establish a robust baseline. Given that our data shares similarities with the SQuAD dataset, it is expected that many words from the query would be present in the chunk, contributing to the favorable effectiveness of BM25. In the BM25 ranking function, k1 and b are parameters shaping term saturation and document length normalization, respectively. The BM25 formula integrates these parameters to score the relevance of a document to a query, offering flexibility in adjusting k1 and b to improve effectiveness in different retrieval scenarios. Effectiveness is also influenced by the chosen BM25 implementation. Pyserini's BM25 implementation incorporates an analyzer with preprocessing steps such as stemming and language-specific stop word removal. For the sake of comparison, we included results obtained using rank-bm25 [24], a basic implementation without preprocessing that is widely used in Python and integrated into libraries like LangChain and Llama-index. The results can be seen in Table 4. In this work, the Pyserini BM25 implementation [25] was used in all experiments, considering k1=0.82 and b=0.68.

Table 4: Comparison between BM25 packages using k1=0.82 and b=0.68.
Recall@k | rank-bm25 | Pyserini BM25 | Pyserini Gain (%)
3        | 0.735     | 0.914         | 24.3
5        | 0.814     | 0.971         | 19.2
7        | 0.857     | 0.985         | 14.9
9        | 0.878     | 0.985         | 12.1

5.1.2 ADA-002

OpenAI does not disclose extensive details about the ADA-002 architecture; however, we employed this model for retrieval in the bi-encoder design presented above (Figure 5): vector representations were constructed for all available chunks, and for each input query, its embedding was computed at search time. Subsequently, the similarity between the question and chunk was assessed using cosine similarity.

Figure 5: Bi-Encoder Architecture

Since we have no further details about ADA-002, we will refer to this approach simply as the dense retriever.

5.1.3 Custom ADA-002

The Custom ADA-002 approach was also utilized in the dense retriever configuration presented in Section 5.1.2. However, embedding customization played a key role in our attempt to enhance the overall representation. Embedding customization is not limited solely to OpenAI's embeddings; it is a technique applicable to other embeddings of the same kind. There is a significant variety of approaches to optimize a matrix, with one of them being
Demonstrating the power of transfer learning, we found that a few examples were sufficient: our final dataset consisted of approximately 400 examples, maintaining a 1:3 ratio between positive and negative examples. The hyperparameters with the most significant impact on performance are the learning rate, the batch size, and the number of dimensions of the projection matrix. The ADA-002 model has 1536 dimensions, and the projection matrix is of size $1536 \times N$, where $N \in \{1024, 2048, 4096\}$. In our experiments, we observed that 2048 dimensions resulted in the best accuracy. This type of fine-tuning requires few GPU resources, with a training time of approximately 5 minutes on an A100 GPU. The model itself is straightforward, consisting of a matrix with dropout (to mitigate overfitting), followed by a hyperbolic tangent activation function, which provided additional accuracy gains on the training set.

Figure 6: Cosine similarity of positive and negative classes in the ADA-002 embedding; note the significant overlap between the classes. Test accuracy (before training): 69.5%

Figure 7: Cosine similarity of positive and negative classes in the customized embedding; the intersection between the classes is minimal. Test accuracy (after training): 84.3%

When analyzing the cosine similarity between the positive and negative classes, we can observe the \"shadow\" shared by the histograms. In an ideal scenario, we want the classes to be disjoint, ensuring a clear separation of the space. Figure 6 illustrates a significant shadow in the embedding before training, while Figure 7 shows the result after training. Both graphs are derived from the test set. Test accuracy also improved, indicating a better dense representation.

5.1.4 Hybrid Search
As stated before, hybrid search is applied when it is necessary to combine results from two or more retrieval methods. A widely used algorithm for this type of problem is Reciprocal Rank Fusion (RRF). For a document set D and search results from the different methods r in R, for each d in D we can calculate the RRF score as follows [27]:

$\mathrm{RRFscore}(d \in D) = \sum_{r \in R} \frac{1}{k + r(d)}, \qquad (2)$

where $1/r(d)$ is known as the reciprocal rank and $r(d)$ represents the position at which document d was retrieved by search mechanism r. The term k is introduced to help control for outlier systems [27].

Figure 8: Hybrid Search schema with k=1.

Figure 8 shows how to calculate the RRF score for k = 1. In the example, four chunks were retrieved in different orders by two search methods, BM25 (sparse search) and Custom ADA-002 (dense search). The reciprocal rank score is calculated for each chunk; these values are then summed, creating a new score. The final hybrid list is an ordering of the chunks by this new score.

Table 5: Retriever comparison, where MRR is the Mean Reciprocal Rank metric and R@k is the Recall at cutoff k.

Metric | Hybrid-BM25-ADA-002 | Hybrid-BM25-Custom ADA-002
MRR@10 | 0.758 | 0.850
R@3 | 0.829 | 0.921
R@5 | 0.879 | 0.943
R@7 | 0.921 | 0.964
R@9 | 0.957 | 0.979

In our experiments, only Pyserini's BM25 was tested as the sparse retriever, while ADA-002 and Custom ADA-002 were tested as dense retrievers. The hybrid combination that yielded the best results was the one using BM25 and Custom ADA-002.

5.1.5 Reranker
The fundamental idea underlying multi-stage ranking is to divide document ranking into a sequence of stages.
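As a concrete illustration of Equation (2), this sketch fuses two ranked lists with Reciprocal Rank Fusion; k = 1 mirrors Figure 8 (k = 60 is a common default elsewhere), and the list contents are invented for the example.

from collections import defaultdict

def rrf_fuse(rankings, k=1):
    # rankings: list of ranked lists of chunk ids, one per retrieval method.
    # Each chunk's score is the sum of 1/(k + rank) over the methods that return it.
    scores = defaultdict(float)
    for ranked in rankings:
        for rank, chunk_id in enumerate(ranked, start=1):
            scores[chunk_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example with two methods, as in Figure 8:
sparse = ['c2', 'c1', 'c4', 'c3']   # BM25 order
dense = ['c1', 'c2', 'c3', 'c4']    # Custom ADA-002 order
print(rrf_fuse([sparse, dense], k=1))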
After an initial retrieval, which usually involves a sparse or dense retriever, each subsequent stage re-evaluates and reranks the set of candidates forwarded from the preceding stage. Figure 9 represents a multi-stage pipeline in which Pyserini BM25 performs the first stage and the candidate chunks are then re-evaluated by the reranker. The final reranked list, called the retrieved chunks, is presented as the final result, formed by k chunks.

Figure 9: Reranker Pipeline

Transformer-based models are commonly employed as rerankers, leveraging their capability to enhance the effectiveness of information retrieval systems by capturing intricate relationships and contextual information within documents and queries. The initial use of transformers within a multi-stage ranking framework was presented in [28]. Their proposed model, known as monoBERT, transforms the ranking process into a relevance classification problem, sorting texts based on the conditional probability P(Relevant = 1 | d_i, q), where q is the query and d_i represents a document [21]. The model processes queries and documents simultaneously. This simultaneous processing leads to a richer interaction between them, often resulting in improved effectiveness [29], [30]. However, these neural models have a substantial number of parameters, and the scoring of query-document pairs occurs at inference time, which increases computational costs and latency. This kind of model is also known as a cross-encoder. Refer to Figure 10 for an illustration of the query-document pair processing.

Figure 10: Cross-Encoder

monoT5, on the other hand, is a sequence-to-sequence reranker [31] that uses T5 models [32] to generate relevance scores for query-document pairs. T5 models treat all tasks as text-to-text, requiring some adaptations for the query/document similarity task. During training, the format 'Query: {query} Document: {document} Relevant:' is used, with the label yes if the document is relevant to the query and no otherwise. At inference time, the same format as the training data is used for the query-document pairs fed to the model, and a single-token greedy decode is performed. The score is then obtained by calculating the softmax over the logits of only the tokens no and yes, and selecting the value corresponding to the yes token. Note that no and yes are the tokens used in the version provided by [33]; the publication that introduced the monoT5 architecture, [31], uses the tokens false and true. Figure 11 contains an illustration of the monoT5 architecture's inference process.

Figure 11: monoT5's inference process

In our experiments, Pyserini BM25 was employed for first-stage retrieval, returning 50 documents to be reranked in the second stage. In the second stage, we utilized the model unicamp-dl/mt5-base-en-pt-msmarco-v2, a sequence-to-sequence reranker trained on pairs of queries and documents in English and Portuguese from the dataset of [33].

5.2 Retrievers Results
The results achieved with the various retrievers are presented in Table 6.

Table 6: Retriever comparison.
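Below is a minimal sketch of the monoT5-style scoring just described, using the Hugging Face transformers library and the reranker checkpoint named above; the helper function and token handling are our own illustration rather than the authors' code, and the yes/no token ids should be verified for the specific checkpoint.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = 'unicamp-dl/mt5-base-en-pt-msmarco-v2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()

def mono_t5_score(query: str, document: str) -> float:
    # Format the pair exactly as during training.
    text = f'Query: {query} Document: {document} Relevant:'
    inputs = tokenizer(text, return_tensors='pt', truncation=True, max_length=512)
    # Single greedy decoding step: feed only the decoder start token.
    decoder_input = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_input).logits[0, 0]
    # Softmax restricted to the 'no'/'yes' token logits; the score is P('yes').
    no_id = tokenizer.encode('no', add_special_tokens=False)[0]
    yes_id = tokenizer.encode('yes', add_special_tokens=False)[0]
    probs = torch.softmax(logits[[no_id, yes_id]], dim=0)
    return probs[1].item()

To rerank, each of the 50 first-stage candidates is scored with this function and the list is sorted by descending score.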
Metric | ADA-002 | Custom ADA-002 | Hybrid-BM25-ADA-002 | Hybrid-BM25-Custom ADA-002 | BM25 | BM25 + Reranker
MRR@10 | 0.565 | 0.665 | 0.758 | 0.850 | 0.879 | 0.919
R@3 | 0.628 | 0.735 | 0.829 | 0.921 | 0.914 | 0.971
R@5 | 0.692 | 0.835 | 0.879 | 0.943 | 0.971 | 0.985
R@7 | 0.750 | 0.871 | 0.921 | 0.964 | 0.985 | 0.992
R@9 | 0.814 | 0.921 | 0.957 | 0.979 | 0.985 | 1.000

The multi-stage pipeline achieved the best results in MRR@10 and Recall@k.

6 Conclusions
The implementation of RAG systems faces challenges such as the effective integration of retrieval models, efficient representation learning, the diversity of data, the optimization of computational efficiency, evaluation, and text generation quality. Faced with these constantly evolving obstacles, this article proposes best practices for the implementation, optimization, and evaluation of RAG on a Brazilian Portuguese dataset, focusing on a simplified pipeline for inference and experimentation. So far, we have introduced the main components and methods along with their results and gaps. In this section, we discuss the key points that contribute to performance improvements in RAG applications. We start by discussing the relationship between the quality of the retriever and the achieved performance, where our approach showed a significant improvement of 35.4% in MRR@10 compared to the baseline. Next, we address the impact of input size on performance; in this domain, we observed that the best information retrieval strategy can be further improved by 2.4% through input size optimization. Finally, we present the complete RAG architecture with our recommendations. When evaluating the final accuracy of our approach, we reached 98.61%, representing an improvement of 40.73 percentage points in the degradation score compared to the baseline.

6.1 Retriever Score versus Performance
As mentioned in Section 5.1, the effectiveness of information/chunk retrieval, measured by the MRR@10 metric, varies between 0.565 and 0.919, as detailed in Table 6. This variation represents approximately 35.4%. It is important to highlight that RAG's performance is directly influenced by the quality of the retriever. Figure 12 shows the relationship between the MRR@10 retrieval metric and the degradation score for the studied retrieval methods.

Figure 12: Retriever effectiveness vs RAG performance. The x axis is the MRR@10 metric and the y axis is the degradation score (where 0 is the perfect scenario).

6.2 Input Size versus Performance
We observed that the best performance was achieved when retrieving 3 chunks with the retrieve-and-rerank strategy, as shown in Table 7. The use of a reranker (Figure 9) improved information retrieval in our tests. With this same configuration, Gemini Pro achieved performance similar to gpt-4, as indicated in Table 8. Despite achieving perfect recall with 9 chunks, as evidenced in Table 6, using an input of 9000 tokens, 6000 more than in the best scenario (3 chunks), did not result in the best performance.

Table 7: Performance by number of retrieved chunks with gpt-4.

# Retrieved Chunks | ADA-002 | Custom ADA-002 | BM25 | Hybrid | BM25 + Reranker
3 | 6.19 | 6.41 | 7.10 | 7.31 | 7.44
5 | 6.29 | 6.61 | 7.32 | 7.37 | 7.43
7 | 6.42 | 6.82 | 7.17 | 7.20 | 7.32
9 | 6.57 | 6.88 | 7.22 | 7.34 | 7.37

Table 8: Performance of the best retriever RAG configurations.
Model | Retriever Method | # Retrieved Chunks | Degradation
gpt-4 | ADA-002 | 9 | -13.0%
gpt-4 | Custom ADA-002 | 9 | -8.8%
gpt-4 | BM25 | 5 | -3.0%
gpt-4 | Hybrid | 5 | -2.3%
gpt-4 | BM25 + Reranker | 3 | -1.4%
Gemini Pro | BM25 + Reranker | 3 | -2.2%

As discussed in Section 4.2, the quality of RAG is directly related to the input size and the position where the answer is located. The final results therefore confirm the observations made in our experiments. Moreover, from a cost perspective, it is crucial to avoid overloading the LLM with a large number of input tokens, as cost is also based on the amount of input text. It is important to note that the results obtained in this study cannot be generalized to other datasets. Exploratory data analysis and the use of good retriever practices, as presented in [34], are always a solid path to achieving good results.

6.3 Final Results
Although this work is grounded in a single dataset, it is always crucial to emphasize the importance of data quality. In a simplified manner, as illustrated in Figure 13, data quality in RAG can be divided into Input, Retriever, and Evaluation.

Figure 13: Core Points in RAG.

• Input: How are the queries formulated? Are they synthetic or generic? What is the application's purpose? Building a RAG for a chatbot differs significantly from constructing a RAG for extracting information from a long and complex document.
• Retriever: How do the data behave in information retrieval? Are the queries strongly linked to keywords in the text? What is the cosine similarity of query-to-query, query-to-document, and document-to-document?
• Evaluation: How are the data measured? Defining metrics and success rates, as in the first experiment, is always a safe path to avoid bias: build the evaluation system before testing.

In conclusion, the main contribution of this work is to identify and present the best possible configuration of techniques and parameters for a RAG application. Figure 14 provides a summarized overview of the end-to-end experiment results for the discussed approaches; the best practices recommended by this study achieve a final accuracy of 98.61%, representing an improvement of 40.73 percentage points compared to the baseline.

Figure 14: RAG Performance: performance evolution on the Harry Potter dataset, where an accuracy of 100% is considered the relative maximum.

6.4 Future Work
We will expand our search to cover additional datasets, preferably those containing real data that already include questions and the reference documents for the answers. By using such datasets, we intend to explore techniques related to segmentation and chunk construction, as elaborated in Appendix B, Section B." + }, + { + "url": "http://arxiv.org/abs/2404.13892v2", + "title": "Retrieval-Augmented Audio Deepfake Detection", + "abstract": "With recent advances in speech synthesis including text-to-speech (TTS) and\nvoice conversion (VC) systems enabling the generation of ultra-realistic audio\ndeepfakes, there is growing concern about their potential misuse. However, most\ndeepfake (DF) detection methods rely solely on the fuzzy knowledge learned by a\nsingle model, resulting in performance bottlenecks and transparency issues.\nInspired by retrieval-augmented generation (RAG), we propose a\nretrieval-augmented detection (RAD) framework that augments test samples with\nsimilar retrieved samples for enhanced detection.
We also extend the\nmulti-fusion attentive classifier to integrate it with our proposed RAD\nframework. Extensive experiments show the superior performance of the proposed\nRAD framework over baseline methods, achieving state-of-the-art results on the\nASVspoof 2021 DF set and competitive results on the 2019 and 2021 LA sets.\nFurther sample analysis indicates that the retriever consistently retrieves\nsamples mostly from the same speaker with acoustic characteristics highly\nconsistent with the query audio, thereby improving detection performance.", + "authors": "Zuheng Kang, Yayun He, Botao Zhao, Xiaoyang Qu, Junqing Peng, Jing Xiao, Jianzong Wang", + "published": "2024-04-22", + "updated": "2024-04-23", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI", + "eess.AS" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "Recent artificial intelligence (AI) techniques have enabled the generation of synthesized audio known as DeepFakes (DF) with increasing degrees of fidelity to natural human speech. Sophisticated DF generation techniques, such as text-to-speech (TTS) and voice conversion (VC) that mimic the timbre, prosody, and intonation of a speaker, can now generate audio perceptually indistinguishable from genuine recordings. However, the potential malicious use of such AI-synthesized speech has serious personal and societal implications, including disruption of automatic speaker verification (ASV) systems, propagation of misinformation, and defamation of reputations. As a result, the vulnerability of current audio-based communication systems to synthesized speech poses a serious threat. Therefore, the development of effective DF detection techniques is urgently needed to address this emerging risk for every one of us. Recent advances in artificial intelligence generated content (AIGC) techniques, such as DF generation, however, have made detecting DFs increasingly challenging. In newly organized competitions, such as ASVspoof 2021 [27] and ADD 2022 [28], 2023 [29], even state-of-the-art (SOTA) DF detection systems perform poorly, with an equal error rate (EER) of over 10% [30], making proper detection impossible and such systems unsuitable for commercial deployment. This suggests that the rapid development of DF generation has greatly outpaced the development of detection technologies: the traditional detection methods are far from adequate for identifying current out-of-domain [3] DF-generated samples. Building robust and reliable DF detection systems remains a critical, not yet fully resolved issue for the research community. Recent decades have witnessed the emergence of various frameworks for detecting DF audio. The predominant frameworks utilize a pipeline consisting of a front-end feature extractor and a back-end classifier (discussed in § 2.1). Early works relied on hand-crafted features for DF detection with some success, such as Mel-frequency cepstral coefficients (MFCC) [10], linear-frequency cepstral coefficients (LFCC) [5, 17, 24], CQT [16, 32], and F0 sub-band features [7]. However, these features exhibit limited performance due to the limited datasets used to train robust models. More recent SOTA frameworks have leveraged the capabilities of self-supervised feature extractors, such as wav2vec [1, 2, 18, 21, 25] and WavLM [2, 4, 9].
Moreover, Kawa et al. [13] utilize another powerful task-specific feature for deepfake detection using the pre-trained Whisper model [19]. In theory, by training on a large number of labeled or unlabeled bonafide samples from very large datasets, these feature extractor models could be particularly sensitive to unseen DF artifacts. Some alternative frameworks replace hand-crafted features with end-to-end trainable encoders. Notable examples include Jung et al.'s advanced graph attention network architecture AASIST [26] and Huang et al.'s discriminative frequency-based improvements to SincNet [12], both of which achieve competitive performance. As discussed by Sun et al. [20], vocoders employed in regular speech synthesis can introduce inconsistencies that reveal DFs. However, regardless of the methodology and architecture used, these frameworks rely solely on a single model to accomplish this challenging task, which may prove inadequate. DF detection should be a knowledge-intensive task that also relies on vast external knowledge. To explain this, let's start with a story. In the identification of antique artifacts, fakes are often fabricated so realistically that it is difficult to determine their authenticity. Senior experts usually conduct a meticulous comparative analysis that covers material and textural attributes of many similar artifacts. With so many subtle factors to consider, relying solely on one's limited intelligence is likely to result in poor judgment. Similarly, asking a single model to detect deepfakes on its own may simply be too challenging, making mistakes likely. The retrieval-augmented generation (RAG) [8, 15] methodology provides a nice example of solving knowledge-intensive tasks (discussed in § 2.2). RAG resolves the limitations of a single model by combining a pre-trained large language model (LLM) with an information retrieval system over a large knowledge database. Fundamentally, the RAG framework leverages significant supplementary, real-time-updated, interpretable knowledge (from rapidly changing, proprietary data sources) to compensate for the limitations of a single model's knowledge, enabling it to provide a more reasoned answer. Complementary background information allows a single model to overcome its inherent knowledge gaps. Similarly, a retrieval-augmented approach could provide the same benefits for DF detection. When analyzing suspect audio, the model could query a retrieval system to find many similar reference audio segments. These retrieved results would provide additional references to inform the deepfake detection, such as typical artifacts of bonafide or synthetically spoofed examples. The deepfake detector could then integrate these retrieval results and the suspect audio into its decision process. This retrieval-augmented approach has several advantages. The model gains access to a much larger knowledge base beyond what can be encoded in its model parameters alone. The retrieved results also provide supporting evidence for decisions, improving model performance. Furthermore, the system can update or modify its knowledge database for different detection tasks, since the model has learned to characterize only a limited set of DF synthesis methods. In summary, augmenting deepfake detectors with conditional retrieval of external data is an attractive direction, and developing a DF detection methodology with a retrieval-augmented approach is worthy of further exploration.
To address the above-mentioned issues, we make the following contributions:

• We propose a retrieval-augmented detection (RAD) framework that innovatively retrieves similar samples carrying additional knowledge and incorporates them into the detection process for improved detection performance.
• We extend the multi-fusion attentive classifier to integrate with RAD.
• Extensive experiments show that our proposed method achieves state-of-the-art results on the ASVspoof 2021 DF set and competitive results on the 2019 and 2021 LA sets, demonstrating the effectiveness of the proposed retrieval-augmented approaches.", + "main_content": "(1) Pipeline Framework. The most common pipelines typically include separate components for feature extraction and classification (Figure 1-1). Specifically, the front-end feature extractor first converts the raw speech signal $x$ into speech features $y$. These speech features are then passed to the back-end classifier, which analyzes them and makes a bonafide vs. spoof decision $z$. Such architectures leverage efficient hand-crafted features or semantically rich self-supervised pre-trained features to obtain highly informative speech representations. The back-end classifier then fully analyzes and mines these features and makes the final prediction. (2) End-to-End Framework. More advanced frameworks turn the feature extraction into a trainable encoder, forming an end-to-end architecture (Figure 1-2). Here, instead of a fixed feature extractor, the raw speech $x$ is fed into a trainable encoder to produce speech representations $y$. To further improve performance, some approaches fuse multiple feature extractors or encoders, concatenating the resulting speech features and representations for deeper training.

Figure 1: The overview of traditional frameworks and our proposed framework for audio deepfake detection. (1) shows the pipeline framework. (2) shows the end-to-end framework. (ours) shows our proposed retrieval-augmented detection (RAD) framework.

2.2 Retrieval Augmented Generation
Before introducing our proposed approach, it is instructive to first provide background on the retrieval-augmented generation (RAG) framework [15] to facilitate comparison with our method. The RAG framework consists of three main stages: (1) Build Knowledge Retrieval Database. As shown in stage 1 (blue section) of Figure 3-RAG, the plain-text and format-rich text database $x$ is partitioned into smaller chunks $\{x_n\}$, where $n$ indexes the sample. These text chunks are then embedded into dense vector representations $\{v_n\}$ by a language model. In addition, each embedding $v_n$ maintains an index that links it to its original text chunk $x_n$, allowing the retrieval of the original text content. Finally, the embeddings $\{v_n\}$ are stored in a vector database $\mathcal{V}$ that facilitates efficient similarity search and retrieval. (2) Retrieve Knowledge. As shown in stage 2 (red section) of Figure 3-RAG, the user's query text $\tilde{x}_q$ is embedded into a query embedding $\tilde{v}_q$ using the same language model.
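To illustrate stages 1 and 2, the following sketch builds a small vector index and performs a top-K cosine-similarity search; FAISS and the function names are our choices for the sketch, not components prescribed by the paper.

import numpy as np
import faiss  # vector index library, used here for illustration

def build_index(chunk_embeddings: np.ndarray) -> faiss.Index:
    # Normalize so that inner product equals cosine similarity.
    embs = chunk_embeddings.astype('float32').copy()
    faiss.normalize_L2(embs)
    index = faiss.IndexFlatIP(embs.shape[1])
    index.add(embs)  # row i of the index corresponds to chunk i
    return index

def retrieve(index: faiss.Index, query_embedding: np.ndarray, top_k: int = 5):
    q = query_embedding.astype('float32').reshape(1, -1)
    faiss.normalize_L2(q)
    scores, ids = index.search(q, top_k)
    return list(zip(ids[0].tolist(), scores[0].tolist()))  # (chunk index, similarity)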
This query embedding is then used to perform a similarity search across the vector database $\mathcal{V}$ containing all of the document chunk embeddings. The top $K$ most similar embeddings $\{\tilde{v}_k\}$, linked to their document chunks $\{\tilde{x}_k\}$, are retrieved based on their semantic proximity to the query embedding $\tilde{v}_q$ in the vector space. These $K$ most relevant document chunks $\{\tilde{x}_k\}$ can be used as augmented contextual information to complement the original user query. (3) Get Results (Answer Generation). As shown in stage 3 (green section) of Figure 3-RAG, the original user query $\tilde{x}_q$ is concatenated with the retrieved document chunks $\{\tilde{x}_k\}$ to construct an expanded prompt $p$ via the function $\mathcal{P}$, where $p = \mathcal{P}(\tilde{x}_q, \{\tilde{x}_k\})$. This enriched prompt $p$, which contains both the initial query and relevant contextual information, is subsequently input to a large language model (LLM). The LLM analyzes the overall content and relationships within $p$ to generate the final answer $z$.

3 METHODOLOGY
3.1 Self-supervised Feature with WavLM
The overall framework of our proposed method is illustrated in Figure 1. Unlike traditional methods (described in § 2.1), our proposed framework leverages the state-of-the-art WavLM [4] feature extractor and incorporates an additional retrieval module after feature extraction to overcome performance bottlenecks. Specifically, we adopt a retrieval-augmented structure similar to RAG (described in § 2.2), which retrieves a few similar features from the bonafide samples and fuses them with the original test features before feeding them into the detection model. By incorporating retrieved features highly similar to the test sample, our model can make much more reliable predictions through joint analysis. In the following sections, we describe these modules in detail, including the WavLM feature extractor (§ 3.1), the retrieval-augmented mechanism (§ 3.2), and the design of the detection model classifier (§ 3.3). Since this framework is complex, we also present a number of speedup techniques (§ 3.4) that greatly reduce the space and time complexity. To further improve performance, we jointly optimize the WavLM feature extractor with the detection model in an end-to-end manner (§ 3.1).

WavLM Feature Extraction. The recent advanced feature extractor WavLM [4] employs wav2vec 2.0 [1] as its backbone and is trained on larger, real, multilingual, multi-channel unlabeled speech data for much better performance. WavLM utilizes a masked speech denoising and prediction framework that artificially adds noise and overlapping speech to clean input audio before masking certain time segments; the model must then predict the speech content of the original frames in the masked segments. This denoising process allows WavLM to learn robust representations that capture not only a variety of speech features but also the acoustic environment. In addition, WavLM performs excellently on a variety of downstream speech tasks, such as automatic speech recognition (ASR), automatic speaker verification (ASV), and text-to-speech (TTS), with minimal fine-tuning.
This suggests that WavLM already understands and is familiar with many high-level speech characteristics of bonafide audio, which is particularly useful for unseen DF-synthesized audio, since such audio often contains features very different from bonafide audio. In the proposed framework, the feature extraction component utilizes the complete set of latent features $y \in \mathbb{R}^{L \times T \times F}$ from all layers of the WavLM encoder transformer when processing an input audio segment $x$, where $L$ is the number of WavLM encoder transformer layers, $T$ is the number of frames, and $F$ is the feature dimension (the same as the WavLM feature size). This enables the model to leverage speech information encompassing low-level acoustic features as well as higher-level semantic abstractions extracted by the deeper layers.

WavLM Fine-Tuning. Since WavLM is trained only on bonafide audio during pre-training, it may have no exposure to spoofed samples, which can lead to incorrect classifications. Therefore, we first fine-tune the entire WavLM feature extractor in an end-to-end manner without the RAD framework.

Figure 2: The baseline structure for fine-tuning.

Figure 3: The overview of the RAG and RAD pipeline. Triangular-edge rectangles represent vectors for the retrieval databases. In RAG, long rectangles represent document chunks. In RAD, long rectangles with/without an outline represent long/short features, and rounded-edge rectangles represent audio segments.

That is, as shown in Figure 2, the speech is encoded into short features by a trainable WavLM model $\mathcal{E}$ and the time-wise speedup method $\mathcal{S}$, and these are then encoded into intermediate representations by an MFA module. The representations are then classified as either bonafide or spoofed by a fully connected layer. By jointly optimizing the parameters, we obtain a fine-tuned WavLM model that can serve as an improved feature extractor in the subsequent RAD framework. In the subsequent RAD inference phase, we keep only the fine-tuned WavLM and discard the back-end model.

3.2 Retrieval Augmented Detection
To address the performance limitations imposed by the detection bottleneck, we propose the retrieval-augmented detection (RAD) framework. Like the RAG framework, the proposed RAD approach consists of three main stages, but with some procedural modifications compared to RAG. (1) Build Knowledge Retrieval Database. As shown in stage 1 (blue section) of Figure 3-RAD, the bonafide audio dataset $x'$ is segmented into smaller audio segments $\{x_n\}$, where $n$ indexes the segment. These audio segments are encoded into latent long feature representations $\{y'_{n,l}\} \in \mathbb{R}^{N \times L \times T' \times F}$ by the WavLM feature extractor $\mathcal{E}(\cdot)$, where $T'$ is the time dimension of the long features, $N$ is the number of audio segments, and $l$ indexes the encoder layer of WavLM.
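The sketch below shows one plausible way to obtain such per-layer features with a public WavLM checkpoint and to apply the temporal averaging of Equation (1) that follows; the checkpoint name and tensor handling are illustrative assumptions, not the authors' released code.

import torch
from transformers import AutoFeatureExtractor, WavLMModel

extractor = AutoFeatureExtractor.from_pretrained('microsoft/wavlm-large')
wavlm = WavLMModel.from_pretrained('microsoft/wavlm-large').eval()

def layerwise_features(waveform, sr=16000):
    # Returns long features y' with shape (L, T', F): layers, frames, feature dim.
    inputs = extractor(waveform, sampling_rate=sr, return_tensors='pt')
    with torch.no_grad():
        out = wavlm(**inputs, output_hidden_states=True)
    # hidden_states is a tuple of (1, T', F) tensors, one per encoder layer
    # (plus the initial embedding layer).
    return torch.stack(out.hidden_states, dim=0).squeeze(1)  # (L, T', F)

def to_embedding(long_features):
    # Equation (1): average over the time dimension, giving an (L, F) embedding
    # per audio segment, suitable for the per-layer vector databases.
    return long_features.mean(dim=1)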
Subsequently, two operations are performed on the long features $\{y'_{n,l}\}$: (a) the features are embedded into dense vector representations $\{v_{n,l}\} \in \mathbb{R}^{N \times L \times F}$ via the mapping $\mathcal{M}(\cdot)$ in Equation 1, where each $v_{n,l}$ summarizes the time-wise features $y'_{n,l}$ by temporal averaging, eliminating the time dimension $T'$; (b) the time dimension is shortened to form short features $\{y_{n,l}\} \in \mathbb{R}^{N \times L \times T \times F}$ for improved efficiency via the function $\mathcal{S}(\cdot)$ (details in § 3.4), where $T$ is the time dimension of the short features.

$v_{n,l} = \mathcal{M}(y'_{n,l}) = \frac{1}{T'} \sum_{t=1}^{T'} y'_{n,l}\big|_t. \qquad (1)$

Importantly, each embedding $v_{n,l}$ maintains an index linking it to its original short feature $y_{n,l}$, enabling retrieval of the original audio segment $x_n$ for the source content. Finally, the collection of embeddings $\{v_{n,l}\}$ is stored in $L$ vector databases $\mathcal{V}_l$ to enable efficient similarity search and retrieval.

(2) Retrieve Knowledge. As shown in stage 2 (red section) of Figure 3-RAD, a sample to be detected $\tilde{x}_q$ is embedded into a query embedding $\tilde{v}_{q,l} \in \mathbb{R}^{L \times F}$ by the functions $\mathcal{E}$, $\mathcal{M}$, and converted to short features $\tilde{y}_{q,l} \in \mathbb{R}^{L \times T \times F}$ by the functions $\mathcal{E}$, $\mathcal{S}$. This query embedding $\tilde{v}_{q,l}$ is then used to perform a similarity search across the vector databases: for each layer $l$, there is a corresponding vector database $\mathcal{V}_l$ to be searched. The top $K$ most similar embeddings $\{\tilde{v}_{k,l}\} \in \mathbb{R}^{K \times L \times F}$ are retrieved, along with their associated short features $\{\tilde{y}_{k,l}\} \in \mathbb{R}^{K \times L \times T \times F}$. These $K$ most relevant audio samples serve as references for detailed comparison with the sample under test. By analyzing their similarities and differences, the authenticity of the tested sample can be better determined.

(3) Get Results (Sample Detection). As shown in stage 3 (green section) of Figure 3-RAD, our proposed RAD framework requires training an additional detection model. This model accepts the sample to be tested as well as the most relevant retrieved samples. Specifically, the detection model is provided with the query short features $\tilde{y}_{q,l}$ and the top $K$ similar short features $\{\tilde{y}_{k,l}\}$ to make the
final decision $z$. Importantly, this detection model not only evaluates the relevant samples but also performs detailed comparisons with the most similar real samples. This additional contextual information helps make more accurate judgments for DF detection.

Figure 4: Properties of RAG, RAD, and full training/fine-tuning for detection: a three-way comparison covering knowledge updates, external knowledge, data processing, interpretability, computational resources, external training, latency requirements, hallucination/detection error, and ethical and privacy issues. Red text represents the focused attention, and green cells represent ideas to be verified in this paper.

Properties similar to RAG in RAD. Despite their different applications, with RAD optimized for detection and RAG for generation, given the similarities in structure and algorithms between RAG and RAD, it is likely that RAD shares the advantages of RAG. Figure 4 provides a detailed summary of the key advantages and disadvantages of RAG, RAD, and full training / fine-tuning approaches. Given the similarities and differences across these methods, three critical questions emerge:

• Question 1: Does the RAD framework reduce detection errors?
• Question 2: Does updating external knowledge for the RAD framework further improve detection performance?
• Question 3: Can the retrieved audio samples be interpreted?

Research questions 1 and 2 are verified in § 4.4, and question 3 is verified in § 4.5.

3.3 Detection Model
To apply RAD to DF detection, we extend the Multi-Fusion Attentive (MFA) classifier [9] into RAD-MFA, which combines the raw query input and the retrieved similar bonafide samples for a comprehensive detection analysis. Figure 5 illustrates the overall structure of our proposed detection model and shows the MFA sub-modules in detail.

MFA Module.
The MFA Module in our framework handles the test feature $\tilde{y}_q$ and the retrieved features $\{\tilde{y}_{k,l}\}$. For conciseness, these features are denoted by $y$ in Figure 5.

Figure 5: The structure of the detection model architecture. $\oplus$ denotes concatenation. This process illustrates the third, get-results stage of Figure 3-RAD in detail.

Specifically, the MFA module is implemented through the following steps:
(1) The input feature $y \in \mathbb{R}^{B \times L \times T \times F}$ is passed through $L$ parallel time-wise attentive statistics pooling (ASP) layers (denoted $\mathrm{ASP}_T(\cdot)$) to eliminate the time dimension. Here, $B$ denotes a virtual dimension.
(2) The resulting outputs are concatenated and passed through a fully connected layer that transforms the features to $\mathbb{R}^{B \times L \times 2F}$.
(3) These outputs are then passed through a layer-wise ASP layer (denoted $\mathrm{ASP}_L(\cdot)$) to form the intermediate representation $r \in \mathbb{R}^{B \times 4F}$.

Extended RAD-based MFA. The RAD-MFA is implemented through the following steps:
(1) The test feature $\tilde{y}_q \in \mathbb{R}^{1 \times L \times T \times F}$ and the retrieved features $\{\tilde{y}_{k,l}\} \in \mathbb{R}^{K \times L \times T \times F}$ are sent to the same MFA module, creating intermediate representations $r_q \in \mathbb{R}^{1 \times 4F}$ and $r_k \in \mathbb{R}^{1 \times 4F}$, respectively.
(2) These two representations are used to form $r_d \in \mathbb{R}^{1 \times 4F}$ by taking their difference, $r_d = r_k - r_q$. Taking the difference between two features with extremely similar timbre allows the discriminative model to pay more attention to other, differential information, such as background noise.
(3) This output is sent to a sample-wise ASP layer (denoted $\mathrm{ASP}_K(\cdot)$) to form the intermediate representation $r_e \in \mathbb{R}^{1 \times 8F}$.
(4) $r_e$ is concatenated with $r_q$ and sent to a fully connected layer to make the final decision.

Through this scheme, the RAD-based detection model can take numerous particularly similar bonafide samples into account and make comprehensive judgments on their contents and distributions. This enables the model to achieve more accurate detection results by accounting for many additional, highly similar authentic cases.

3.4 Performance Optimization
To speed up training and testing, two optimization approaches are used.

Locally Stored Features. To speed up the training and testing process, we pre-compute and cache the WavLM features of all audio
For training, the entire pipeline operates on the pre-computed features rather than raw audio, with the cached test features, retrieved database features, and corresponding labels, making the training process extremely fast. For testing, this design allows efficient retrieval of audio contents without repetitively invoking the computationally expensive WavLM feature extraction each time. Time-wise Speedup. The acoustic features extracted by WavLM have a frame-to-frame hop size of 20 ms and a large number of layers, resulting in an extremely large number of generated feature parameters that require a large amount of storage space. To solve the problem, we propose a method S (\u00b7) to simplify features in the time dimension. Let \ud835\udc65denote an audio segment sample, the feature extracted by WavLM is called latent long features \ud835\udc66\u2032 \u2208R\ud835\udc47\u2032\u00d7\ud835\udc39, which can then be transformed into short features \ud835\udc66\u2208R\ud835\udc47\u00d7\ud835\udc39by the S. A speedup parameter \ud835\udf0fis introduced, which partitions the long feature \ud835\udc66\u2032 along the time dimension as: partition \u0000\ud835\udc66\u2032\u0001 = \u0002\u0002 \ud835\udc66\u2032 1, ...,\ud835\udc66\u2032 \ud835\udf0f \u0003 , \u0002 \ud835\udc66\u2032 \ud835\udf0f+1, ...,\ud835\udc66\u2032 2\ud835\udf0f \u0003 , ..., \u0002 ...,\ud835\udc66\u2032 \ud835\udc47\u2032 \u0003\u0003 | {z } \ud835\udc47\u2032/\ud835\udf0fpartitions . (2) The speedup short feature \ud835\udc66\ud835\udc61can then be derived by taking the average along the partitioned time dimension as: \ud835\udc66= S \u0000\ud835\udc66\u2032\u0001 = \u0002 mean \u0000\u0002 ...,\ud835\udc66\u2032 \ud835\udf0f \u0003\u0001 , ..., mean \u0000\u0002 ...,\ud835\udc66\u2032 \ud835\udc47\u2032 \u0003\u0001\u0003 . (3) Applying this speedup technique enables significant savings in storage space. However, there is an additional question that needs to be experimentally verified: \u2022 Question 4: Does time-wise speedup method affect downstream DF detection performance? This question is validated in \u00a7 4.4. 4 EXPERIMENTS The following section describes the datasets and assessment metrics (described in \u00a7 4.1) used for all of the reported experimental work, as well as details of the reproducible implementation (described in \u00a7 4.2). The experimental results (described in \u00a7 4.3) will list the evaluation results compared to the existing SOTA. The ablation studies (described in \u00a7 4.4) will then focus on several experiments related to the Four Research Questions collected from \u00a7 3.2, 3.4, and why our proposed RAD framework could be effective. 4.1 Datasets and Metrics ASVspoof 2019 LA Database. The ASVspoof 2019 [23] logical access (LA) dataset is comprised of bonafide and spoofed utterances generated using totally 19 different spoofing algorithms, including TTS, VC, and replay attacks. The dataset contains separate partitions for training, development, and evaluation. The training and development sets contain samples of 6 spoofing algorithms, while the evaluation set contains samples from 2 algorithms seen during training as well as 11 unseen spoofing algorithms not present in the training data. The training set trains the model, the development set selects the best-performing model, and the evaluation set impartially evaluates the performance of the selected model. In addition, all bonafide samples will be used to build the retrieval database. 
This experimental design aims to evaluate the generalization ability of the DF detection system against unknown spoofing attacks. Furthermore, the dataset may not contain complete bonafide recordings of all speakers who were impersonated in the spoofing dataset. The lack of target speaker data may limit the ability of the DF detection system to perform accurate comparisons of speaker characteristics. To mitigate this issue, additional bonafide samples from the impersonated speakers should be found to augment the knowledge database available to the retrieval system.

ASVspoof 2021 LA and DF Databases. The LA and DF evaluation subsets from the ASVspoof 2021 [27] challenge present intentionally more difficult spoofing detection tasks compared to the 2019 LA data, including more unseen attacks, both encoding and transmission distortions in the LA set, and unseen coding and compression artifacts in the DF set. Following the challenge guidelines, since no new training or development data was released for the ASVspoof 2021 challenge, model training and evaluation were limited to the ASVspoof 2019 database. The entire ASVspoof 2021 LA and DF subsets were used for model testing.

VCTK Database. All bonafide samples of ASVspoof form a very small subset of the VCTK dataset. To mitigate the issue of insufficient bonafide samples per speaker, and since the datasets used originate from VCTK, we expand our database with additional VCTK samples, excluding all samples already present in our experimental data. This increases the number of bonafide samples available for each speaker in the retrieval task.

Metrics. We evaluated our proposed model on these datasets using two standard metrics: the minimum normalized tandem detection cost function (min t-DCF) [14] and the pooled equal error rate (EER). The min t-DCF measures the combined (tandem) performance with the ASV system, and the EER reflects the standalone DF detection capability.

4.2 Implementation Details
Data Processing. The original audio recordings in the database are segmented into clips of 4 seconds in length. Recordings over the 4-second duration are truncated to 4 seconds; recordings shorter than 4 seconds are padded to the 4-second length by repeating the recording. No additional processing, such as voice activity detection (VAD), is applied to the audio samples prior to segmentation. The audio segments are first encoded into long features using the WavLM model and then shortened into more compact short features through the time-wise speedup with parameter $\tau = 10$.

Vector Database. To store and query embeddings more conveniently, we created $L$ vector databases to store the audio feature vectors extracted at each layer of the WavLM model. When performing a database retrieval, the query audio is converted into a query embedding, and the top 10 most similar WavLM short features are retrieved.

Table 1: Comparison with other anti-spoofing systems on the ASVspoof 2019 LA evaluation set, reported in terms of pooled min t-DCF and EER (%).

System | Configuration | min t-DCF | EER(%)
Hua et al. [11] | DNN+ResNet | 0.0481 | 1.64
Zhang et al. [31] | FFT+SENet | 0.0368 | 1.14
Ding et al. [6] | SAMO | 0.0356 | 1.08
Tak et al. [22] | RawGAT-ST | 0.0335 | 1.06
Jung et al. [26] | AASIST | 0.0275 | 0.83
Huang et al. [12] | DFSincNet | 0.0176 | 0.52
Fan et al. [7] | f0+Res2Net | 0.0159 | 0.47
Guo et al. [9] | WavLM+MFA | 0.0126 | 0.42
Ours | WavLM+RAD-MFA | 0.0115 | 0.40
Model Training. The front-end feature extractor utilized in this work is WavLM. During fine-tuning of the front-end WavLM, the Adam optimizer is employed with a learning rate of 3e-6 and a batch size of 4. For training the MFA, the batch size is changed to 32 and the learning rate to 3e-5. All experiments were performed on two NVIDIA GeForce RTX 3090 GPUs. Each model configuration is trained for approximately 30 epochs.

4.3 Experimental Results
To demonstrate the superiority of our proposed method over existing approaches, we compare it to recent SOTA methods.

Results on the ASVspoof 2019 LA evaluation set. The experimental results in Table 1 compare the performance of our proposed method to existing approaches on the ASVspoof 2019 LA evaluation dataset. Our method achieves an EER of 0.40% and a min t-DCF of 0.0115, which is the best reported result, demonstrating the effectiveness and superiority of our proposed method. Notably, although Guo et al. [9] utilize a similar WavLM feature extractor and MFA network, our proposed RAD framework improves on their performance, overcoming the limitations of single-model approaches. In our analysis, the RAD framework first retrieves the most similar audio samples, which are likely from the same speaker, and then performs careful comparisons between these samples and the test sample. The detection model then only needs to consider the differences between the two, rather than relying on fuzzy prior knowledge. Our proposed method is thus more robust: by focusing on fine-grained differences rather than generalized knowledge, it can distinguish detailed information more accurately.

Results on the ASVspoof 2021 LA and DF evaluation sets. We further test our model on the ASVspoof 2021 LA and DF evaluation sets; results are shown in Table 2. Our method achieves SOTA performance on the DF subset with an EER of 2.38%. On the LA subset, we obtain an EER of 4.89%, which is also quite competitive and still better than the baseline system [9] without RAD. Further analysis and ablation studies are needed to fully characterize the advantages of our proposed method and each of its components.

Table 2: Comparative results of our proposed method and other systems on the ASVspoof 2021 LA and DF evaluation sets, with pooled EER (%).

System | Configuration | LA | DF
Fan et al. [7] | f0+Res2Net | 3.61 | –
Doñas et al. [18] | wav2vec2+ASP | 3.54 | 4.98
Wang et al. [25] | wav2vec2+LGF | 6.53 | 4.75
Tak et al. [21] | wav2vec2+AASIST | 0.82 | 2.85
Fan et al. [7] | WavLM+MFA | 5.08 | 2.56
Ours | WavLM+RAD-MFA | 4.83 | 2.38

Table 3: Ablation studies on the ASVspoof 2021 DF dataset for the effectiveness of each component, with pooled EER (%). -L and -S: large and small. ft: fine-tuning. Just Difference connects $r_e$ (denoted in Figure 5, without $r_q$) directly to the fully connected layer for classification.

Ablation | Configuration | Pooled EER(%)
Full Framework | – | 2.38
w/o RAD | Baseline (Figure 2) | 2.90
w/o VCTK | ASVspoof 2019 only | 2.54
w/o WavLM-L | WavLM-S | 9.15
w/o ft | WavLM-S | 9.62
w/o ft | WavLM-L | 4.98
Variation Structure | Just Difference | 2.49

4.4 Ablation Study
Ablation Study on Different Components. The ablation study presented in Table 3 summarizes the results obtained by evaluating different configurations and components of the proposed system on the ASVspoof 2021 DF subset.
Specifically, the impact of the RAD framework, the WavLM-L feature extractor, fine-tuning of the feature extractor, the incorporation of the additional VCTK data for retrieval, and the structure of the detection network were analyzed. The experiments were conducted using a time-wise speedup parameter $\tau = 10$, and system performance was assessed using the pooled EER expressed as a percentage. The key observations are summarized as follows (line by line from Table 3):

(1) The full system reaches SOTA performance with a pooled EER of 2.38%.
(2) Removing the proposed RAD framework for similar-sample retrieval increases the pooled EER to 2.90%. This validates the effectiveness of the RAD framework and answers research Question 1. However, this result is slightly higher than that of Guo et al. [9], which may be due to different parameter settings and the time-wise speedup operation.
(3) Excluding the supplementary VCTK dataset slightly increases the pooled EER to 2.64%, indicating that updating the knowledge base with additional related data can improve detection performance, which answers research Question 2.
(4) Replacing WavLM-L with WavLM-S significantly increases the pooled EER to 9.15%, highlighting the importance of the feature extractor in the overall framework.
(5) Without fine-tuning, the EER rises drastically to 9.62% and 4.98% for WavLM-S and WavLM-L, respectively. This clearly highlights the positive influence of fine-tuning on DF detection performance, since fine-tuning incorporates spoofed data instead of relying on bonafide data alone, thereby improving the ability to discriminate DF samples.
(6) We also tried a variation of the structure that removes the $r_q$ branch and directly connects $r_e$ (denoted in Figure 5) to the classifier; this slightly increases the pooled EER to 2.49%, suggesting that not only the feature difference but also the original feature plays a role in the performance improvement.

Table 4: Ablation study of the effect of different time-wise speedup parameters $\tau$ on the DF detection performance on the ASVspoof 2021 DF dataset, using pooled EER (%).

Speedup ($\tau$=) | 5 | 10 | 20
Original | 4.68 | 4.98 | 5.45
Fine-tune | 2.36 | 2.38 | 2.54

Effect of the Time-wise Speedup Parameter. Table 4 examines whether the time-wise speedup affects DF detection performance. We tested the original and the fine-tuned WavLM-Large feature extractors on ASVspoof 2021 DF with $\tau$ set to 5, 10, and 20, reporting the pooled EER. Testing with $\tau < 5$ is difficult due to very high computational costs and storage consumption, which need to be addressed in future work. Before fine-tuning, performance varies greatly across $\tau$: the smaller $\tau$ is, the better the performance, but at much higher computational cost and storage consumption. After fine-tuning, the gap narrows, suggesting that optimization can reduce the impact of the time-wise speedup operation. Overall, the time-wise speedup operation affects performance, but not by much, which answers research Question 4. Taking these factors into account, we finally chose $\tau = 10$. However, better ways to reduce storage and computation are still needed, which remains an open problem to be investigated.
4.5 Sample Analysis
The retrieval samples shown in Figure 6 offer insights into the factors behind the successful detection of spoofing artifacts. The figure presents 4 test samples, comprising 3 spoofed and 1 bonafide audio, along with their 2 to 3 most similar retrieved samples. Extracts were taken from three layers (initial, middle, final) of a WavLM-L model.

Figure 6: Examples of retrieved samples.

Despite quality defects in the spoofed test samples, the system appeared to retrieve samples from the same speaker identities. This suggests the system might rely strongly on speaker-discriminative features, providing an approximate answer to research Question 3. However, although the initial-layer retrievals corresponded to the same speakers, the middle- and final-layer results may differ in speaker identity. Our analysis implies that the shallower layers of the model may focus on timbral and quality-based features, whereas the deeper layers capture more abstract semantic information. Nevertheless, these explanations remain brief and qualitative, lacking rigorous argumentation, and could be an interesting area for future exploration. In summary, the retrieved samples have the greatest similarity at the feature level, providing insights for the successful detection of spoofing artifacts. After retrieval, a careful comparison of the retrieval results with the test sample is a key factor in the performance improvement.

5 CONCLUSIONS
In this work, we proposed a novel retrieval-augmented detection (RAD) framework that leverages retrieved samples to enhance deepfake detection performance. We also extended the multi-fusion attentive classifier by integrating it with our proposed RAD framework. Extensive experiments demonstrate state-of-the-art results of the RAD framework on the ASVspoof 2021 DF dataset and competitive performance on the 2019 and 2021 LA datasets. The consistent improvements achieved across multiple datasets highlight the potential of RAD as a new paradigm for DF detection. The ablation study reveals that RAD and retrieval-augmented generation (RAG) share similar properties in improving detection performance. Additionally, the retrieved samples are usually from the same speaker, suggesting potential interpretability. In conclusion, this work opens promising research avenues into retrieval-based augmentation techniques that enhance performance for detection tasks. By breaking the reliance on a single model, RAD provides a new perspective that utilizes more of the available information to overcome performance limitations and advance DF detection techniques.

ACKNOWLEDGMENTS
Supported by the Key Research and Development Program of Guangdong Province (grant No. 2021B0101400003). The corresponding author is Jianzong Wang (jzwang@188.com)." + } + ] +} \ No newline at end of file