| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:46:56.117625Z" |
| }, |
| "title": "Capturing Logical Structure of Visually Structured Documents with Multimodal Transition Parser", |
| "authors": [ |
| { |
| "first": "Yuta", |
| "middle": [], |
| "last": "Koreeda", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "koreeda@stanford.edu" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stanford University", |
| "location": { |
| "settlement": "Stanford", |
| "region": "CA", |
| "country": "USA" |
| } |
| }, |
| "email": "manning@stanford.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "While many NLP pipelines assume raw, clean texts, many texts we encounter in the wild, including a vast majority of legal documents, are not so clean, with many of them being visually structured documents (VSDs) such as PDFs. Conventional preprocessing tools for VSDs mainly focused on word segmentation and coarse layout analysis, whereas fine-grained logical structure analysis (such as identifying paragraph boundaries and their hierarchies) of VSDs is underexplored. To that end, we proposed to formulate the task as prediction of transition labels between text fragments that map the fragments to a tree, and developed a feature-based machine learning system that fuses visual, textual and semantic cues. Our system is easily customizable to different types of VSDs and it significantly outperformed baselines in identifying different structures in VSDs. For example, our system obtained a paragraph boundary detection F1 score of 0.953, which is significantly better than a popular PDF-to-text tool with an F1 score of 0.739.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "While many NLP pipelines assume raw, clean texts, many texts we encounter in the wild, including a vast majority of legal documents, are not so clean, with many of them being visually structured documents (VSDs) such as PDFs. Conventional preprocessing tools for VSDs mainly focused on word segmentation and coarse layout analysis, whereas fine-grained logical structure analysis (such as identifying paragraph boundaries and their hierarchies) of VSDs is underexplored. To that end, we proposed to formulate the task as prediction of transition labels between text fragments that map the fragments to a tree, and developed a feature-based machine learning system that fuses visual, textual and semantic cues. Our system is easily customizable to different types of VSDs and it significantly outperformed baselines in identifying different structures in VSDs. For example, our system obtained a paragraph boundary detection F1 score of 0.953, which is significantly better than a popular PDF-to-text tool with an F1 score of 0.739.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Despite recent motivation to utilize NLP for a wider range of real-world applications, most NLP papers, tasks and pipelines assume raw, clean texts. However, many texts we encounter in the wild, including a vast majority of legal documents (e.g., contracts and legal codes), are not so clean, with many of them being visually structured documents (VSDs) such as PDFs. For example, of 7.3 million text documents found in the Panama Papers (which arguably approximates the distribution of data one would see in the wild), approximately 30% were PDFs. Good preprocessing of VSDs is crucial in order to apply recent advances in NLP to real-world applications.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Thus far, the most micro and macro extremes of VSD preprocessing have been extensively studied, such as word segmentation and layout analysis (detecting figures, body texts, etc.; Soto and Yoo, 2019; Stahl et al., 2018), respectively. While these two lines of studies allow extracting a sequence of words in the body of a document, neither of them accounts for local, logical structures such as paragraph boundaries and their hierarchies.", |
| "cite_spans": [ |
| { |
| "start": 180, |
| "end": 199, |
| "text": "Soto and Yoo, 2019;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 200, |
| "end": 219, |
| "text": "Stahl et al., 2018)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "These structures convey important information in any domain, but they are particularly important in the legal domain. For example, Figure 1 (1) shows raw text extracted from a non-disclosure agreement (NDA) in PDF format. An information extraction (IE) system must be aware of the hierarchical structure to successfully identify target information (e.g., extracting \"definition of confidential information\" requires understanding of hierarchy as in Figure 1 (2)). Furthermore, we must utilize the logical structures to remove debris that has slipped through layout analysis (\"Page 1 of 5\" in this case) and other structural artifacts (such as semicolons and section numbers) for a generic NLP pipeline to work properly.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 130, |
| "end": 138, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 448, |
| "end": 456, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Yet, such logical structure analysis is difficult. Even the best PDF-to-text tool with a word-related error rate as low as 1.0% suffers from 17.0% newline detection error (Bast and Korzen, 2017), even though newline detection is arguably the easiest form of logical structure analysis.", |
| "cite_spans": [ |
| { |
| "start": 171, |
| "end": 194, |
| "text": "(Bast and Korzen, 2017)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The goal of this study is to develop a fine-grained logical structure analysis system for VSDs. We propose a transition parser-like formulation of logical structure analysis, where we predict a transition label between each consecutive pair of text fragments (e.g., two fragments are in the same paragraph, or in different paragraphs of different hierarchies). Based on this formulation, we developed a feature-based machine learning system that fuses multimodal cues: visual (such as indentation and line spacing), textual (such as section numbering and punctuation), and semantic (such as language model coherence). Finally, we show that our system is easily customizable to different types of VSDs and that it significantly outperforms baselines in identifying different structures in VSDs. For example, our system obtained a paragraph boundary detection F1 score of 0.953, which is significantly better than PDFMiner (https://euske.github.io/pdfminer/), a popular PDF-to-text tool, with an F1 score of 0.739. We open-sourced our system and dataset (https://github.com/stanfordnlp/pdf-struct).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this study, we concentrate on logical structure analysis of VSDs. The input is a sequence of text blocks (Figure 1 (3)) that can be obtained by utilizing existing coarse layout analysis and word-level preprocessing tools. We aim to extract paragraphs and identify their relationships. This is equivalent to creating a tree with each block as a node (Figure 1(4)).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 108, |
| "end": 117, |
| "text": "(Figure 1", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 352, |
| "end": 364, |
| "text": "(Figure 1(4)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Problem Setting and Our Formulation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We propose to formulate this tree generation problem as identification of a transition label between each consecutive pair of blocks (Figure 1(5)) that defines their relationship in the tree. We define the transition trans_i between the i-th block (hereafter b_i) and b_{i+1} as one of the following. continuous: b_i and b_{i+1} are continuous in a single paragraph (Figure 1(6)). consecutive: b_{i+1} is the start of a new paragraph at the same level as b_i (Figure 1(7)). down: b_{i+1} is the start of a new paragraph that is a child (a lower level) of the paragraph that b_i belongs to (Figure 1(6)). up: b_{i+1} is the start of a new paragraph that is in a higher level than the paragraph that b_i belongs to (Figure 1(8)). omitted: the i-th block is debris and is omitted (Figure 1(9)); trans_{i-1} is carried over to the relationship between b_{i-1} and b_{i+1}. While down is well-defined (because we assume a tree), up can be ambiguous as to how many levels we should raise. To that end, we also introduce a pointer to each up block, which points at b_j whose level b_i belongs to (ptr_i = b_j, where j < i; Figure 1(8)).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 133, |
| "end": 142, |
| "text": "(Figure 1", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 359, |
| "end": 371, |
| "text": "(Figure 1(6)", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 532, |
| "end": 544, |
| "text": "(Figure 1(7)", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 660, |
| "end": 669, |
| "text": "(Figure 1", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 782, |
| "end": 794, |
| "text": "(Figure 1(8)", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 839, |
| "end": 851, |
| "text": "(Figure 1(9)", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 1174, |
| "end": 1185, |
| "text": "Figure 1(8)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Problem Setting and Our Formulation", |
| "sec_num": "2" |
| }, |
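The transition scheme above can be sketched in code. The following is a minimal illustration (not the authors' implementation, and the input lists are hypothetical): it first drops omitted debris blocks, carrying the preceding transition over, then folds the remaining continuous/consecutive/down/up labels into a paragraph tree. For simplicity, up pops a single level here; the described system instead uses the predicted pointer ptr_i to decide how many levels to return.

```python
def drop_debris(blocks, transitions):
    """transitions[i] relates blocks[i] and blocks[i+1]; an 'omitted'
    label marks blocks[i] as debris, and the preceding transition is
    carried over to the pair (blocks[i-1], blocks[i+1])."""
    kept, kept_trans = [], []
    for i, block in enumerate(blocks):
        if i < len(transitions) and transitions[i] == "omitted":
            continue  # skip debris; the earlier transition stays in effect
        kept.append(block)
        if i < len(transitions):
            kept_trans.append(transitions[i])
    return kept, kept_trans

def build_tree(blocks, transitions):
    """Fold transition labels into a nested paragraph tree
    (debris already removed by drop_debris)."""
    root = {"text": "", "children": []}
    stack = [root]  # ancestors of the current paragraph
    node = {"text": blocks[0], "children": []}
    root["children"].append(node)
    for block, trans in zip(blocks[1:], transitions):
        if trans == "continuous":
            node["text"] += " " + block  # same paragraph: merge text
            continue
        if trans == "down":
            stack.append(node)   # start a child level under `node`
        elif trans == "up":
            stack.pop()          # simplified: the pointer decides the level
        node = {"text": block, "children": []}
        stack[-1]["children"].append(node)
    return root
```

For example, a page-number block labeled omitted disappears, and the down/up labels around a numbered item produce the expected nesting.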
| { |
| "text": "In this study, we target four types of VSDs in different file formats and languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For PDFs, we extracted blocks with existing software. Specifically, we utilized PDFMiner and extracted each LTTextLine, which roughly corresponds to each line of text, as a block. We merged multiple LTTextLines where LTTextLines are vertically overlapping. For plain texts, we searched documents filed at EDGAR. We simply used each non-blank line of a plain text as a block.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We annotated all documents by hand. We describe more details of the data collection and annotation in Appendix A.1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The data statistics are given in Table 1. While the number of documents is somewhat limited, we note that each document comes with many text blocks and evaluations were stable. Furthermore, it was enough to reliably show the difference between our system and baselines in our experiments.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 33, |
| "end": 40, |
| "text": "Table 1", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this work, we propose to employ handcrafted features and a machine learning-based classifier as the transition parser. This strategy is more suited to our task than utilizing deep learning because (1) we can incorporate visual, textual and semantic cues, and (2) it only requires a small number of training data. We extract features from [b_{i-1}, b_i, b_j, b_{j+1}], where b_j is the first block after b_i with trans_j \u2260 omitted. For trans_i = omitted, we extract features from [b_{i-1}, b_i, b_{i+1}, b_{i+2}].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Parser", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "At test time, since we need to know the presence of omitted before feature extraction, we run a first pass of predictions to identify blocks with omitted, then use that information to dynamically extract features to identify other labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Parser", |
| "sec_num": "4.1" |
| }, |
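The two-pass procedure described above can be sketched as follows. This is an illustrative outline, not the released code; `is_debris`, `trans_clf`, and `extract_features` are hypothetical stand-ins for the trained classifiers and the dynamic feature extractor.

```python
def predict_transitions(blocks, is_debris, trans_clf, extract_features):
    """Two-pass prediction: first find debris (omitted) blocks, then
    predict the remaining labels with features extracted over contexts
    that skip the debris."""
    # First pass: which blocks are debris?
    skip = {i for i in range(len(blocks))
            if is_debris(extract_features(blocks, i, skip=set()))}
    # Second pass: dynamic feature extraction that skips debris blocks.
    return ["omitted" if i in skip
            else trans_clf(extract_features(blocks, i, skip=skip))
            for i in range(len(blocks))]
```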
| { |
| "text": "Our system can be customized to different types of documents by modifying the features. We have designed a feature set for each document type by visually inspecting the training dataset (Table 2). For Contract txt en, we regarded space characters as horizontal spacing and blank lines as vertical spacing, which allowed us to define visual features for plain texts as well.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 186, |
| "end": 195, |
| "text": "(Table 2)", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Transition Parser", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The \"Blocks\" columns list blocks used to extract features for trans_2 (e.g., \"1-2, 2-3\" means [b_{i-1}, b_i] and [b_i, b_{i+1}] are used to extract two sets of features). Features with a similar intended functionality are assigned the same feature name, and implementations may vary for different document types. *: explained in detail in Section 4.1. Numbering transition (T1) In Figure 1, 1. is down as it is the first numbering type that the system sees, and \"1\" will be added to the memory. B1 and B2 are continuous as no numbering is found, and B3 is consecutive as a number \"2\" is found in the same type as 1.. B4 is down as it contains a new numbering type. Language model coherence (S1) To determine if b_i should be classified as omitted, this feature utilizes a language model to classify whether it is more natural to have b_i or b_{i+1} after b_{i-1}. Specifically, we use GPT-2 (Radford et al., 2019) to calculate the language model loss \u2113(i, i-1) for b_i given b_{i-1} as a context (i.e., fed into the model but not used in the loss calculation). We then calculate \u2113(i, i-1) \u2212 \u2113(i+1, i-1) as the feature. If it is more coherent to have b_i after b_{i-1}, \u2113(i, i-1) will be smaller than \u2113(i+1, i-1) and the feature value will be negative. We also utilize \u2113(i+1, i) \u2212 \u2113(i+1, i-1). Similar text in similar position (V10) Headers and footers tend to appear in similar positions across different pages with similar texts. For example, a contract may have the contract's title on every page at the same position. This feature is 1 if there exists a block b_j such that the blocks' overlapping area is larger than 50% of their bounding box (treating them as if they were on the same page) and their edit distance is small.", |
| "cite_spans": [ |
| { |
| "start": 707, |
| "end": 729, |
| "text": "(Radford et al., 2019)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 226, |
| "end": 234, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Transition Parser", |
| "sec_num": "4.1" |
| }, |
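The S1 feature above can be sketched as follows. The first function is a minimal, model-agnostic version assuming a scorer `lm_loss(text, context)`; the second shows one plausible way (an assumption, not the authors' exact wiring) to obtain such a scorer from GPT-2 via HuggingFace `transformers`, feeding the context but masking it out of the loss with label -100.

```python
def coherence_features(blocks, i, lm_loss):
    """S1: a negative first value suggests blocks[i] naturally follows
    blocks[i-1] (so blocks[i] is probably not debris).
    lm_loss(text, context) returns the LM loss of `text` given `context`."""
    prev, cur, nxt = blocks[i - 1], blocks[i], blocks[i + 1]
    return [
        lm_loss(cur, prev) - lm_loss(nxt, prev),  # l(i, i-1) - l(i+1, i-1)
        lm_loss(nxt, cur) - lm_loss(nxt, prev),   # l(i+1, i) - l(i+1, i-1)
    ]

def gpt2_loss(text, context, model, tokenizer):
    """One possible scorer: mean GPT-2 loss over `text` tokens, with the
    context fed to the model but excluded from the loss (label -100).
    Requires torch and a transformers GPT2LMHeadModel/tokenizer."""
    import torch
    ctx = tokenizer(context, return_tensors="pt").input_ids
    tgt = tokenizer(" " + text, return_tensors="pt").input_ids
    ids = torch.cat([ctx, tgt], dim=1)
    labels = ids.clone()
    labels[:, : ctx.shape[1]] = -100  # do not score the context tokens
    with torch.no_grad():
        return model(input_ids=ids, labels=labels).loss.item()
```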
| { |
| "text": "A Boolean feature that is 0 if the block spans to the right margin and 1 otherwise (i.e., breaks before the right margin). To distinguish the body and the margin of the document, we apply 1D clustering on the right positions of the blocks and extract the rightmost cluster with a minimum of six members per page (to ignore headers/footers) as the right margin (Figure 3). We utilized a na\u00efve 1D clustering, where we greedily add elements from a sorted list to a cluster while the maximum difference of the elements is within a user-defined threshold. This margin information is used in other features (V3, V6 and V7).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Line break before right margin (V4)", |
| "sec_num": null |
| }, |
| { |
| "text": "Figure 3: A sketch of how we determine the right margin. We apply 1D clustering on the right positions of the blocks and choose the rightmost cluster with at least a user-defined number of members. If we choose to have a minimum of two members, the right margin would be the cluster with three members. Larger line spacing (V8) A Boolean feature that is 0 if line spacing is normal and 1 otherwise.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 45, |
| "end": 53, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 523, |
| "end": 533, |
| "text": "(Figure 3)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Transition Parser", |
| "sec_num": "4.1" |
| }, |
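The na\u00efve 1D clustering and its two uses (right margin for V4, normal line spacing for V8) can be sketched as below. This is a minimal illustration; the threshold values and the `min_members` default are assumptions for the example, not the system's actual settings.

```python
def cluster_1d(values, threshold):
    """Naive 1D clustering: greedily add sorted values to the current
    cluster while the cluster's maximum difference stays within threshold."""
    clusters = []
    for v in sorted(values):
        if clusters and v - clusters[-1][0] <= threshold:
            clusters[-1].append(v)
        else:
            clusters.append([v])
    return clusters

def right_margin(right_positions, threshold=1.0, min_members=6):
    """V4 helper: rightmost cluster of block right-edges with enough
    members (smaller rightmost clusters are likely headers/footers)."""
    for cluster in reversed(cluster_1d(right_positions, threshold)):
        if len(cluster) >= min_members:
            return max(cluster)
    return None

def normal_line_spacing(spacings, threshold=1.0):
    """V8 helper: the line-spacing cluster with the most members."""
    return max(cluster_1d(spacings, threshold), key=len)
```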
| { |
| "text": "To determine the normal line spacing, we apply 1D clustering on line spacings and pick the cluster with the largest number of members.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Parser", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We also implement the pointer identification with handcrafted features and a machine learning-based classifier. Since a down transition creates a new level that a block can point back to, we extract all pairs [b_j, b_i] (b_j \u2208 C_i) with trans_i = up, trans_j = down and j < i. We then extract features from [b_j, b_i] and train a binary classifier to predict p(ptr_i = b_j | b_j, b_i). In training, we use ground-truth down labels to extract the candidates C_i. At test time, we aggregate C_i from predicted transition labels and predict the pointer by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pointer Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "ptr_i = argmax_{b_j \u2208 C_i} p(ptr_i = b_j | b_j, b_i).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pointer Identification", |
| "sec_num": "4.2" |
| }, |
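The candidate extraction and argmax above amount to the following sketch (not the released implementation); `score` is a hypothetical stand-in for the trained binary classifier's probability p(ptr_i = b_j | b_j, b_i).

```python
def predict_pointer(i, transitions, score):
    """For an `up` transition at i, return the candidate j in
    C_i = {j < i : transitions[j] == 'down'} maximizing score(j, i)."""
    candidates = [j for j in range(i) if transitions[j] == "down"]
    if not candidates:
        return None
    return max(candidates, key=lambda j: score(j, i))
```

With a closer-is-better scorer, the nearest earlier down block wins, matching the intuition behind the transition-count features.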
| { |
| "text": "While our pointer points at a block with down (b_j), it is sometimes important to extract features from the first block in the paragraph that b_j belongs to, which we will hereafter refer to as b_head(j). Using b_head(j), we extract features from [b_j, b_i], such as whether b_i and b_j, and b_i and b_head(j), are left aligned, respectively. Transition counts We count the numbers of blocks {b_k} (j < k < i) with down and with up, respectively. We use these two numbers along with their difference as features. This is based on an intuition that a closer block with down tends to be more important. Pointer features are also customizable, but we used the same features for all the document types.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pointer Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "While we call our system a \"transition parser\", we do not employ a stack and instead employ the graph-based parser-like formulation for the pointer identification. We selected this strategy because of the recent success of graph-based parsers (Dozat and Manning, 2017; Zhang et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 243, |
| "end": 268, |
| "text": "(Dozat and Manning, 2017;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 269, |
| "end": 288, |
| "text": "Zhang et al., 2019)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pointer Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In this section, we briefly describe the implementation of our system that allows easy customization to different types of VSDs. Our system employs a modular and customizable design and is implemented in Python. A user may implement a new feature extractor simply by writing a new feature extractor class where each feature is implemented as its class function (Figure 4). For example, @single_input_feature([1]) denotes that the subsequent function should be applied to the second block of each context (thus corresponding to feature V6). Likewise, the features for pointer identification can be implemented by marking a function with @pointer_feature(), which takes a candidate block b_j (tb1), a target block b_i (tb2), the block next to the target block b_{i+1} (tb3) and b_head(j) (head_tb) as input.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 361, |
| "end": 370, |
| "text": "(Figure 4", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Implementation and Customization", |
| "sec_num": "5" |
| }, |
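The decorator-based registry described above can be re-created in miniature as follows. This is a simplified sketch of the idea, not the actual pdf-struct API: the decorator name follows the text, but the signature and the `FeatureExtractor` aggregation logic here are assumptions.

```python
def single_input_feature(indices):
    """Mark a method as a feature applied to the given block indices
    of each context [b_{i-1}, b_i, b_{i+1}, b_{i+2}]."""
    def mark(fn):
        fn._feature_indices = indices
        return fn
    return mark

class FeatureExtractor:
    def extract(self, context):
        """Aggregate every method marked by @single_input_feature
        into a single feature vector for this context."""
        vector = []
        for name in dir(self):  # alphabetical, so ordering is stable
            fn = getattr(self, name)
            indices = getattr(fn, "_feature_indices", None)
            if indices is not None:
                vector.extend(fn(context[k]) for k in indices)
        return vector

class PlainTextFeatureExtractor(FeatureExtractor):
    @single_input_feature([1])            # applied to the second block
    def all_capital(self, block):
        return float(block.isupper())

    @single_input_feature([1, 2])         # applied to two blocks
    def starts_with_whereas(self, block):
        return float(block.lower().startswith("whereas"))
```

A subclass only declares decorated methods; inheriting from an existing extractor reuses its features, mirroring the design described for the PDF extractors.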
| { |
| "text": "A feature extractor object is instantiated for each document where all feature functions are automatically aggregated to produce the feature vector. A new feature extractor can inherit from an existing feature extractor (e.g., feature extractors for Contract pdf en and Contract pdf ja both inherit from a base PDF feature extractor), which makes it easy to reuse implementations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation and Customization", |
| "sec_num": "5" |
| }, |
| { |
| "text": "While we do report transition prediction accuracy, it is not a true task metric since it is rooted in our formulation of the task. Looking back at our initial motivation in Section 1, we introduce two sets of evaluation metrics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "The first set of metrics is rooted in the IE perspective. For IE, it is important to identify ancestor-descendant and sibling relationships because they allow, for example, identifying a subject (in an ancestral block) and its objects (a descendant block and its siblings). Thus, we evaluate F1 scores for identifying pairs of blocks in (1) same-paragraph, (2) sibling, and (3) ancestor-descendant relationships, respectively (Figure 5). Note that we do not include cousin blocks in the sibling relationship, because it is not clear whether cousin blocks carry any meaningful information in the context of IE.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 421, |
| "end": 429, |
| "text": "Figure 5", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "6.1" |
| }, |
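The pairwise relationship metrics can be made concrete with a small sketch: given gold and predicted trees, enumerate ancestor-descendant pairs and compute F1 over the two pair sets. The tree encoding below (parent-pointer lists, with -1 for roots) is our own assumption for illustration.

```python
# Sketch: ancestor-descendant pair F1 between two trees given as
# parent-pointer lists (parent[i] is the parent index of block i, -1 = root).
# The encoding is illustrative, not the authors' data structure.

def ancestor_descendant_pairs(parent):
    pairs = set()
    for i in range(len(parent)):
        a = parent[i]
        while a != -1:
            pairs.add((a, i))  # (ancestor, descendant)
            a = parent[a]
    return pairs

def pair_f1(gold_parent, pred_parent):
    gold = ancestor_descendant_pairs(gold_parent)
    pred = ancestor_descendant_pairs(pred_parent)
    if not gold and not pred:
        return 1.0
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Gold: blocks 1 and 2 are children of root 0; prediction chains 2 under 1.
print(pair_f1([-1, 0, 0], [-1, 0, 1]))
```

Same-paragraph and sibling F1 can be computed the same way by swapping in the corresponding pair-enumeration function.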
| { |
| "text": "We use the second set of metrics to evaluate a system's efficacy as a preprocessing tool for more general NLP pipelines. We evaluate paragraph boundary identification metrics, since paragraph boundaries can be used to determine appropriate chunks of text to be fed into the NLP pipelines. We also report the accuracy of removing debris, i.e., of classifying blocks as omitted.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We used five-fold cross-validation for the evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We compared our system against the following baselines: Numbering baseline (Hatsutori et al., 2017) This baseline detects numberings using a set of regular expressions and identifies a drop in the hierarchy when the type of numbering changes. Adapting Hatsutori et al. (2017) to our problem formulation, our implementation is the same as the feature \"numbering transition (T1).\" Visual baseline This baseline relies purely on visual cues, i.e., indentation and line spacing. For each pair of consecutive blocks, this baseline outputs (1) continuous when indentation does not change and line spacing is normal (as in feature V8), (2) consecutive when indentation does not change and line spacing is larger than normal, (3) down when indentation gets larger, and (4) up when indentation gets smaller. On up, it points back at the closest block with the same indentation. PDFMiner We use this popular open-source project to detect paragraph boundaries as in Bast and Korzen (2017). PDFMiner relies purely on geometric heuristics to detect paragraph breaks.", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 99, |
| "text": "(Hatsutori et al., 2017)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 255, |
| "end": 278, |
| "text": "Hatsutori et al. (2017)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 958, |
| "end": 980, |
| "text": "Bast and Korzen (2017)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "6.2" |
| }, |
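The visual baseline's rules can be written down directly. The threshold values and the block attributes (`indent`, `spacing`) below are illustrative assumptions, not values from the paper.

```python
# Sketch of the visual baseline: a transition label from indentation and
# line spacing only. Thresholds and attribute names are assumptions.

def visual_transition(prev, curr, normal_spacing=1.2, tol=0.5):
    """prev/curr are dicts with 'indent' and 'spacing' (gap above curr)."""
    if curr["indent"] > prev["indent"] + tol:
        return "down"
    if curr["indent"] < prev["indent"] - tol:
        return "up"  # a full system would also point back to a head block
    if curr["spacing"] > normal_spacing:
        return "consecutive"
    return "continuous"

print(visual_transition({"indent": 0.0, "spacing": 1.0},
                        {"indent": 4.0, "spacing": 1.0}))
```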
| { |
| "text": "We used Random Forest (Breiman, 2001) as the transition and pointer classifiers, which is well suited for the categorical features that make up the majority of our features. We did not tune the hyperparameters of the Random Forest classifier and used the default values of scikit-learn (Pedregosa et al., 2011). For the language model coherence feature S1, we used GPT-2 medium 9 for English documents and japanese-gpt2-medium 10 for Japanese documents. (Table 4 reports results for the evaluation from the preprocessing perspective; \"Micro\": micro-average, \"Macro\": macro-average, \"P\": precision, \"R\": recall, \"F\": F1 score.)", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 37, |
| "text": "(Breiman, 2001)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 268, |
| "end": 291, |
| "text": "(Pedregosa et al., 2011", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 521, |
| "end": 528, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Implementation Details", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Structure and preprocessing evaluations are shown in Table 3 and Table 4 , respectively. Our system obtained micro-average structure prediction accuracies of 0.914 for Contract pdf en , 0.908 for Law pdf en , 0.828 for Contract txt en and 0.940 for Contract pdf ja , significantly outperforming the best baselines with 0.778, 0.827, 0.674 and 0.623, respectively. Our system performed the best with respect to F1 scores for all but one structural relationship. 9 https://huggingface.co/gpt2 10 https://huggingface.co/rinna/japanese-gpt2-medium", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 53, |
| "end": 60, |
| "text": "Table 3", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 65, |
| "end": 72, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "The difference was even more pronounced for paragraph boundary detection. For Contract pdf en , our system obtained a micro-average paragraph boundary detection F1 score of 0.953, which is significantly better than PDFMiner with an F1 score of 0.739. PDFMiner performed on par with our visual baseline and generally worse than our numbering baseline. This shows the importance of incorporating textual information when preprocessing VSDs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "Micro-average transition label prediction accuracies were 0.951 (Contract pdf en ), 0.938 (Law pdf en ), 0.955 (Contract txt en ) and 0.923 (Contract pdf ja ). We investigated the importance of each feature with greedy forward selection and greedy backward elimination of the features (Table 5 ). We can observe that our system makes balanced use of the visual and textual cues.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 285, |
| "end": 293, |
| "text": "(Table 5", |
| "ref_id": "TABREF12" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.4" |
| }, |
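The greedy forward selection used in the feature-importance analysis can be sketched generically: repeatedly add whichever remaining feature most improves accuracy. The `score` callback below is a stand-in for retraining and evaluating the classifier on a feature subset; the toy scoring function and feature names are purely illustrative.

```python
# Sketch of greedy forward feature selection. `score(features)` stands in
# for training/evaluating the classifier on a feature subset.

def greedy_forward_selection(all_features, score, steps):
    selected, trace = [], []
    for _ in range(steps):
        best = max((f for f in all_features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
        trace.append((best, score(selected)))
    return trace

# Toy score: each feature contributes a fixed gain, capped at 0.95.
gains = {"T1": 0.7, "V10": 0.1, "T10": 0.05}
def toy_score(feats):
    return min(0.95, sum(gains[f] for f in feats))

print(greedy_forward_selection(list(gains), toy_score, 2))
```

Greedy backward elimination is the mirror image: start from the full feature set and repeatedly drop the feature whose removal hurts the score least.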
| { |
| "text": "\"Indentation (V1)\", \"larger line spacing (V8)\" and \"numbering hierarchy (T1)\", which partially represent the baselines, were ranked high in many cases. At the same time, other features such as \"all capital (T10)\" and \"punctuated (T2)\" also contributed significantly to the accuracy, which made our system much superior to the baselines. In Table 5, numbers in parentheses show micro-average transition label prediction accuracy, and the first line shows the results with all features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "The feature importance analysis revealed that the semantic cue (S1) was no more important than the other cues. We suspect that the feature (which compares whether an adjacent or a non-adjacent block is more likely given a context) had fallen back to a mere language model with the context being ignored in some cases, possibly due to GPT-2 not being fine-tuned on the legal domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "We also conducted a qualitative error analysis. For Contract pdf en , we found that our system performed poorly on documents that had bold or underlined section titles followed by paragraphs without any indentation (predicting continuous instead of down). We believe incorporating typographic features would improve our system, as implied by the success of the \"all capital (T10)\" feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "For Contract txt en , we found that blocks consisting entirely of capital letters or underscores were misclassified as omitted. All-capital words and underscores are frequently used to denote headers and footers, but in these examples they were used as section titles and input fields. Unlike for Contract pdf en , we attribute this problem to a lack of training data, as these blocks should have been classified correctly with other features (such as T4 and T8) if the system had seen similar patterns in the training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "Interestingly, we observed that the system tends to do better on documents that are hierarchically more complex. This may be because hierarchically complex documents tend to incorporate more cues to help humans comprehend them.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "As discussed in Section 1, previous works mainly focused on word segmentation and layout analysis, whereas fine-grained logical structure analysis of VSDs is less addressed. Nevertheless, there exist some studies with similar goals. Abreu et al. (2019) and Ferr\u00e9s et al. (2018) approached logical structure analysis by identifying specific structures in VSDs, such as subheadings. However, these studies are too coarse-grained to handle paragraph-level logical structure, and thus cannot satisfy the needs we discussed in Section 1. The FinSBD-3 shared task (Au et al., 2021) is more fine-grained than those works and incorporates extraction of list items. However, its main focus is not the analysis of logical structures; it has only four static levels for list hierarchies and does not consider hierarchies in non-list paragraphs. Hatsutori et al. (2017) proposed a rule-based system that relies purely on numberings. We compared our system against it in Section 6 and showed that our system, which also incorporates textual and semantic cues, is superior to their method. Sporleder and Lapata (2004) proposed a paragraph boundary detection method for plain texts that relies purely on textual and semantic cues. While their method is not intended for VSDs, some of their ideas could be incorporated into our work as additional features. We leave the use of more advanced semantic cues for future work.", |
| "cite_spans": [ |
| { |
| "start": 242, |
| "end": 261, |
| "text": "Abreu et al. (2019)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 266, |
| "end": 286, |
| "text": "Ferr\u00e9s et al. (2018)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 595, |
| "end": 611, |
| "text": "(Au et al., 2021", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 871, |
| "end": 894, |
| "text": "Hatsutori et al. (2017)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1113, |
| "end": 1140, |
| "text": "Sporleder and Lapata (2004)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "7" |
| }, |
| { |
| "text": "While the goal is different, our textual features have some similarity to those used in sentence boundary detection (Gillick, 2009). Since our goal is to predict structures as well as boundaries, we employ richer textual and visual features that they do not utilize.", |
| "cite_spans": [ |
| { |
| "start": 116, |
| "end": 131, |
| "text": "(Gillick, 2009)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "7" |
| }, |
| { |
| "text": "LayoutLM (Xu et al., 2020, 2021) incorporates multimodal self-supervised learning to leverage deep learning for form understanding. While it may alleviate the need for a large training dataset, it is not trivial to adopt the same method for logical structure analysis, as text blocks would not fit into LayoutLM's context. Furthermore, it is easier to diagnose and improve our system, as it utilizes a combination of hand-crafted features, whereas deep learning systems tend to be complete black boxes.", |
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 25, |
| "text": "(Xu et al., 2020", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 26, |
| "end": 44, |
| "text": "(Xu et al., , 2021", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We proposed a transition parser-like formulation of the logical structure analysis of VSDs and developed a feature-based machine learning system that fuses visual, textual and semantic cues. Our system significantly outperformed baselines and an existing open-source tool on different types of VSDs. The experiments revealed that incorporating both visual and textual cues is crucial for successful logical structure analysis of VSDs. As future work, we will incorporate typographic and more advanced semantic cues.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Calculated from Obermaier et al. (2016) by regarding their emails, PDFs and text documents as the denominator.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "d(bi, bj)/ max(len(bi), len(bj)) < 0.1, where d gives the Levenshtein distance and len gives the length of text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
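The near-duplicate criterion d(bi, bj) / max(len(bi), len(bj)) < 0.1 above can be implemented directly. Below is a minimal sketch using a standard dynamic-programming Levenshtein distance; the example strings are our own.

```python
# Sketch of the near-duplicate test d(bi, bj) / max(len(bi), len(bj)) < 0.1,
# with a standard DP Levenshtein distance.

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def near_duplicate(bi, bj, threshold=0.1):
    if not bi and not bj:
        return True
    return levenshtein(bi, bj) / max(len(bi), len(bj)) < threshold

print(near_duplicate("Confidential Information", "Confidental Information"))
```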
| { |
| "text": "More precisely, the pointer features are implemented slightly different for different document types, such as numbering being modified to Japanese for Contract pdf ja , but they are intended to have similar functionalities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://www.sec.gov/Archives/edgar/Oldloads/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We used computational resource of AI Bridging Cloud Infrastructure (ABCI) provided by the National Institute of Advanced Industrial Science and Technology (AIST) for the experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section, we provide supplemental information regarding the data collection and annotation discussed in Section 3. For PDFs, we queried Google search engines and downloaded the PDF files that the search engines returned. We used the following queries and domains: Contract pdf en : \"\"non-disclosure\" agreement filetype:pdf\" on seven domains from countries where English is widely spoken (US \".com\", UK \".co.uk\", Australia \".com.au\", New Zealand \".co.nz\", Singapore \".com.sg\", Canada \".ca\", South Africa \".co.za\").", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A.1 Details of Data Collection and Annotation", |
| "sec_num": null |
| }, |
| { |
| "text": "en \"site:*.gov \"order\" filetype:pdf\" on \"google.com\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Law pdf", |
| "sec_num": null |
| }, |
| { |
| "text": "ja \"\"\u79d8\u5bc6\u4fdd\u6301\u5951\u7d04\u66f8\" filetype:pdf\" on \"google.co.jp\". For the collection of Contract txt en , we first downloaded all the documents filed at EDGAR from 1996 to 2020 in the form of daily archives 11 . We uncompressed each archive and deserialized the files using regular expressions by reference to the EDGAR specifications (The U.S. Securities and Exchange Commission, 2018), which gave us 12,851,835 filings, each of which contains multiple documents. We then extracted NDA candidates from the documents by rule-based filtering. Using metadata obtained during the deserialization, we extracted documents whose file type starts with \"EX\" (denoting that it is an exhibit), whose file extension is one of \".pdf\", \".PDF\", \".txt\", \".TXT\", \".html\", \".HTML\", \".htm\" or \".HTM\", and whose content is matched by the regular expression \"(?<![a-zA-Z,\"()]*)([Nn]on[-][Dd]isclosure)|(NON[-]DISCLOSURE)\". We then randomly selected documents that fulfill the following criteria: \u2022 it is an NDA or an executive order, \u2022 it has embedded texts (for PDFs), \u2022 it is a single-column document, and \u2022 a similar document is not yet in the dataset. The last criterion mainly targets contracts from the same organizations and executive orders from the same authorities. It ensures that we get a wide variety of documents in our dataset. The datasets were annotated by one of the authors. We did not employ majority vote to improve annotation consistency, because labels can be easily determined by a brief inspection of the document.", |
| "cite_spans": [ |
| { |
| "start": 840, |
| "end": 843, |
| "text": "[-]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contract pdf", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "FinDSE@FinTOC-2019 Shared Task", |
| "authors": [ |
| { |
| "first": "Carla", |
| "middle": [], |
| "last": "Abreu", |
| "suffix": "" |
| }, |
| { |
| "first": "Henrique", |
| "middle": [], |
| "last": "Cardoso", |
| "suffix": "" |
| }, |
| { |
| "first": "Eug\u00e9nio", |
| "middle": [], |
| "last": "Oliveira", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Second Financial Narrative Processing Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carla Abreu, Henrique Cardoso, and Eug\u00e9nio Oliveira. 2019. FinDSE@FinTOC-2019 Shared Task. In Pro- ceedings of the Second Financial Narrative Process- ing Workshop.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "FinSBD-2021: The 3rd Shared Task on Structure Boundary Detection in Unstructured Text in the Financial Domain", |
| "authors": [ |
| { |
| "first": "Willy", |
| "middle": [], |
| "last": "Au", |
| "suffix": "" |
| }, |
| { |
| "first": "Abderrahim", |
| "middle": [], |
| "last": "Ait-Azzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Juyeon", |
| "middle": [], |
| "last": "Kang", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Companion Proceedings of the Web Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3442442.3451378" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Willy Au, Abderrahim Ait-Azzi, and Juyeon Kang. 2021. FinSBD-2021: The 3rd Shared Task on Struc- ture Boundary Detection in Unstructured Text in the Financial Domain. In Companion Proceedings of the Web Conference 2021.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A Benchmark and Evaluation for Text Extraction from PDF", |
| "authors": [ |
| { |
| "first": "Hannah", |
| "middle": [], |
| "last": "Bast", |
| "suffix": "" |
| }, |
| { |
| "first": "Claudius", |
| "middle": [], |
| "last": "Korzen", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "2017 ACM/IEEE Joint Conference on Digital Libraries", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/JCDL.2017.7991564" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hannah Bast and Claudius Korzen. 2017. A Bench- mark and Evaluation for Text Extraction from PDF. In 2017 ACM/IEEE Joint Conference on Digital Li- braries.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Random Forests. Machine Learning", |
| "authors": [ |
| { |
| "first": "Leo", |
| "middle": [], |
| "last": "Breiman", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "45", |
| "issue": "", |
| "pages": "5--32", |
| "other_ids": { |
| "DOI": [ |
| "10.1023/A:1010933404324" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leo Breiman. 2001. Random Forests. Machine Learn- ing, 45(1):5-32.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Deep Biaffine Attention for Neural Dependency Parsing", |
| "authors": [ |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Dozat", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "5th International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep Biaffine Attention for Neural Dependency Parsing. In 5th International Conference on Learn- ing Representations.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "PDFdigest: an Adaptable Layout-Aware PDF-to-XML Textual Content Extractor for Scientific Articles", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Ferr\u00e9s", |
| "suffix": "" |
| }, |
| { |
| "first": "Horacio", |
| "middle": [], |
| "last": "Saggion", |
| "suffix": "" |
| }, |
| { |
| "first": "Francesco", |
| "middle": [], |
| "last": "Ronzano", |
| "suffix": "" |
| }, |
| { |
| "first": "\u00c0lex", |
| "middle": [], |
| "last": "Bravo", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Ferr\u00e9s, Horacio Saggion, Francesco Ronzano, and \u00c0lex Bravo. 2018. PDFdigest: an Adaptable Layout-Aware PDF-to-XML Textual Content Ex- tractor for Scientific Articles. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Sentence Boundary Detection and the Problem with the U.S", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Gillick", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Gillick. 2009. Sentence Boundary Detection and the Problem with the U.S. In Proceedings of Human Language Technologies: The 2009 Annual Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Estimating Legal Document Structure by Considering Style Information and Table of Contents", |
| "authors": [ |
| { |
| "first": "Yoichi", |
| "middle": [], |
| "last": "Hatsutori", |
| "suffix": "" |
| }, |
| { |
| "first": "Katsumasa", |
| "middle": [], |
| "last": "Yoshikawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Haruki", |
| "middle": [], |
| "last": "Imai", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "New Frontiers in Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "270--283", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/978-3-319-61572-1_18" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoichi Hatsutori, Katsumasa Yoshikawa, and Haruki Imai. 2017. Estimating Legal Document Structure by Considering Style Information and Table of Con- tents. In New Frontiers in Artificial Intelligence, pages 270-283. Springer International Publishing.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "About the Panama Papers", |
| "authors": [ |
| { |
| "first": "Frederik", |
| "middle": [], |
| "last": "Obermaier", |
| "suffix": "" |
| }, |
| { |
| "first": "Bastian", |
| "middle": [], |
| "last": "Obermayer", |
| "suffix": "" |
| }, |
| { |
| "first": "Vanessa", |
| "middle": [], |
| "last": "Wormer", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Jaschensky", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frederik Obermaier, Bastian Obermayer, Vanessa Wormer, and Wolfgang Jaschensky. 2016. About the Panama Papers. S\u00fcddeutsche Zeitung.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Scikit-learn: Machine Learning in Python", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pedregosa", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Varoquaux", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Gramfort", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Michel", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Thirion", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Grisel", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Blondel", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Prettenhofer", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Dubourg", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Vanderplas", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Passos", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Cournapeau", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Brucher", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Perrot", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Duchesnay", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2825--2830", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Language Models are Unsupervised Multitask Learners", |
| "authors": [ |
| { |
| "first": "Alec", |
| "middle": [], |
| "last": "Radford", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Rewon", |
| "middle": [], |
| "last": "Child", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Luan", |
| "suffix": "" |
| }, |
| { |
| "first": "Dario", |
| "middle": [], |
| "last": "Amodei", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Ope-nAI blog", |
| "volume": "1", |
| "issue": "8", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. Ope- nAI blog, 1(8):9.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Visual Detection with Context for Document Layout Analysis", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Soto", |
| "suffix": "" |
| }, |
| { |
| "first": "Shinjae", |
| "middle": [], |
| "last": "Yoo", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D19-1348" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlos Soto and Shinjae Yoo. 2019. Visual Detection with Context for Document Layout Analysis. In Pro- ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Automatic Paragraph Identification: A Study across Languages and Domains", |
| "authors": [ |
| { |
| "first": "Caroline", |
| "middle": [], |
| "last": "Sporleder", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Caroline Sporleder and Mirella Lapata. 2004. Auto- matic Paragraph Identification: A Study across Lan- guages and Domains. In Proceedings of the 2004 Conference on Empirical Methods in Natural Lan- guage Processing.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Deep-PDF: A Deep Learning Approach to Extracting Text from PDFs", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Stahl", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| }, |
| { |
| "first": "Drahomira", |
| "middle": [], |
| "last": "Herrmannova", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Patton", |
| "suffix": "" |
| }, |
| { |
| "first": "Jack", |
| "middle": [], |
| "last": "Wells", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher Stahl, Steven Young, Drahomira Herrman- nova, Robert Patton, and Jack Wells. 2018. Deep- PDF: A Deep Learning Approach to Extracting Text from PDFs. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "EDGAR\u00ae Public Dissemination Service Technical Specification", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "The U.S. Securities and Exchange Commission", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "The U.S. Securities and Exchange Commission. 2018. EDGAR\u00ae Public Dissemination Service Technical Specification.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "LayoutLMv2: Multi-modal pre-training for visually-rich document understanding", |
| "authors": [ |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yiheng", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tengchao", |
| "middle": [], |
| "last": "Lv", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Cui", |
| "suffix": "" |
| }, |
| { |
| "first": "Furu", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "" |
| }, |
| { |
| "first": "Guoxin", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yijuan", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Dinei", |
| "middle": [], |
| "last": "Florencio", |
| "suffix": "" |
| }, |
| { |
| "first": "Cha", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Wanxiang", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Lidong", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2021.acl-long.201" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, and Lidong Zhou. 2021. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "LayoutLM: Pre-training of text and layout for document image understanding", |
| "authors": [ |
| { |
| "first": "Yiheng", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Minghao", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Cui", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaohan", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Furu", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3394486.3403172" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. LayoutLM: Pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "AMR Parsing as Sequence-to-Graph Transduction", |
| "authors": [ |
| { |
| "first": "Sheng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xutai", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Duh", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P19-1009" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019. AMR Parsing as Sequence-to-Graph Transduction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Overview of the logical structure analysis for VSDs and its formulation.", |
| "num": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "The Python implementation of a feature extractor. Consecutive numbering: Boolean features on whether a numbering in b_i is contiguous to a numbering in b_j and b_{head(j)}, respectively. Indentation: Categorical features on whether indentation gets larger, smaller or stays the same from b_j to b_i and from b_{head(j)} to b_{i+1}, respectively. Left aligned: Binary features on whether b_j, b_{i+1}", |
| "num": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "has ..... clause 2 of this Agreement; Disclosing Party: a Party to this ............... to the other Party; 2. CONFIDENTIAL INFORMATION 2.1. Confidential Information means ..... which the Disclosing Party or any of its Affiliates, ........... or any of its Affiliates, before, on or after the Effective Date. This includes: Page 1 of 5 a) the terms of this Agreement; 2.2. Confidential Information does not include ........ which: a) is independently developed by Receiving Party; or techniques, know-how, ..... or intangible form. b) all confidential or proprietary ........., pricing, operations, Evaluation from IE perspective. For each of ground truth and predicted trees, we extract a relationship matrix (right) that describes all the pairwise relationships and calculate F1 scores/accuracy by comparing the matrices.", |
| "num": null |
| }, |
| "TABREF0": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td colspan=\"2\">Transitions Pointers</td></tr><tr><td/><td>down</td><td>THE PARTIES AGREE AS FOLLOWS:</td></tr><tr><td/><td>down</td><td>1. DEFINITIONS</td></tr><tr><td/><td>consecutive</td></tr><tr><td/><td>up</td><td>B1</td></tr><tr><td>2. CONFIDENTIAL INFORMATION</td><td>down</td></tr><tr><td/><td>continuous</td></tr><tr><td/><td>omitted down consecutive continuous</td><td>2.1. Page 1 of 5 a) the terms of this Agreement;</td></tr><tr><td>2.2. Confidential Information does not include information which: a) is independently developed by Receiving Party; or Without logical structure analysis</td><td colspan=\"2\">down up continuous public knowledge or publicly available through no fault of the Receiving Party; or \u2026 B7 2.2. Confidential Information does not include information which: a) is independently developed by Receiving Party; or 2.2. Confidential Information does not include information which: a) is or subsequently becomes trade secrets, whether in tangible or intangible form.\u23ce customers, plans, pricing, operations, techniques, know-how, technical information, design, Agreement, ; Page 1 of 5 b) all confidential or proprietary information relating to: the business, or any of its Affiliates, before, on or after the Effective Date. This includes a) the terms of this the Disclosing Party or any of its Affiliates, discloses or makes available, to the Receiving Party 2.1. Confidential Information means all confidential information relating to the Purpose which 2. CONFIDENTIAL INFORMATION\u23ce techniques, know..... Removing debris based on logical structure analysis</td></tr></table>", |
| "text": "Confidential Information: has the meaning given in clause 2 of this Agreement;Disclosing Party: a Party to this ........... its Confidential Information to the other Party; 2. CONFIDENTIAL INFORMATION Confidential Information means ...... relating to the Purpose which the Disclosing Party or any of its Affiliates, ............, to the Receiving Party or any of its Affiliates, before, on or after the Effective Date. This includes: -how, ....., trade secrets, whether in tangible or intangible form. b) all confidential or proprietary .... business, customers, plans, pricing, operations," |
| }, |
| "TABREF2": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td><PAGE></td></tr><tr><td>(b) Law pdf en</td></tr></table>", |
| "text": "With exceptions as outlined below, all individuals currently living within the State of Illinois are ordered to stay at home or at their place of residence except as allowed in this Executive Order. To the extent individuals are using shared or outdoor spaces when outside their residence, they must at all times and as much as reasonably possible maintain social distancing of at least six feet from any other person, consistent with the Social Distancing Requirements set forth in this Executive Order. All persons may leave their homes or place of residence only for Essential Activities, Essential Governmental Functions, or to operate Essential Businesses and Operations, all as defined below.Individuals experiencing homelessness are exempt from this directive, but are strongly urged to obtain shelter, and governmental and other entities are strongly urged to make such shelter available as soon as possible and to the maximum extent practicable (and to use in their operation COVID-19 risk mitigation practices recommended by the U.S. Centers for Disease Control and Prevention (CDC) and the Illinois Department of Public" |
| }, |
| "TABREF4": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td>Contract pdf en</td><td>Law pdf en</td><td colspan=\"2\">Contract txt en Contract pdf ja</td></tr><tr><td>Format</td><td>PDF</td><td>PDF</td><td>Text</td><td>PDF</td></tr><tr><td>Language</td><td>English</td><td>English</td><td>English</td><td>Japanese</td></tr><tr><td>#Documents</td><td>40</td><td>40</td><td>22</td><td>40</td></tr><tr><td>#Text blocks</td><td>137.9</td><td>165.9</td><td>142.0</td><td>73.7</td></tr><tr><td>Max. depth</td><td/><td/><td/><td/></tr></table>", |
| "text": "d) http://www.septima.co.jp/contracts/27_himitsuhoji.pdf 5 https://www.sec.gov/edgar.shtml" |
| }, |
| "TABREF5": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "Dataset information data which is critical in the legal domain where most data is proprietary. For each block, our parser extracts features from a context of four blocks and performs multiclass classification over the five transition labels. Since omitted changes targets of transition, we also omit omitted blocks in feature extraction. For trans_i = omitted, we extract features from [" |
| }, |
| "TABREF7": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "List of features for each feature extractor analogous to those for PDFs. While readers can reference our open-sourced code for the concrete implementation, we will discuss some of the features that have important implementation details. For a target block b It outputs (1) continuous if no numbering is found, (2) consecutive if the numbering in b_{i+1} is contiguous to the numbering in b_i, (3) up if not consecutive and there is a corresponding number in the memory, and (4) down if it is none of the above and it is the first number in its numbering type. For example, B0 in" |
| }, |
| "TABREF10": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td>Contract pdf en</td><td>Law pdf en</td><td>Contract txt en</td><td>Contract pdf ja</td></tr><tr><td>Criteria</td><td/><td colspan=\"4\">PDFMiner Visual Number Ours PDFMiner Visual Number Ours Visual Number Ours PDFMiner Visual Number Ours</td></tr><tr><td>Paragraph boundary</td><td>Micro Macro</td><td>P 0.672 0.563 0.914 0.958 R 0.822 0.968 0.700 0.948 F 0.739 0.712 0.793 0.953 P 0.698 0.598 0.921 0.958 R 0.798 0.964 0.703 0.945</td><td colspan=\"2\">0.546 0.536 0.911 0.948 0.465 0.783 0.955 0.858 0.916 0.637 0.948 0.989 0.637 0.945 0.667 0.676 0.750 0.948 0.633 0.702 0.950 0.632 0.565 0.866 0.946 0.527 0.840 0.953 0.874 0.930 0.522 0.943 0.984 0.633 0.944</td><td>0.531 0.603 0.961 0.970 0.850 0.663 0.627 0.991 0.653 0.632 0.759 0.980 0.585 0.645 0.964 0.970 0.867 0.653 0.624 0.988</td></tr></table>", |
| "text": "Results for evaluation on IE perspective" |
| }, |
| "TABREF12": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "Eight most important features chosen by greedy forward selection and backward elimination." |
| } |
| } |
| } |
| } |