{
"paper_id": "U18-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:12:05.604071Z"
},
"title": "Overview of the 2018 ALTA Shared Task: Classifying Patent Applications",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Moll\u00e1",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Macquarie University Sydney",
"location": {
"country": "Australia"
}
},
"email": ""
},
{
"first": "Dilesha",
"middle": [],
"last": "Seneviratne",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an overview of the 2018 ALTA shared task. This is the 9th in the series of shared tasks organised by ALTA since 2010. The task was to classify Australian patent applications into the sections defined by the International Patent Classification (IPC), using data made available by IP Australia. We introduce the task, describe the data and present the results of the participating teams. Some of the participating teams outperformed the state of the art.",
"pdf_parse": {
"paper_id": "U18-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an overview of the 2018 ALTA shared task. This is the 9th in the series of shared tasks organised by ALTA since 2010. The task was to classify Australian patent applications into the sections defined by the International Patent Classification (IPC), using data made available by IP Australia. We introduce the task, describe the data and present the results of the participating teams. Some of the participating teams outperformed the state of the art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When a patent application is submitted there is a process where the application is classified by examiners of patent offices or other people. Patent classifications make it feasible to search quickly for documents about earlier disclosures similar to or related to the invention for which a patent is applied for, and to track technological trends in patent applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The International Patent Classification (IPC) is a hierarchical patent classification system that has been agreed internationally. The first edition of the classification was established by the World Intellectual Property Organization (WIPO) and came into force on September 1, 1968 (WIPO, 2018). The classification has undergone a number of revisions since then. Under the current version, a patent can have several classification symbols, but one of them is designated as the primary one. This is what is called the primary IPC mark.",
"cite_spans": [
{
"start": 264,
"end": 281,
"text": "September 1, 1968",
"ref_id": null
},
{
"start": 282,
"end": 293,
"text": "(WIPO, 2018",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An IPC classification symbol is specified according to a hierarchy of information. The generic form of the symbol is A01B 1/00, where each component has a special meaning as defined by WIPO (2018). The first character of the IPC classification symbol denotes the first level of the hierarchy, or section symbol: a letter from A to H, as defined in Table 1 (Sections of the IPC). The goal of the 2018 ALTA Shared Task is to automatically classify Australian patents into one of the IPC sections A to H. Section 2 introduces the ALTA shared tasks. Section 3 presents related work. Section 4 describes the data. Section 5 describes the evaluation criteria. Section 6 presents the results, and Section 7 concludes this paper.",
"cite_spans": [
{
"start": 185,
"end": 195,
"text": "WIPO (2018",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 196,
"end": 203,
"text": "Table 1",
"ref_id": null
},
{
"start": 347,
"end": 354,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The 2018 ALTA Shared Task is the 9th of the shared tasks organised by the Australasian Language Technology Association (ALTA). Like the previous ALTA shared tasks, it is targeted at university students with programming experience, but it is also open to graduates and professionals. The general objective of these shared tasks is to introduce interested people to the sort of problems that are the subject of active research in a field of natural language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The 2018 ALTA Shared Task",
"sec_num": "2"
},
{
"text": "There are no limitations on the size of the teams or the means that they can use to solve the problem, as long as the processing is fully automatic: there should be no human intervention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The 2018 ALTA Shared Task",
"sec_num": "2"
},
{
"text": "As in past ALTA shared tasks, there are two categories: a student category and an open category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The 2018 ALTA Shared Task",
"sec_num": "2"
},
{
"text": "\u2022 All the members of teams in the student category must be university students. The teams cannot have members who are employed full-time or who have completed a PhD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The 2018 ALTA Shared Task",
"sec_num": "2"
},
{
"text": "\u2022 Any other teams fall into the open category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The 2018 ALTA Shared Task",
"sec_num": "2"
},
{
"text": "The prize is awarded to the team that performs best on the private test set, a subset of the evaluation data for which participant scores are revealed only at the end of the evaluation period (see Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The 2018 ALTA Shared Task",
"sec_num": "2"
},
{
"text": "Extensive research has been conducted on automating patent classification in the IPC hierarchy, and a wide variety of approaches have been proposed. These approaches use features generated or extracted from patent content (claims, description, etc.), patent metadata (title, applicant name, filing date, inventor name, etc.) and citations to represent patent documents in classification (Liu and Shih, 2011). Patent content-based features are the most popular choice among the different types of features for patent classification (Liu and Shih, 2011). In addition, features based on patent metadata, which are considered to have strong classification power, have been used to boost classification performance (Richter and MacFarlane, 2005). Further, patents are not isolated: they are connected through citations, which provide rich information about the patent network. Thus, researchers have utilised patent citation information to generate features for patent classification (Liu and Shih, 2011; Li et al., 2007). While all these types of features have been used to build classifiers, which features best represent patents is still an open question (Gomez and Moens, 2014b).",
"cite_spans": [
{
"start": 389,
"end": 409,
"text": "(Liu and Shih, 2011)",
"ref_id": "BIBREF6"
},
{
"start": 541,
"end": 560,
"text": "(Liu and Shih, 2011",
"ref_id": "BIBREF6"
},
{
"start": 723,
"end": 753,
"text": "(Richter and MacFarlane, 2005)",
"ref_id": "BIBREF9"
},
{
"start": 995,
"end": 1015,
"text": "(Liu and Shih, 2011;",
"ref_id": "BIBREF6"
},
{
"start": 1016,
"end": 1032,
"text": "Li et al., 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Some of the widely used classification algorithms in the literature for building patent classification systems are Naive Bayes (NB), Artificial Neural Networks (ANN), Support Vector Machines (SVM), K-Nearest Neighbours (KNN), Decision Trees (DT) and Logistic Regression (LR). Most of these systems have focused on achieving classification effectiveness. SVM has shown superior effectiveness on some datasets (Fall et al., 2003), yet it has not been able to scale to large datasets. Seneviratne et al. (2015) proposed a document signature-based patent classification approach employing KNN, which addresses scalability and efficiency while maintaining competitive effectiveness.",
"cite_spans": [
{
"start": 443,
"end": 461,
"text": "(Fall et al., 2003",
"ref_id": "BIBREF1"
},
{
"start": 520,
"end": 545,
"text": "Seneviratne et al. (2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Given that there are different evaluation measures and different datasets, it is difficult to compare the performance of the many patent classification approaches. Apart from shared evaluation tasks on patent classification such as CLEF-IP 2010 (Piroi et al., 2010) and CLEF-IP 2011 (Piroi et al., 2011), where the performance of systems was evaluated using benchmark datasets, only a limited number of approaches (e.g. Fall et al. (2003), Tikk et al. (2005) and Seneviratne et al. (2015)) have evaluated their methods using publicly available complete datasets such as WIPO-alpha 1 and WIPO-de. 2 The majority of other systems have been evaluated using ad-hoc datasets, making it difficult to extrapolate their performance (Gomez and Moens, 2014b).",
"cite_spans": [
{
"start": 247,
"end": 267,
"text": "(Piroi et al., 2010)",
"ref_id": "BIBREF7"
},
{
"start": 285,
"end": 305,
"text": "(Piroi et al., 2011)",
"ref_id": "BIBREF8"
},
{
"start": 422,
"end": 440,
"text": "Fall et al. (2003)",
"ref_id": "BIBREF1"
},
{
"start": 443,
"end": 461,
"text": "Tikk et al. (2005)",
"ref_id": "BIBREF11"
},
{
"start": 466,
"end": 491,
"text": "Seneviratne et al. (2015)",
"ref_id": "BIBREF10"
},
{
"start": 596,
"end": 597,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "The CLEF-IP 2010 and 2011 classification tasks required participants to classify patents at the IPC subclass level (Piroi et al., 2010, 2011), which is finer grained than the section level used in the ALTA shared task. Both of these classification tasks used evaluation measures such as Precision@1, Precision@5, Recall@5, MAP, and F1 at 5, 25 and 50. Results varied across systems; the best were obtained by Verberne and D'hondt (2011), who achieved 0.74, 0.86 and 0.71 for precision, recall and F1 score respectively.",
"cite_spans": [
{
"start": 102,
"end": 121,
"text": "(Piroi et al., 2010",
"ref_id": "BIBREF7"
},
{
"start": 122,
"end": 143,
"text": "(Piroi et al., , 2011",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Most of the researchers who have conducted experiments with the complete WIPO-alpha and WIPO-de datasets have reported their results at the IPC section and subclass levels. For example, the hierarchical classification method by Tikk et al. (2005) achieved an accuracy of 0.66 at the section level with the WIPO-alpha dataset and 0.65 with the WIPO-de dataset. Gomez and Moens (2014a) reported their classification results for WIPO-alpha at the section level, with an accuracy of 0.74 and a macro-averaged F1 score of 0.71. (Table 2: First 5 rows of the training data: 0 A; 1 G; 2 A; 3 A; 4 D; 5 A.)",
"cite_spans": [
{
"start": 220,
"end": 238,
"text": "Tikk et al. (2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 556,
"end": 600,
"text": "0 A 1 G 2 A 3 A 4 D 5 A Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "The data used in the 2018 ALTA Shared Task consists of a collection of Australian patents partitioned into 3,972 documents for training and 1,000 documents for testing. The documents are plain-text files produced by applying a text-extraction tool to the original PDF files. As a result, the documents contain errors, some of which are documented by the participants of the shared task (Benites et al., 2018; Hepburn, 2018). In particular, 61 documents contain the string \"NA[newline]parse failure\". In addition, metadata such as titles and authors is not marked up in the documents.",
"cite_spans": [
{
"start": 400,
"end": 422,
"text": "(Benites et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 423,
"end": 437,
"text": "Hepburn, 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "The data have been anonymised by replacing the original file names with unique IDs starting from 1. Prior to assigning the IDs, the files were shuffled and split into the training and test sets. Two CSV files specify the training and test data: the training file contains the annotated sections, and the test file contains only the IDs of the test documents. Table 2 shows the first lines of the CSV file specifying the training data. Figure 1 shows the label distributions of the training and test data. There was no attempt to obtain stratified splits, and consequently there are slight differences in the label distributions. We can also observe a large imbalance in the distribution of labels: the most frequent label (\"A\") occurs in more than 30% of the data, and the least frequent label (\"D\") occurs in only 0.2% to 0.3% of the data.",
"cite_spans": [],
"ref_spans": [
{
"start": 390,
"end": 397,
"text": "Table 2",
"ref_id": null
},
{
"start": 466,
"end": 474,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "As in previous ALTA shared tasks, the 2018 shared task was managed and evaluated using Kaggle in Class, under the name \"ALTA 2018 Challenge\". 3 This enabled the participants to submit runs prior to the submission deadline for immediate feedback, and to compare submissions in a leaderboard.",
"cite_spans": [
{
"start": 135,
"end": 138,
"text": ". 3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "The framework provided by Kaggle in Class allowed the partition of the test data into a public and a private section. Whenever a participating team submitted a run, the evaluation results on the public partition were immediately available to the team, and the best results of each team appeared in the public leaderboard. The evaluation results on the private partition were available to the competition organisers only, and were used for the final ranking after the submission deadline. To split the test data into the public and private partitions, we used the defaults provided by Kaggle in Class. These defaults performed a random partition, with 50% of the data falling into the public partition and the remaining 50% into the private partition. The participants were able to see the entire unlabelled evaluation data, but they did not know which part of the evaluation data belonged to which partition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Each participating team was allowed to submit up to two (2) runs per day. By limiting the number of runs per day, and by not disclosing the results of the private partition, the risks of overfitting to the private test results were controlled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "The chosen evaluation metric was the micro-averaged F1 score. This metric is common in multi-label classification tasks, and measures the harmonic mean of recall and precision according to the formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "F_1 = \\frac{2 \\cdot p \\cdot r}{p + r}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "where p is the precision, computed as the ratio of true positives to all predicted positives, and r is the recall, computed as the ratio of true positives to all actual positives. In particular:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "p = \\frac{\\sum_{k \\in C} tp_k}{\\sum_{k \\in C} tp_k + \\sum_{k \\in C} fp_k} \\qquad r = \\frac{\\sum_{k \\in C} tp_k}{\\sum_{k \\in C} tp_k + \\sum_{k \\in C} fn_k}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "where tp_k, fp_k and fn_k are the number of true positives, false positives and false negatives, respectively, in class k \u2208 {A, B, C, D, E, F, G, H}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "A total of 14 teams registered in the student category, and 3 teams registered in the open category. Due to the nature of the Kaggle in Class framework, Kaggle users could register with the Kaggle system and submit runs without notifying the ALTA organisers, and therefore a number of runs were from unregistered teams. In total, 14 teams submitted runs, of which 6 were registered in the student category and 3 were registered in the open category. The remaining teams were disqualified from the final prize. Table 3 shows the results of the public and private submissions of all teams, including the runs of disqualified teams. Table 3 also includes two baselines. The Naive Bayes baseline was made available to the participants as a Kaggle kernel. 4 The baseline implemented a simple pipeline in the sklearn environment 5 consisting of a Naive Bayes classifier over tf.idf features. Both the Naive Bayes classifier and the tf.idf vectoriser used the defaults provided by sklearn and were not fine-tuned. All of the participants' best runs outperformed this baseline.",
"cite_spans": [
{
"start": 748,
"end": 749,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 507,
"end": 514,
"text": "Table 3",
"ref_id": null
},
{
"start": 627,
"end": 634,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The SIG CLS baseline is the system reported by Seneviratne et al. (2015). The system was retrained with the shared task data, with small changes to the system settings. 6 Virtually all participants obtained better results than this second baseline. (Footnotes: 4 https://www.kaggle.com/dmollaaliod/naive-bayes-baseline; 5 https://scikit-learn.org/stable/. Table 3: Micro-averaged F1 of the best public and private runs.)",
"cite_spans": [
{
"start": 47,
"end": 72,
"text": "Seneviratne et al. (2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 231,
"end": 238,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In past editions of the ALTA shared task there were some differences between the rankings given by the public and the private submissions. This is the first time, however, that the best teams in the public and the private runs differ. Following the rules of the shared task, the winning team was BMZ, and the best team in the student category was Jason Hepburn. These two teams describe their systems in separate papers (Benites et al., 2018; Hepburn, 2018).",
"cite_spans": [
{
"start": 423,
"end": 445,
"text": "(Benites et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 446,
"end": 460,
"text": "Hepburn, 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The 2018 ALTA Shared Task was the 9th in the series of shared tasks organised by ALTA. This year's task focused on the classification of Australian patent applications according to the sections defined by the International Patent Classification (IPC). There was very active participation, with some teams submitting up to 30 runs. Participation became increasingly active near the final submission date, and the top rows of the public leaderboard changed constantly. To the best of our knowledge, prior to this shared task the best-performing system using the WIPO-alpha dataset reported an accuracy of 0.74 and a macro-averaged F1 score of 0.71 (Gomez and Moens, 2014a). (Table 4: Micro-F1, Macro-F1 and Accuracy of best-performing systems and comparison with literature; the comparison includes Tikk et al. (2005) on WIPO-alpha with an accuracy of 0.66.)",
"cite_spans": [
{
"start": 640,
"end": 663,
"text": "(Gomez and Moens, 2014a",
"ref_id": "BIBREF2"
},
{
"start": 664,
"end": 682,
"text": "Tikk et al. (2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 699,
"end": 706,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Table 4 shows the accuracy and the micro- and macro-averaged F1 scores of the two top-performing systems on the test set of the ALTA shared task. 7 Both systems achieved better results on all comparable metrics, suggesting that they have outperformed the state of the art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "http://www.wipo.int/classifications/ ipc/en/ITsupport/Categorization/dataset/ wipo-alpha-readme.html 2 http://www.wipo.int/classifications/ ipc/en/ITsupport/Categorization/dataset/ index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.kaggle.com/c/ alta-2018-challenge",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The specific system settings were: signature width of 8,192 bits, and 10-nearest neighbours. The complete patent text was used to build the patent signatures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This shared task was made possible thanks to the data provided by the Digital Transformation Agency and IP Australia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Classifying patent applications with ensemble methods",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Benites",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Benites, Shervin Malmasi, and Marcos Zampieri. 2018. Classifying patent applications with ensemble methods. In Proceedings ALTA 2018.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automated categorization in the international patent classification",
"authors": [
{
"first": "J",
"middle": [],
"last": "Caspar",
"suffix": ""
},
{
"first": "Atilla",
"middle": [],
"last": "Fall",
"suffix": ""
},
{
"first": "Karim",
"middle": [],
"last": "T\u00f6rcsv\u00e1ri",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Benzineb",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Karetka",
"suffix": ""
}
],
"year": 2003,
"venue": "Acm Sigir Forum",
"volume": "37",
"issue": "",
"pages": "10--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caspar J Fall, Atilla T\u00f6rcsv\u00e1ri, Karim Benzineb, and Gabor Karetka. 2003. Automated categorization in the international patent classification. In Acm Sigir Forum. ACM, volume 37, pages 10-25.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Minimizer of the reconstruction error for multi-class document categorization",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2014,
"venue": "Expert Systems with Applications",
"volume": "41",
"issue": "3",
"pages": "861--868",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Carlos Gomez and Marie-Francine Moens. 2014a. Minimizer of the reconstruction error for multi-class document categorization. Expert Systems with Ap- plications 41(3):861-868.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A survey of automated hierarchical classification of patents",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2014,
"venue": "Professional Search in the Modern World",
"volume": "",
"issue": "",
"pages": "215--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Carlos Gomez and Marie-Francine Moens. 2014b. A survey of automated hierarchical classification of patents. In Professional Search in the Modern World, Springer, pages 215-249.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Universal language model finetuning for patent classification",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Hepburn",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings ALTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Hepburn. 2018. Universal language model fine- tuning for patent classification. In Proceedings ALTA 2018.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic patent classification using citation network information: an experimental study in nanotechnology",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hsinchun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiexun",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 7th ACM/IEEE-CS joint conference on Digital libraries",
"volume": "",
"issue": "",
"pages": "419--427",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Li, Hsinchun Chen, Zhu Zhang, and Jiexun Li. 2007. Automatic patent classification using citation network information: an experimental study in nan- otechnology. In Proceedings of the 7th ACM/IEEE- CS joint conference on Digital libraries. ACM, pages 419-427.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hybridpatent classification based on patent-network analysis",
"authors": [
{
"first": "Duen-Ren",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Meng-Jung",
"middle": [],
"last": "Shih",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of the American Society for Information Science and Technology",
"volume": "62",
"issue": "2",
"pages": "246--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duen-Ren Liu and Meng-Jung Shih. 2011. Hybrid- patent classification based on patent-network analy- sis. Journal of the American Society for Information Science and Technology 62(2):246-256.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Clef-ip 2010: Retrieval experiments in the intellectual property domain",
"authors": [
{
"first": "Florina",
"middle": [],
"last": "Piroi",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Lupu",
"suffix": ""
},
{
"first": "Allan",
"middle": [],
"last": "Hanbury",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"P"
],
"last": "Sexton",
"suffix": ""
},
{
"first": "Walid",
"middle": [],
"last": "Magdy",
"suffix": ""
},
{
"first": "Igor",
"middle": [
"V"
],
"last": "Filippov",
"suffix": ""
}
],
"year": 2010,
"venue": "CLEF",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florina Piroi, Mihai Lupu, Allan Hanbury, Alan P Sexton, Walid Magdy, and Igor V Filippov. 2010. Clef-ip 2010: Retrieval experiments in the intel- lectual property domain. In CLEF (notebook pa- pers/labs/workshops).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Clef-ip 2011: Retrieval in the intellectual property domain",
"authors": [
{
"first": "Florina",
"middle": [],
"last": "Piroi",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Lupu",
"suffix": ""
},
{
"first": "Allan",
"middle": [],
"last": "Hanbury",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Zenz",
"suffix": ""
}
],
"year": 2011,
"venue": "CLEF",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florina Piroi, Mihai Lupu, Allan Hanbury, and Veronika Zenz. 2011. Clef-ip 2011: Retrieval in the intellectual property domain. In CLEF (notebook papers/labs/workshop).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The impact of metadata on the accuracy of automated patent classification",
"authors": [
{
"first": "Georg",
"middle": [],
"last": "Richter",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Macfarlane",
"suffix": ""
}
],
"year": 2005,
"venue": "World Patent Information",
"volume": "27",
"issue": "1",
"pages": "13--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georg Richter and Andrew MacFarlane. 2005. The impact of metadata on the accuracy of automated patent classification. World Patent Information 27(1):13-26.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A signature approach to patent classification",
"authors": [
{
"first": "Dilesha",
"middle": [],
"last": "Seneviratne",
"suffix": ""
},
{
"first": "Shlomo",
"middle": [],
"last": "Geva",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Zuccon",
"suffix": ""
},
{
"first": "Gabriela",
"middle": [],
"last": "Ferraro",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Chappell",
"suffix": ""
},
{
"first": "Magali",
"middle": [],
"last": "Meireles",
"suffix": ""
}
],
"year": 2015,
"venue": "Asia Information Retrieval Symposium",
"volume": "",
"issue": "",
"pages": "413--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dilesha Seneviratne, Shlomo Geva, Guido Zuccon, Gabriela Ferraro, Timothy Chappell, and Magali Meireles. 2015. A signature approach to patent clas- sification. In Asia Information Retrieval Sympo- sium. Springer, pages 413-419.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Experiment with a hierarchical text categorization method on wipo patent collections",
"authors": [
{
"first": "Domonkos",
"middle": [],
"last": "Tikk",
"suffix": ""
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Bir\u00f3",
"suffix": ""
},
{
"first": "Jae",
"middle": [
"Dong"
],
"last": "Yang",
"suffix": ""
}
],
"year": 2005,
"venue": "Applied Research in Uncertainty Modeling and Analysis",
"volume": "",
"issue": "",
"pages": "283--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Domonkos Tikk, Gy\u00f6rgy Bir\u00f3, and Jae Dong Yang. 2005. Experiment with a hierarchical text catego- rization method on wipo patent collections. In Ap- plied Research in Uncertainty Modeling and Analy- sis, Springer, pages 283-302.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Patent classification experiments with the linguistic classification system lcs in clef-ip 2011",
"authors": [
{
"first": "Suzan",
"middle": [],
"last": "Verberne",
"suffix": ""
},
{
"first": "D'",
"middle": [],
"last": "Eva",
"suffix": ""
}
],
"year": 2011,
"venue": "CLEF (Notebook Papers/Labs/Workshop)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suzan Verberne and Eva D'hondt. 2011. Patent classi- fication experiments with the linguistic classification system lcs in clef-ip 2011. In CLEF (Notebook Pa- pers/Labs/Workshop).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Guide to the international patent classification, version 2018",
"authors": [
{
"first": "",
"middle": [],
"last": "Wipo",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "WIPO. 2018. Guide to the international patent classifi- cation, version 2018. Technical report, World Intel- lectual Property Organization.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Distribution of labels in percentages",
"num": null
},
"TABREF0": {
"content": "<table><tr><td colspan=\"2\">Symbol Section</td></tr><tr><td>A</td><td>Human necessities</td></tr><tr><td>B</td><td>Performing operations, transporting</td></tr><tr><td>C</td><td>Chemistry, metallurgy</td></tr><tr><td>D</td><td>Textiles, paper</td></tr><tr><td>E</td><td>Fixed constructions</td></tr><tr><td>F</td><td>Mechanical engineering, lighting,</td></tr><tr><td/><td>heating, weapons, blasting</td></tr><tr><td>G</td><td>Physics</td></tr><tr><td>H</td><td>Electricity</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Sections of the IPC",
"html": null
}
}
}
}