{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:54:03.381331Z"
},
"title": "Exploiting Domain-Specific Knowledge for Judgment Prediction is no Panacea",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Sala\u00fcn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Montr\u00e9al",
"location": {}
},
"email": "salaunol@iro.umontreal.ca"
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Montr\u00e9al",
"location": {}
},
"email": ""
},
{
"first": "Karim",
"middle": [],
"last": "Benyekhlef",
"suffix": "",
"affiliation": {
"laboratory": "Cyberjustice Laboratory",
"institution": "University of Montr\u00e9al",
"location": {}
},
"email": "karim.benyekhlef@umontreal.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Legal judgment prediction (LJP) usually consists of a text classification task aimed at predicting the verdict on the basis of the fact description. The literature shows that the use of articles as input features helps improve the classification performance. In this work, we designed a verdict prediction task based on landlord-tenant disputes and we applied BERT-based models to which we fed different article-based features. Although the results obtained are consistent with the literature, the improvements with the articles are mostly obtained with the most frequent labels, suggesting that pre-trained and fine-tuned transformer-based models are not scalable as is for legal reasoning in real-life scenarios, as they would only excel in accurately predicting the most recurrent verdicts to the detriment of other legal outcomes.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Legal judgment prediction (LJP) usually consists of a text classification task aimed at predicting the verdict on the basis of the fact description. The literature shows that the use of articles as input features helps improve the classification performance. In this work, we designed a verdict prediction task based on landlord-tenant disputes and we applied BERT-based models to which we fed different article-based features. Although the results obtained are consistent with the literature, the improvements with the articles are mostly obtained with the most frequent labels, suggesting that pre-trained and fine-tuned transformer-based models are not scalable as is for legal reasoning in real-life scenarios, as they would only excel in accurately predicting the most recurrent verdicts to the detriment of other legal outcomes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "At the intersection of machine learning and law, legal judgment prediction (LJP) is a task that consists of predicting either the outcome of a lawsuit (Skalak, 1989; Nallapati and Manning, 2008; Katz et al., 2017; Aletras et al., 2016; Liu and Chen, 2017) or some other case attributes such as legal areas (\u015eulea et al., 2017; Soh et al., 2019) or charges.",
"cite_spans": [
{
"start": 151,
"end": 165,
"text": "(Skalak, 1989;",
"ref_id": "BIBREF20"
},
{
"start": 166,
"end": 194,
"text": "Nallapati and Manning, 2008;",
"ref_id": "BIBREF17"
},
{
"start": 195,
"end": 213,
"text": "Katz et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 214,
"end": 235,
"text": "Aletras et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 236,
"end": 255,
"text": "Liu and Chen, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 306,
"end": 327,
"text": "(\u015eulea et al., 2017;",
"ref_id": "BIBREF22"
},
{
"start": 328,
"end": 345,
"text": "Soh et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One specificity of court rulings is that they are based on the application of legal articles to the factual description of the case. That is to say, a judge must determine whether some law articles are relevant to a case, and if applicable, whether the legal principles they embody are violated. Therefore, articles as domain-specific knowledge can be used as leverage for improving LJP performance, as shown by Luo et al. (2017) and Long et al. (2019) for charge prediction and divorce judgment prediction respectively. also went further by using articles for distinguishing confusing charges in a charge prediction task.",
"cite_spans": [
{
"start": 412,
"end": 429,
"text": "Luo et al. (2017)",
"ref_id": "BIBREF14"
},
{
"start": 434,
"end": 452,
"text": "Long et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Meanwhile, transformers (Vaswani et al., 2017) and BERT models (Devlin et al., 2019) in particular have been widely used in NLP tasks with the assumption that such models, if first pre-trained on massive corpora and then fine-tuned on the dataset of a given task, could suffice for achieving significant improvements. On one hand, this turned out to be true with the CAIL2018 dataset (charge prediction task) as shown by . On the other hand, Holzenberger et al. (2020) mentioned in a statutory reasoning entailment task that a transformer model does worse than a rule-based model, even after further pre-training on the domain corpus. Furthermore, in an employment notice prediction task, Lam et al. (2020) emphasized that domain adaptation of such models could even harm performance. These elements raise the question of how well a pre-trained transformer model can handle a legal NLP task and how much input from domain-specific knowledge such as legislative text can improve LJP performance. To the best of our knowledge, in the case of LJP tasks aimed at verdict prediction, no experiment so far has tested the application of pre-trained BERT models on tribunal decision text and cited law article text combined, which we intend to do in this work.",
"cite_spans": [
{
"start": 24,
"end": 46,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 63,
"end": 84,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 442,
"end": 468,
"text": "Holzenberger et al. (2020)",
"ref_id": "BIBREF6"
},
{
"start": 689,
"end": 706,
"text": "Lam et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We designed a multilabel classification task in which the model must predict the ruling outcomes on the basis of the fact description. One can imagine that such a predictive engine could be used for legal assistance for those who cannot afford the services of a legal expert. Unlike Luo et al. (2017), we put the article prediction aside in order to focus solely on the verdict prediction and assess under which conditions input article-based features can improve classification. For our experiments, we use a landlord-tenant disputes corpus used by Westermann et al. (2019) and Sala\u00fcn et al. (2020), from which we extracted fine-grained target labels and article features in order to encompass as much as possible the variety of rulings, thus making the task more representative of real-life cases. We present the preprocessing of the dataset along with the creation of article-based features in Section 2. The architectures of the models are shown in Section 3 along with three methods for integrating the information from the articles mentioned in the decisions. Discussion of the results and concluding remarks are provided in Sections 4 and 5 respectively.",
"cite_spans": [
{
"start": 285,
"end": 302,
"text": "Luo et al. (2017)",
"ref_id": "BIBREF14"
},
{
"start": 550,
"end": 575,
"text": "Westermann et al. (2019)",
"ref_id": null
},
{
"start": 580,
"end": 600,
"text": "Sala\u00fcn et al. (2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Administrative Housing Tribunal is a court of Quebec in Canada with exclusive jurisdiction over landlord-tenant disputes. We got access to an exhaustive corpus of 667,305 decisions in French issued from 2001 to 2018, publicly accessible through the SOQUIJ portal (Soci\u00e9t\u00e9 qu\u00e9b\u00e9coise d'information juridique, https://soquij.qc.ca/). The average and median document lengths amount to 307 and 235 tokens respectively, with a standard deviation of 371. Each decision is split in two by applying heuristics based on the syntax of the documents: the pre-verdict section, used as text input (text before the italics in Figure 1), and the verdict section containing the legal solution chosen by the judge in charge of the case (text in italics in Figure 1). The pre-processing of the two sections is described in subsections 2.1 and 2.2 respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 543,
"end": 551,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 671,
"end": 679,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Preparation of the Dataset",
"sec_num": "2"
},
{
"text": "As one of our goals is to assess the conditions in which articles help to improve predictions, we applied heuristics on the pre-verdict text to extract a total of 1,790 unique cited law articles, 33.8% of which were mentioned only once across the entire corpus. Also, not all articles are related to housing law. We address this by keeping only the 445 articles from Book Five - Obligations of the Civil Code of Quebec (C.C.Q.), which establishes the rules concerning the contractual obligations between landlords and tenants, that appear at least twice in the corpus. Three examples are shown in Table 1 (Civil Code of Quebec articles and verdict labels extracted from the decision shown in Figure 1): articles 1619, 1971 and 1883 map to the labels monetary penalty for defendant, eviction and termination of the lease. The article distribution is heavily skewed: the 3 most frequent articles cover 72%, 42% and 27% of all documents respectively, while all other articles do not exceed 4%. The mean and median frequencies of the articles amount to 2,571 and 17 respectively. Section 3 further describes the use of these articles as input.",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 288,
"text": "Target labels 1619 1971 1883 monetary penalty for defendant eviction termination of the lease Table 1",
"ref_id": null
},
{
"start": 377,
"end": 386,
"text": "Figure 1.",
"ref_id": "FIGREF0"
},
{
"start": 891,
"end": 898,
"text": "Table 1",
"ref_id": null
},
{
"start": 919,
"end": 927,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Preprocessing of Input Features",
"sec_num": "2.1"
},
{
"text": "The pre-verdict section contains both the fact description and the legal analysis. As the latter can give hints about the verdict that the model is expected to predict, we removed from the pre-verdict section any paragraph containing citations of articles (underlined text in Figure 1) and we capped the maximum input text length at 128 tokens. By doing so, we force the model to make predictions on the sole basis of the fact description.",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 277,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Preprocessing of Input Features",
"sec_num": "2.1"
},
{
"text": "We carefully combined NLP-engineered tools (regular expressions and the like) and some housing law expertise in order to pseudo-automatically identify 23 labels that we believe are representative of the rulings and that cover the diversity of the verdicts at a fine grain. These labels are cumulative, and three are shown as an example in Table 1 for the decision in Figure 1. The average and median numbers of labels per decision both amount to 3, with a standard deviation of 1.5. Nearly half of all rulings involve an eviction (48.1%) and a termination of the lease (46.1%), hinting that a significant part of the cases involve an unfavourable outcome for tenants sued by landlords. Further investigation confirms a bias favourable to landlords, as 80.3% of cases with the most frequent label monetary penalty for defendant have a tenant as the (penalized) defendant.",
"cite_spans": [],
"ref_spans": [
{
"start": 338,
"end": 345,
"text": "Table 1",
"ref_id": null
},
{
"start": 366,
"end": 374,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Making Target Labels from the Verdict Section of the Decisions",
"sec_num": "2.2"
},
{
"text": "Overall, 0.05% of all instances were not assigned any labels and 18.2% did not contain any articles. For the design of our experiments, all instances with no article or no verdict label were excluded, thus resulting in a final corpus of 544,857 documents with an average of 3.3 labels per instance (standard deviation of 1.4 and median of 4). The average and median numbers of articles per document both amount to 2. Our instances are randomly divided into training, validation and test sets with a 60-20-20 ratio.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Making Target Labels from the Verdict Section of the Decisions",
"sec_num": "2.2"
},
{
"text": "We aim at designing a multilabel classification task in which a model has to return the labels corresponding to the verdict on the basis of the pre-verdict section of each decision. Our baseline is a One-Versus-Rest Logistic Regression, i.e. each label has its own classifier. The input text is vectorized through character-based TF-IDF spanning bigrams to 8-grams (character-based features outperformed token-based ones). Only the top 100k n-grams are retained. We also use CamemBERT (Martin et al., 2019), a variant of BERT (Devlin et al., 2019) that was pre-trained on generic French corpora. We further pre-trained the camembert-base default parameters from Wolf et al. (2020) on the train set for 20 epochs with the unsupervised masked language modelling task; the exact match achieved by further pre-trained models is around one percentage point greater than that of models with default pre-trained parameters. We eventually fine-tuned them on the multilabel classification task. The batch size and the maximum number of fine-tuning epochs are set at 50 and 20 respectively. Training is stopped when no further improvement is obtained in terms of exact match on the validation set. We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 10^-5 (all other hyperparameters are left at their default values). The optimization criterion is the binary cross-entropy with logits loss, for numerical stability during optimization. The final output is therefore that of a logit function, with unbounded scores, and a label is returned whenever its associated output value exceeds 0. We use a vanilla CamemBERT model whose only input",
"cite_spans": [
{
"start": 479,
"end": 500,
"text": "(Martin et al., 2019)",
"ref_id": null
},
{
"start": 521,
"end": 542,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 703,
"end": 721,
"text": "Wolf et al. (2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "is the pre-verdict text and three other variants described in the next subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "For each instance, the mention/absence of each article is one-hot encoded through a 445-dimensional vector, each dimension corresponding to one article. We have one model named BERT-OH (Figure 2, part a) in which the BERT output of a decision text (the 768-dimensional vector corresponding to the first token of the hidden states) and the articles one-hot vector are concatenated and passed through fully connected layers before outputting the verdict labels. Given the heavily skewed distribution of these articles among the documents, these discrete one-hot vectors are sparse and likely not very expressive (in the case of Figure 1 for instance, all dimensions except three would be zeroed because only three articles are extracted, as shown in Table 1). As a result, we thought of a continuous and more expressive representation that could embed the organization of the articles within the law. In the one-hot vector approach, each article is assigned one dimension independent from the others, as if all articles were completely unrelated to each other. But when having a closer look at the C.C.Q. Book Five - Obligations in Figure 3, which concentrates the rules related to landlord-tenant disputes, the articles are actually organized into titles, chapters, divisions, paragraphs and so on, down to the articles themselves. As each subcategory becomes more and more precise, the articles encompassed in it relate to closer and closer legal concepts. For instance, articles 1957 to 1970 are especially dedicated to repossession of a dwelling and eviction and can be expected to relate to the same legal objects. Therefore, we wanted to make an embedding that could capture the structural closeness between articles, that is, two articles located in the same subsection would have closer representations. Another argument in favour of using embeddings based on the topological relatedness among articles is the fact that articles with close numbers have a tendency to co-occur in the decisions, as shown on the diagonal of Figure 4, along which distinct articles with close numbers tend to belong to the same subsections and to have higher correlation values.",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 195,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 619,
"end": 627,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 739,
"end": 746,
"text": "Table 1",
"ref_id": null
},
{
"start": 1114,
"end": 1122,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 2024,
"end": 2032,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "One-Hot and Node2Vec Encoding of Articles",
"sec_num": "3.1"
},
{
"text": "One method for representing the topological organization of the law is Node2Vec (Grover and Leskovec, 2016). We first built a tree whose root is Book Five - Obligations and added all of the subsequent sections as nodes. Articles were added as leaves. Next, the edges were placed by linking each node/section to the subsequent nodes/sections that it encapsulates. For instance, in Figure 3, CHAPTER IV - LEASE is linked to DIVISION IV - SPECIAL RULES FOR LEASES OF DWELLINGS, which is linked to \u00a7 7 - Right to maintain occupancy, and so on, until placing the edges between I. - Beneficiaries of the right and each of the leaves/articles 1936 to 1940. An edge between two nodes cannot \"skip\" any node of intermediate level between the two if there is any. Overall, we made a graph with 1,565 nodes (subsections and articles) and 1,564 edges. From this graph, we made an embedding for each article such that articles belonging to the same subsection and related to the same legal concepts would have embeddings close to each other. Following the Node2Vec technique, we generated 200 random walks from each node (article or section), which gave a total of 313k sequences of 200 nodes obtained by following the edges. Then, we trained a Word2Vec model (Mikolov et al., 2013) on these sequences for 10 epochs with a window size of 10, so that each node is assigned a Node2Vec embedding (a 256-sized vector) that captures the proximity of the neighbouring nodes. We eventually kept only the Node2Vec representations of the leaves that correspond to the 445 retained articles. For the sake of clarity, the vectors of two randomly drawn articles would have a higher cosine similarity if the articles belong to the same subsection than if they belong to distinct ones. For the BERT-N2V (node2vec) model, the input contains the pre-verdict text (passed through CamemBERT) plus an average of the Node2Vec embeddings of all articles mentioned in the document (e.g. the average embedding of articles 1619, 1971 and 1883 in Table 1 for the case in Figure 1).",
"cite_spans": [
{
"start": 80,
"end": 107,
"text": "(Grover and Leskovec, 2016)",
"ref_id": "BIBREF5"
},
{
"start": 1233,
"end": 1255,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 380,
"end": 388,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 1997,
"end": 2004,
"text": "Table 1",
"ref_id": null
},
{
"start": 2021,
"end": 2029,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "One-Hot and Node2Vec Encoding of Articles",
"sec_num": "3.1"
},
{
"text": "The one-hot and the Node2Vec encodings are still shallow representations of the articles, as they do not even use the text of the Civil Code of Quebec. At most, Node2Vec only captures a topological representation of how the 445 articles are organized in the law. This is why we considered another model called BAFA (BERT model with Adapters applied on Facts and Articles, Figure 2, part b), which is given as input the pre-verdict text of the decision and the text of all articles cited in it. The 445 retained articles have an average length of 34 tokens (median of 32 and standard deviation of 17). The pre-verdict section and the articles are encoded through two distinct CamemBERT models so that one is fine-tuned on the decision text and the other on the law text. The BERT output (i.e. the first token of the hidden states) of the decision is then passed through a 12-head attention mechanism as a query, while the BERT outputs obtained from the cited articles are concatenated (up to 22 cited articles per decision) and passed as a key-value pair. The output of the attention mechanism is then passed through two fully connected layers. As we use two distinct CamemBERT modules, the batch size is reduced to 4 and we added adapters (Pfeiffer et al., 2020) as an attempt to speed up computation.",
"cite_spans": [
{
"start": 1225,
"end": 1248,
"text": "(Pfeiffer et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 371,
"end": 379,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Applying BERT on the Law Articles Text",
"sec_num": "3.2"
},
{
"text": "To the best of our knowledge, there is no other work that uses the fine-tuning of pre-trained BERT models on the text of cited articles for verdict prediction of court decisions. When it comes to LJP tasks formalized as text classification, most recent works usually aim at charge prediction or law article prediction on the sole basis of the fact description. \u015eulea et al. (2017) made a ruling prediction task comparable to ours but without the text of the articles. When it comes to experiments that actually use the text of law articles, Hu et al. 2018 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying BERT on the Law Articles Text",
"sec_num": "3.2"
},
{
"text": "The classification results are shown in Table 2. For each label, we compute the F1 score (harmonic mean of precision and recall) obtained by each model and add the label distribution across the test documents. For each model, the last two rows of Table 2 present two overall scores based on metrics that we believe are constraining enough and appropriate for the evaluation of systems that could one day be deployed in real-life scenarios. The F1 macro-average is the unweighted average of all label F1 scores, and thus penalizes models that deliver poor F1 scores for a large number of labels. It measures the ability of the model to predict a large variety of rulings. We also compute exact match, which corresponds to the ratio of instances for which a model is able to return the exact set of labels assigned to them. Therefore, an instance is considered as misclassified whenever its prediction has an extra or a missing label.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 2",
"ref_id": null
},
{
"start": 253,
"end": 260,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "One goal of our experiment is to assess how articles can improve the prediction of cases in which they are cited. Figure 5 shows a heatmap detailing the correlation between articles and verdict labels. In the top left corner, monetary penalty for defendant, eviction, termination of the lease and provisional enforcement are strongly correlated with articles 1619, 1971 and 1883, which respectively define: the computation of an additional indemnity that can be added to damages; a rule that allows the termination of the lease if the rent is over three weeks late in payment; and a rule so that the tenant may avoid lease termination by paying the due rent plus interest before the judgment. Although these articles make a consistent legal ground with the aforementioned verdict labels, inputting them into the models through any representation (be it one-hot/node2vec/BERT encoding) added very little improvement to the F1 scores of these labels (at most 2.8 percentage points on average relative to CamemBERT alone), very likely because of their already high frequency in the corpus. Furthermore, on the heatmap in Figure 5, tenant ordered to pay rent is strongly correlated with article 1973, which defines the conditions allowing the judge to grant lease termination (unless the payment of the rent is over three weeks late, the judge may choose to either terminate the lease immediately or order the tenant to pay the rent), and the article-based features help in dramatically improving the corresponding F1 score by 18-22 percentage points compared to a sole CamemBERT setting. We also observe that landlord repossesses rental unit and monetary penalty for applicant are strongly associated with several articles, especially 1963 and 1967, which establish respectively the conditions under which the judge can authorize a landlord-applicant to repossess their dwelling from which a tenant refuses to depart, and the indemnities that the landlord must pay to the tenant for moving expenses when repossession is granted. Article-based models also improved the F1 scores of these two labels, though not as markedly as for tenant ordered to pay rent, with average gains of 4.1 and 10.1 points respectively. All in all, the inclusion of article-based features has a negligible impact when the labels already have a high support in the documents, but the improvement is more significant for labels that are rarer (landlord repossesses rental unit, monetary penalty for applicant and tenant ordered to pay rent have supports below 5%) and that have a high correlation value with the articles cited in them (for the three aforementioned labels and their articles, the average correlation is around 0.75). A counter-example to that principle would be agreement between parties and tribunal sets new rent",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 122,
"text": "Figure 5",
"ref_id": "FIGREF5"
},
{
"start": 1101,
"end": 1109,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Gains Obtained across Verdict Labels with Article-Based Features",
"sec_num": "4.1"
},
{
"text": "whose correlation values with cited articles are not that important (below 0.5) and for which no significant improvement on F1 scores is observed with article-based features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gains Obtained across Verdict Labels with Article-Based Features",
"sec_num": "4.1"
},
{
"text": "In general, BERT-based models do better than the baseline in terms of macro-averaged F1 score and exact match. Furthermore, the scores show that article-based features help in outperforming a sole CamemBERT model, with exact match scores higher by up to 3.1 percentage points and an F1 macro-average higher by up to 3.8 points. The best F1 macro-average score is achieved by the model with node2vec (63.2%), while the best exact match score is obtained by CamemBERT with one-hot vectors (67.0%). Still, such results must be qualified: the performance gains are mostly obtained with either high-frequency labels (the five most frequent plus lease already terminated) or labels that are strongly correlated with certain articles, which can explain the marginal improvements achieved in the coarse scores at the bottom of Table 2. Furthermore, the use of article-based features seems to sometimes harm the performance for low-frequency labels relative to vanilla BERT (defendant ordered some action, penalty misc, tribunal cancels past ruling, discontinuance claim, schedule new audience, tribunal upholds past ruling), which suggests that such features add noise rather than help the model in accurately predicting these verdicts.",
"cite_spans": [],
"ref_spans": [
{
"start": 809,
"end": 816,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparing Performance among Models",
"sec_num": "4.2"
},
{
"text": "Although BAFA uses the text of both the decisions and the articles, each encoded through a distinct BERT model, its overall performance is disappointing compared to the variants with one-hot and node2vec features: despite achieving the best F1 scores for some of the most frequent labels, the overall coarse scores remain below those of BERT-OH and BERT-N2V. Moreover, the training of BAFA is also significantly longer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Performance among Models",
"sec_num": "4.2"
},
{
"text": "For illustrative purposes, if we were to compare the performance achieved by our models in this task with other similar works, we could cite:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Performance among Models",
"sec_num": "4.2"
},
{
"text": "\u2022 Aletras et al. (2016) and Chalkidis et al. (2019), who achieved respectively an accuracy of 79% and an F1 macro-average score of 80.2% in binary classification for violation of human rights;",
"cite_spans": [
{
"start": 25,
"end": 48,
"text": "Chalkidis et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Performance among Models",
"sec_num": "4.2"
},
{
"text": "\u2022 \u015eulea et al. (2017), who achieved an accuracy of 92.8% on a ruling prediction task with 8 mutually exclusive classes (our task has 23 cumulative labels);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Performance among Models",
"sec_num": "4.2"
},
{
"text": "\u2022 Luo et al. (2017), who got a macro-average F1 score of 95.4% in a charge prediction task that uses article text as input; in another charge prediction task, scores of 49.1% and 70.9% were achieved for that metric on two other datasets (our task is about verdict prediction).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Performance among Models",
"sec_num": "4.2"
},
{
"text": "Two main points can be made from these results. First of all, shallow article embeddings (one-hot and node2vec) do better than the BERT encoding of the law text at allowing a marginal improvement on some low-support labels (though not all), with BERT-N2V reaching the highest F1 macro-average score at 63.2%. A tentative explanation is that directly inputting the text of the cited articles adds noisy information that confuses rather than helps the model in the task, while a \"fuzzier\" representation of the articles gives broader information about them (Node2Vec embeds the topological position of the articles in the Civil Code of Quebec) without forcing the model to combine the legal terminology of the law with the text input of the decisions. The second point is that although article-based models outperform vanilla CamemBERT, this is mainly due to the marginal improvement on some of the most frequent labels and to the improvement on some verdicts that are strongly correlated with certain articles. This suggests that such models only excel in predicting the most recurrent and stereotypical landlord-tenant disputes (eviction of a non-paying tenant; moving indemnities for a tenant whose rental unit is repossessed by the landlord).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Performance among Models",
"sec_num": "4.2"
},
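The one-hot article features compared in this paragraph can be illustrated with a minimal sketch: each decision is represented by a binary (multi-hot) vector over a vocabulary of cited Civil Code articles. The article identifiers below are hypothetical placeholders, not values from the corpus.

```python
# Sketch of the "one-hot" (multi-hot) article features: each decision is
# represented by a binary vector indicating which articles it cites.
# The article identifiers below are hypothetical placeholders.

ARTICLE_VOCAB = ["1855", "1863", "1883", "1889", "1971"]  # vocabulary of article ids
ARTICLE_INDEX = {a: i for i, a in enumerate(ARTICLE_VOCAB)}

def encode_articles(cited_articles):
    """Return a multi-hot vector over the article vocabulary."""
    vec = [0] * len(ARTICLE_VOCAB)
    for article in cited_articles:
        idx = ARTICLE_INDEX.get(article)
        if idx is not None:  # ignore articles outside the vocabulary
            vec[idx] = 1
    return vec

# A decision citing articles 1863 and 1971:
features = encode_articles(["1863", "1971"])
print(features)  # [0, 1, 0, 0, 1]
```

Such a vector can then be concatenated with the CamemBERT encoding of the fact description before the classification layer.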
{
"text": "All in all, these models would be unusable for providing legal assistance for a large variety of cases as they would only excel in accurately predicting the most frequent rulings to the detriment of other types of cases. Also, as these models deal with the housing law domain, which is related to sensitive social issues (Galli\u00e9 et al. (2016) emphasized that tenants are less confident in dealing with judicial proceedings), we tried to extract meaningful patterns from the self-attention weights of the CamemBERT architecture that could help explain what causes the model to return a given prediction, but found nothing amenable to interpretation, which is consistent with the findings of Jain and Wallace (2019).",
"cite_spans": [
{
"start": 321,
"end": 342,
"text": "(Galli\u00e9 et al. (2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Performance among Models",
"sec_num": "4.2"
},
{
"text": "The results obtained seem consistent with observations from Holzenberger et al. (2020), who stated that even a further pre-trained BERT model struggles with a legal entailment task, thus suggesting that the fine-tuning of pre-trained BERT models on the text of statutes and law articles is not sufficient for solving tasks in very specific domains such as tax law or housing law. Regardless of the method used for inputting articles into the models, all of the approaches combining fact descriptions and articles only excel at predicting the most frequent verdicts, which suggests that they would be unusable as is on a larger scale as they would not be able to provide satisfactory legal assistance for cases different from the most recurrent ones. Bender et al. (2021) emphasized the risks involved in using large pre-trained models that tend to encode and amplify biases already present in the training data. To paraphrase the title of their paper, their remarks are consistent with our observations, as we end up with \"legal parrots\" that would not be able to accurately address the variety of real-world landlord-tenant disputes.",
"cite_spans": [
{
"start": 60,
"end": 86,
"text": "Holzenberger et al. (2020)",
"ref_id": "BIBREF6"
},
{
"start": 750,
"end": 770,
"text": "Bender et al. (2021)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion about the Experiment Setting",
"sec_num": "4.3"
},
{
"text": "The fact that article-based features allow for significant improvement under the condition that the labels are strongly correlated with articles also raises questions about the setting of LJP experiments: in one charge prediction task, charges and laws with frequency below 30 were removed from the CAIL2018 dataset, and each charge label is strongly associated with one specific article. In contrast, in our corpus, each label is not always strongly correlated with some law article, as shown in Figure 5. Some works using the CAIL2018 dataset made further changes by removing target labels with frequency below 100. Unlike them, in our work, we were much more permissive during the creation of our corpus: we retained articles with a frequency of at least 2 and designed labels to exhaustively cover as many verdicts as possible (i.e. 1102 unique combinations of labels), even though some labels could have been merged together (e.g. landlord repossesses rental unit and monetary penalty for applicant tend to co-occur) or discarded/weighted down. For instance, schedule new audience and applicant forbidden seek recourse have a low frequency and are rather technical legal details that would be more relevant to a legal expert than to a layman seeking general advice. If we computed the F1 score weighted by each label's support, BERT models would reach an average performance of 91.6%, but that coarse metric is mostly pulled upwards by the scores achieved on the most frequent labels. We must also emphasize that in our dataset there is no one-to-one correspondence between labels and articles as in CAIL2018, in which articles not relevant to specific charges were removed beforehand. This illustrates the difficulty of automating legal reasoning over cases and unfiltered law articles in a realistic context.",
"cite_spans": [],
"ref_spans": [
{
"start": 497,
"end": 505,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Discussion about the Experiment Setting",
"sec_num": "4.3"
},
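The contrast drawn here between the support-weighted F1 (91.6%) and the macro-averaged F1 can be reproduced on toy numbers; the per-label scores and supports below are illustrative, not the paper's actual results.

```python
# Illustrative comparison of macro-averaged vs support-weighted F1.
# A few frequent labels with high F1 dominate the weighted average,
# while rare labels drag the macro average down.
per_label_f1 = [0.95, 0.90, 0.40, 0.20]  # hypothetical per-label F1 scores
supports = [5000, 3000, 100, 50]         # hypothetical label frequencies

macro_f1 = sum(per_label_f1) / len(per_label_f1)
weighted_f1 = sum(f * s for f, s in zip(per_label_f1, supports)) / sum(supports)

print(f"macro F1:    {macro_f1:.3f}")     # dominated by the rare, low-F1 labels
print(f"weighted F1: {weighted_f1:.3f}")  # pulled upwards by the frequent labels
```

This is why the paper reports the macro average as the more honest metric for a corpus with a long-tailed label distribution.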
{
"text": "We designed an LJP task as multilabel text classification for verdict prediction based on a collection of landlord-tenant disputes in French, for which we used a further pre-trained CamemBERT model and applied different types of features derived from the articles cited in the decisions (one-hot, Node2Vec, BERT encoding of the text of articles). By doing so, we noticed that leveraging articles as input features (regardless of the representation used) yielded either marginal improvements in the F1 scores of the most frequent labels, or significant improvements for labels that are strongly correlated with certain articles. The use of article-based one-hot features achieves the best exact match score (67.0%), while node2vec features achieve the best F1 macro-average score (63.2%). The model that encodes the text of the articles with BERT does not outperform the two previous methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
},
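As a reminder of how the exact match score reported in this conclusion is computed in a multilabel setting, here is a minimal sketch; the verdict label names are hypothetical, not the corpus's actual label inventory.

```python
# Exact match score for multilabel classification: a sample only counts as
# correct when the predicted label set equals the gold label set exactly.
def exact_match(gold_sets, pred_sets):
    hits = sum(1 for g, p in zip(gold_sets, pred_sets) if set(g) == set(p))
    return hits / len(gold_sets)

# Hypothetical verdict label sets for three decisions:
gold = [{"tenant_evicted", "rent_arrears"}, {"moving_indemnity"}, {"rent_arrears"}]
pred = [{"tenant_evicted", "rent_arrears"}, {"rent_arrears"}, {"rent_arrears"}]

print(exact_match(gold, pred))  # 2 of the 3 label sets match exactly
```

Exact match is stricter than per-label F1, which is why both are reported: a model can score well on frequent labels yet still miss the full verdict set of a decision.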
{
"text": "As future work, we plan on comparing how models perform under both a \"realistic\" setting (several rare target labels with no or few connections to the available law, as in this work) and a \"laboratory\" setting (where low-frequency targets and laws are aggressively filtered out). We also plan to assess whether the patterns observed in our work (performance improves when articles are strongly correlated with labels) also hold in other LJP datasets beyond housing law and Canadian cases. Furthermore, we plan to further study the attention weights and the mechanisms underlying the significant prediction improvement observed for certain labels when the input contains the text of articles that are highly correlated with the corresponding verdicts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
},
{
"text": "Screenshot from CanLII http://canlii.ca/t/533nd, last accessed on January 20th, 2021.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Over 1 hour per epoch for a single-BERT model, and over 14 hours per epoch for two BERTs combined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the Cyberjustice Laboratory at the Universit\u00e9 de Montr\u00e9al, the LexUM Chair on Legal Information and the Autonomy through Cyberjustice Technologies project for supporting this research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Predicting judicial decisions of the European Court of Human Rights: a natural language processing perspective",
"authors": [
{
"first": "N",
"middle": [],
"last": "Aletras",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Tsarapatsanis",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Preo\u0163iuc-Pietro",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Lampos",
"suffix": ""
}
],
"year": 2016,
"venue": "peerj comput sci",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N Aletras, D Tsarapatsanis, D Preo\u0163iuc-Pietro, and V Lampos. 2016. Predicting judicial decisions of the European Court of Human Rights: a natural lan- guage processing perspective. peerj comput sci 2: e93.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On the dangers of stochastic parrots: Can language models be too big?",
"authors": [
{
"first": "Emily",
"middle": ["M"],
"last": "Bender",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
},
{
"first": "Angelina",
"middle": [],
"last": "McMillan-Major",
"suffix": ""
},
{
"first": "Shmargaret",
"middle": [],
"last": "Shmitchell",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily M Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency; As- sociation for Computing Machinery: New York, NY, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural legal judgment prediction in english",
"authors": [
{
"first": "Ilias",
"middle": [],
"last": "Chalkidis",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Aletras",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4317--4323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. 2019. Neural legal judgment prediction in english. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4317-4323.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Les expulsions pour arri\u00e9r\u00e9s de loyer au qu\u00e9bec: un contentieux de masse",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Galli\u00e9",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Brunet",
"suffix": ""
},
{
"first": "Richard-Alexandre",
"middle": [],
"last": "Laniel",
"suffix": ""
}
],
"year": 2016,
"venue": "McGill Law Journal/Revue de droit de McGill",
"volume": "61",
"issue": "3",
"pages": "611--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Galli\u00e9, Julie Brunet, and Richard-Alexandre Laniel. 2016. Les expulsions pour arri\u00e9r\u00e9s de loyer au qu\u00e9bec: un contentieux de masse. McGill Law Journal/Revue de droit de McGill, 61(3):611-666.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "node2vec: Scalable feature learning for networks",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "855--864",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceed- ings of the 22nd ACM SIGKDD international con- ference on Knowledge discovery and data mining, pages 855-864.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A dataset for statutory reasoning in tax law entailment and question answering",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Holzenberger",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Blair-Stanek",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Natural Legal Language Processing (NLLP) Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Holzenberger, Andrew Blair-Stanek, and Ben- jamin Van Durme. 2020. A dataset for statutory rea- soning in tax law entailment and question answering. In Proceedings of the 2020 Natural Legal Language Processing (NLLP) Workshop, 24 August 2020, San Diego, US.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Few-shot charge prediction with discriminative legal attributes",
"authors": [
{
"first": "Zikun",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Cunchao",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "487--498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zikun Hu, Xiang Li, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2018. Few-shot charge prediction with discriminative legal attributes. In Proceedings of the 27th International Conference on Computa- tional Linguistics, pages 487-498.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Attention is not explanation",
"authors": [
{
"first": "Sarthak",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Byron",
"middle": ["C"],
"last": "Wallace",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3543--3556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 3543-3556.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A general approach for predicting the behavior of the Supreme Court of the United States",
"authors": [
{
"first": "Daniel",
"middle": ["Martin"],
"last": "Katz",
"suffix": ""
},
{
"first": "Michael",
"middle": ["J"],
"last": "Bommarito",
"suffix": "II"
},
{
"first": "Josh",
"middle": [],
"last": "Blackman",
"suffix": ""
}
],
"year": 2017,
"venue": "PloS one",
"volume": "12",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Martin Katz, Michael J Bommarito II, and Josh Blackman. 2017. A general approach for predict- ing the behavior of the Supreme Court of the United States. PloS one, 12(4):e0174698.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": ["P"],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR (Poster)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR (Poster).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The gap between deep learning and law: Predicting employment notice",
"authors": [
{
"first": "Jason",
"middle": ["T"],
"last": "Lam",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Dahan",
"suffix": ""
},
{
"first": "Farhana",
"middle": [],
"last": "Zulkernine",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Natural Legal Language Processing (NLLP) Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason T Lam, David Liang, Samuel Dahan, and Farhana Zulkernine. 2020. The gap between deep learning and law: Predicting employment notice. In Proceedings of the 2020 Natural Legal Language Processing (NLLP) Workshop, 24 August 2020, San Diego, US.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A predictive performance comparison of machine learning models for judicial cases",
"authors": [
{
"first": "Zhenyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanhuan",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Symposium Series on Computational Intelligence (SSCI)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenyu Liu and Huanhuan Chen. 2017. A predictive performance comparison of machine learning mod- els for judicial cases. 2017 IEEE Symposium Series on Computational Intelligence (SSCI), pages 1-6.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic judgment prediction via legal reading comprehension",
"authors": [
{
"first": "Shangbang",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Cunchao",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "China National Conference on Chinese Computational Linguistics",
"volume": "",
"issue": "",
"pages": "558--572",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shangbang Long, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2019. Automatic judgment predic- tion via legal reading comprehension. In China Na- tional Conference on Chinese Computational Lin- guistics, pages 558-572. Springer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning to predict charges for criminal cases with legal basis",
"authors": [
{
"first": "Bingfeng",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Jianbo",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2727--2736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bingfeng Luo, Yansong Feng, Jianbo Xu, Xiang Zhang, and Dongyan Zhao. 2017. Learning to predict charges for criminal cases with legal basis. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 2727- 2736.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "CamemBERT: a tasty French language model",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Pedro Javier",
"middle": [],
"last": "Ortiz Su\u00e1rez",
"suffix": ""
},
{
"first": "Yoann",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": ["Villemonte"],
"last": "de la Clergerie",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03894"
]
},
"num": null,
"urls": [],
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Ortiz Su\u00e1rez, Yoann Dupont, Laurent Romary,\u00c9ric Ville- monte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2019. CamemBERT: a tasty French language model. arXiv preprint arXiv:1911.03894.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Legal docket-entry classification: Where machine learning stumbles",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher",
"middle": ["D"],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "438--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati and Christopher D Manning. 2008. Legal docket-entry classification: Where machine learning stumbles. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing, pages 438-446. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adapterhub: A framework for adapting transformers",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "R\u00fcckl\u00e9",
"suffix": ""
},
{
"first": "Clifton",
"middle": [],
"last": "Poth",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Kamath",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "46--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Pfeiffer, Andreas R\u00fcckl\u00e9, Clifton Poth, Aish- warya Kamath, Ivan Vuli\u0107, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020. Adapterhub: A framework for adapting transform- ers. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 46-54.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Analysis and Multilabel Classification of Quebec Court Decisions in the Domain of Housing Law",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Sala\u00fcn",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
},
{
"first": "Andr\u00e9s",
"middle": [],
"last": "Lou",
"suffix": ""
},
{
"first": "Hannes",
"middle": [],
"last": "Westermann",
"suffix": ""
},
{
"first": "Karim",
"middle": [],
"last": "Benyekhlef",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Applications of Natural Language to Information Systems",
"volume": "",
"issue": "",
"pages": "135--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Sala\u00fcn, Philippe Langlais, Andr\u00e9s Lou, Hannes Westermann, and Karim Benyekhlef. 2020. Anal- ysis and Multilabel Classification of Quebec Court Decisions in the Domain of Housing Law. In In- ternational Conference on Applications of Natural Language to Information Systems, pages 135-143. Springer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Taking advantage of models for legal classification",
"authors": [
{
"first": "David",
"middle": ["B"],
"last": "Skalak",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the 2nd international conference on Artificial intelligence and law",
"volume": "",
"issue": "",
"pages": "234--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David B Skalak. 1989. Taking advantage of models for legal classification. In Proceedings of the 2nd in- ternational conference on Artificial intelligence and law, pages 234-241. ACM.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Legal Area Classification: A Comparative Study of Text Classifiers on Singapore Supreme Court Judgments",
"authors": [
{
"first": "Jerrold",
"middle": [],
"last": "Soh",
"suffix": ""
},
{
"first": "How Khang",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Ian Ernst",
"middle": [],
"last": "Chai",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Natural Legal Language Processing Workshop 2019",
"volume": "",
"issue": "",
"pages": "67--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerrold Soh, How Khang Lim, and Ian Ernst Chai. 2019. Legal Area Classification: A Comparative Study of Text Classifiers on Singapore Supreme Court Judgments. In Proceedings of the Natural Le- gal Language Processing Workshop 2019, pages 67- 77, Minneapolis, Minnesota. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Predicting the Law Area and Decisions of French Supreme Court Cases",
"authors": [
{
"first": "Octavia-Maria",
"middle": [],
"last": "\u015eulea",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Mihaela",
"middle": [],
"last": "Vela",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "van Genabith",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "716--722",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Octavia-Maria \u015e ulea, Marcos Zampieri, Mihaela Vela, and Josef van Genabith. 2017. Predicting the Law Area and Decisions of French Supreme Court Cases. In Proceedings of the International Confer- ence Recent Advances in Natural Language Process- ing, RANLP 2017, pages 716-722, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Deep learning algorithm for judicial judgment prediction based on bert",
"authors": [
{
"first": "Yongjun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 5th International Conference on Computing, Communication and Security (ICCCS)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yongjun Wang, Jing Gao, and Junjie Chen. 2020. Deep learning algorithm for judicial judgment prediction based on bert. In 2020 5th International Confer- ence on Computing, Communication and Security (ICCCS), pages 1-6. IEEE.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Using factors to predict and analyze landlord-tenant decisions to increase access to justice",
"authors": [
{
"first": "Hannes",
"middle": [],
"last": "Westermann",
"suffix": ""
},
{
"first": "Vern",
"middle": ["R"],
"last": "Walker",
"suffix": ""
},
{
"first": "Kevin",
"middle": ["D"],
"last": "Ashley",
"suffix": ""
},
{
"first": "Karim",
"middle": [],
"last": "Benyekhlef",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law",
"volume": "",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannes Westermann, Vern R Walker, Kevin D Ash- ley, and Karim Benyekhlef. 2019. Using factors to predict and analyze landlord-tenant decisions to in- crease access to justice. In Proceedings of the Sev- enteenth International Conference on Artificial Intel- ligence and Law, pages 133-142.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": ["Le"],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": ["M"],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Cail2018: A large-scale legal dataset for judgment prediction",
"authors": [
{
"first": "Chaojun",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Haoxi",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Zhipeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Cunchao",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xianpei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.02478"
]
},
"num": null,
"urls": [],
"raw_text": "Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, et al. 2018. Cail2018: A large-scale legal dataset for judgment prediction. arXiv preprint arXiv:1807.02478.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Distinguish confusing law articles for legal judgment prediction",
"authors": [
{
"first": "Nuo",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Pinghui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Junzhou",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3086--3095",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nuo Xu, Pinghui Wang, Long Chen, Li Pan, Xiaoyan Wang, and Junzhou Zhao. 2020. Distinguish confusing law articles for legal judgment prediction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3086-3095.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Legal judgment prediction via topological learning",
"authors": [
{
"first": "Haoxi",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Zhipeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Cunchao",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Chaojun",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3540--3549",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Chaojun Xiao, Zhiyuan Liu, and Maosong Sun. 2018. Legal judgment prediction via topological learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3540-3549.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Excerpt of a decision translated from French. The text in italics is the verdict while underlined text contains references to articles."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Architecture diagrams of BERT-OH and BERT-N2V (part a) and of BAFA (part b)."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Excerpt of the Book Five of the Civil Code of Quebec with articles related to dwelling rental lease."
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Correlation matrices for 200 articles (sorted by increasing article numbers)."
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "used it for improving prediction of confusing charges only while Luo et al. (2017) and Long et al. (2019) used a combination of recurrent neural nets with attention mechanisms for encoding it into their models for charge prediction and divorce verdict respectively. Still, none of these works involve transformer architecture. Concerning the experiments that use BERT, Chalkidis et al. (2019) and used pre-trained models for prediction of violation of human rights article and of charges respectively on the basis of the facts only."
},
"FIGREF5": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Heatmap of the correlation matrix of the verdict labels and the 30 most frequent articles. The verdict labels and the articles are sorted by decreasing frequency on their respective axes."
},
"TABREF0": {
"content": "<table><tr><td>Verdict label</td><td colspan=\"2\">Baseline CamemBERT</td><td>CamemBERT +one-hot</td><td>CamemBERT + node2vec</td><td>BAFA</td><td>Support</td></tr><tr><td>monetary penalty for defendant</td><td>98.2</td><td>98.3</td><td>98.4</td><td>98.5</td><td>98.6</td><td>92.7</td></tr><tr><td>eviction</td><td>94.8</td><td>94.0</td><td>96.7</td><td>96.5</td><td>97.1</td><td>57.7</td></tr><tr><td>termination lease</td><td>95.2</td><td>94.2</td><td>96.8</td><td>96.6</td><td>97.2</td><td>55.6</td></tr><tr><td>applicant request denied</td><td>71.6</td><td>77.6</td><td>78.0</td><td>78.5</td><td>77.9</td><td>37.0</td></tr><tr><td>provisional enforcement</td><td>87.4</td><td>88.8</td><td>89.3</td><td>89.3</td><td>89.7</td><td>25.5</td></tr><tr><td>applicant is reserved recourses</td><td>76.3</td><td>80.9</td><td>80.7</td><td>80.8</td><td>79.8</td><td>12.4</td></tr><tr><td>lease already terminated</td><td>85.8</td><td>86.8</td><td>88.2</td><td>88.0</td><td>88.1</td><td>4.3</td></tr><tr><td>tenant ordered pay rent</td><td>65.8</td><td>69.0</td><td>90.7</td><td>88.0</td><td>91.3</td><td>1.7</td></tr><tr><td>one party ordered some action</td><td>51.3</td><td>60.9</td><td>66.4</td><td>65.4</td><td>67.4</td><td>1.6</td></tr><tr><td>landlord repossesses rental unit</td><td>85.1</td><td>86.2</td><td>89.7</td><td>88.5</td><td>92.6</td><td>1.4</td></tr><tr><td>monetary penalty for applicant</td><td>69.3</td><td>72.5</td><td>83.6</td><td>78.3</td><td>86.0</td><td>1.1</td></tr><tr><td>agreement between parties</td><td>74.3</td><td>75.7</td><td>76.7</td><td>76.8</td><td>76.0</td><td>1.0</td></tr><tr><td>tribunal sets new rent</td><td>59.6</td><td>69.6</td><td>70.3</td><td>71.1</td><td>66.9</td><td>0.6</td></tr><tr><td>defendant ordered some action</td><td>39.0</td><td>57.3</td><td>51.1</td><td>56.0</td><td>51.9</td><td>0.4</td></tr><tr><td>penalty misc</td><td>10.5</td><td>33.0</td><td>36.6</td><td>28.7</td><td>22.6</td><td>0.2</td></tr><tr><td>tribunal cancels past ruling</td><td>61.5</td><td>76.2</td><td>75.8</td><td>72.3</td><td>71.9</td><td>0.2</td></tr><tr><td>discontinuance claim</td><td>22.0</td><td>50.6</td><td>46.6</td><td>49.3</td><td>33.7</td><td>0.1</td></tr><tr><td>tribunal declines jurisdiction</td><td>14.3</td><td>1.5</td><td>31.2</td><td>39.8</td><td>50.5</td><td>0.1</td></tr><tr><td>schedule new audience</td><td>49.7</td><td>52.9</td><td>50.6</td><td>50.0</td><td>49.4</td><td>0.1</td></tr><tr><td>tribunal upholds past ruling</td><td>19.2</td><td>38.9</td><td>42.5</td><td>42.9</td><td>16.3</td><td>0.1</td></tr><tr><td>applicant forbidden seek recourse</td><td>0.0</td><td>0.0</td><td>0.0</td><td>17.4</td><td>0.0</td><td>0.1</td></tr><tr><td>applicant ordered some action</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>&lt;0.1</td></tr><tr><td>trib asserts jurisdiction</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>&lt;0.1</td></tr><tr><td>F1 across all labels (macro-average)</td><td>53.5</td><td>59.4</td><td>62.6</td><td>63.2</td><td>61.1</td><td/></tr><tr><td>Exact match</td><td>58.6</td><td>63.9</td><td>67.0</td><td>66.4</td><td>66.7</td><td/></tr><tr><td>Table 2:</td><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"num": null,
"text": "F1 scores for each label and model (percentage, the highest score of each label is in bold). The last two rows show macro-averaged F1 and exact match across all labels for each of the four settings. The last column on the right shows the distribution of the labels in the test set.",
"type_str": "table"
}
}
}
}