{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:46:54.863658Z"
},
"title": "Semi-automatic Triage of Requests for Free Legal Assistance",
"authors": [
{
"first": "Meladel",
"middle": [],
"last": "Mistica",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Queensland \u2663 Innovation and Engagement",
"location": {
"country": "Justice Connect"
}
},
"email": "m.mistica@unimelb.edu.au"
},
{
"first": "Jey",
"middle": ["Han"],
"last": "Lau",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {
"country": "Australia"
}
},
"email": "jeyhan.lau@unimelb.edu.au"
},
{
"first": "\u2666",
"middle": [],
"last": "Brayden",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Queensland \u2663 Innovation and Engagement",
"location": {
"country": "Justice Connect"
}
},
"email": ""
},
{
"first": "Kate",
"middle": [],
"last": "Fazio",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Queensland \u2663 Innovation and Engagement",
"location": {
"country": "Justice Connect"
}
},
"email": "kate.fazio@justiceconnect.org.au"
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Queensland \u2663 Innovation and Engagement",
"location": {
"country": "Justice Connect"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Free legal assistance is critically underresourced, and many of those who seek legal help have their needs unmet. A major bottleneck in the provision of free legal assistance to those most in need is the determination of the precise nature of the legal problem. This paper describes a collaboration with a major provider of free legal assistance, and the deployment of natural language processing models to assign area-of-law categories to real-world requests for legal assistance. In particular, we focus on an investigation of models to generate efficiencies in the triage process, but also the risks associated with naive use of model predictions, including fairness across different user demographics.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Free legal assistance is critically underresourced, and many of those who seek legal help have their needs unmet. A major bottleneck in the provision of free legal assistance to those most in need is the determination of the precise nature of the legal problem. This paper describes a collaboration with a major provider of free legal assistance, and the deployment of natural language processing models to assign area-of-law categories to real-world requests for legal assistance. In particular, we focus on an investigation of models to generate efficiencies in the triage process, but also the risks associated with naive use of model predictions, including fairness across different user demographics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The number of Australians with unmet legal needs is estimated to be over 4 million people per year and growing (out of a total population of around 25 million): each year approximately 8.5 million Australians will have a legal problem and only around 4.5 million will access any assistance (Coumarelos et al., 2012 ; The Department of Justice and Regulation, 2012) -an indication that free legal assistance services are critically under-resourced. A bottleneck for free legal assistance providers is the determination of what (if any) specific legal needs the individual has. We investigate the viability of semi-automating this step by building a model that suggests how to categorise lay descriptions of problems/incidents into legal areas. It is critical that we develop models which will perform equally well for users of all backgrounds, generalise well from small amounts of curated data, and potentially dynamically interact with the help-seeker to clarify the nature of the case. However, in this preliminary work, our aim is to develop initial models as a means to ascertain what biases manifest in our given data, and to have a workable model upon which we can make incremental measurable improvements.",
"cite_spans": [
{
"start": 290,
"end": 314,
"text": "(Coumarelos et al., 2012",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Text classification of any real-world data can be a challenge for many reasons. In the case of legal text classification, the classes themselves or the legal categorisation of a possible case, can vary from organisation to organisation, and also from court to court; there is no universally agreed-upon set of areas of law neatly defined into a taxonomy (Goncalves and Quaresma, 2005; Sulea et al., 2017a; Soh et al., 2019; Tuggener et al., 2020) . Furthermore, a case can span multiple areas of law -for example, a FAMILY LAW matter could also fall under the umbrella of GUARDIANSHIP AND AD-MINISTRATION, or a CHARITIES LAW issue may also have aspects regarding EMPLOYEES AND VOLUNTEERS. In addition to the issues surrounding the inherent fuzziness of legal categories, the descriptions of legal issues themselves exhibit a range of language styles: those who seek free legal help are not versed in the legal domain, and may have varying linguistic styles, reflecting their social, cultural, and educational background.",
"cite_spans": [
{
"start": 354,
"end": 384,
"text": "(Goncalves and Quaresma, 2005;",
"ref_id": "BIBREF3"
},
{
"start": 385,
"end": 405,
"text": "Sulea et al., 2017a;",
"ref_id": "BIBREF7"
},
{
"start": 406,
"end": 423,
"text": "Soh et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 424,
"end": 446,
"text": "Tuggener et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We report on an ongoing collaboration between Justice Connect, a public benevolent institution, as defined by the Australian government, 1 that aims to ameliorate social inequalities through legal assistance and community engagement, and Melbourne University whose aim is to alleviate the help-seeker intake bottleneck. In Section 2, we outline the importance of accessible legal assistance to those most in need, and the barriers to be overcome in providing this service to the community. Section 3 details the data collection and corpus creation process. We designed and developed an annotation platform exclusively for volunteer lawyers to annotate online requests for help from the public through the Justice Connect intake portal. Our experiments and results are outlined in Section 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "and 5, which describe the initial fine-tuned BERT classifiers (Devlin et al., 2019) on the small curated help-seeker data informally describing issues in their own words on matters they believe require legal assistance. As a starting point, we wanted to leverage the patterns of language usage encoded in BERT given our relatively small data set. The main risk is that while robust results can be achieved by fine-tuning over relatively little labelled data in this manner, the data used in developing the pre-trained models can lead to these models implicitly capturing a variety of biases about the world (Bender et al., 2021) . In Section 6, we reveal how these biases manifest for our given data set, not only in terms of which areas of law the models can, or cannot, reliably predict, but also which demographic groups the model has inherent difficulty in representing. Finally, in Section 8, we discuss how we can overcome these biases for future iterations of the model while keeping in mind the protection and privacy of the help-seekers who are most vulnerable.",
"cite_spans": [
{
"start": 62,
"end": 83,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 607,
"end": 628,
"text": "(Bender et al., 2021)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unresolved legal problems have been shown to lead to significant life impacts at high levels of frequency, including financial strain (29%), stressrelated illness (20%), physical ill health (19%), relationship breakdown (10%), and having to move home (5%) (Coumarelos et al., 2012) . Even when a person is eligible for free legal assistance, there are various ecosystem-level barriers that increase the difficulty of finding and engaging with a legal service. One such barrier is the disconnection between a person who has recognised that their problem may have a legal dimension but does not yet know the technical terminology around their issue, and a legal service that can assist that person with the problem they have. This barrier is exacerbated in online settings, as people search via search engines and directories for legal services without the right search terms and technical language required to successfully reach relevant services.",
"cite_spans": [
{
"start": 256,
"end": 281,
"text": "(Coumarelos et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In a follow-up survey by Justice Connect, many applicants, in requesting help online, self-identify this issue: e.g. \"[there are] too many seperate courts and unclear what laws do what (sic)\", \"it's complex and i am not an expert!\" and \"[I'm unsure of the category] because of the family relationship together with financial issue\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Legal services also experience this issue as a bottleneck in their processes, where significant resources are required to assist applicants to determine the nature of the problem that they have (especially difficult given that many users of such services have little or no formal experience with the legal system), whether it is legal in nature, and what specific legal services should be provided. Lack of knowledge and capability often results in \"failing to identify a legal problem, consulting non-legal advisers instead of legal experts, or taking no action to resolve the problem\" (The Department of Justice and Regulation, 2016, p120). Legal area classification (Goncalves and Quaresma, 2005; Sulea et al., 2017a,b; Soh et al., 2019; Tuggener et al., 2020) can potentially help to alleviate this bottleneck, in providing semiautomatic legal triaging of user-supplied textual descriptions of the issue.",
"cite_spans": [
{
"start": 669,
"end": 699,
"text": "(Goncalves and Quaresma, 2005;",
"ref_id": "BIBREF3"
},
{
"start": 700,
"end": 722,
"text": "Sulea et al., 2017a,b;",
"ref_id": null
},
{
"start": 723,
"end": 740,
"text": "Soh et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 741,
"end": 763,
"text": "Tuggener et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "The corpus upon which the data set is derived comes from requests for help via the Justice Connect web-based intake tool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set Development",
"sec_num": "3"
},
{
"text": "2 After a series of eligibility questions, the help-seeker is asked to provide more information about the issue that has brought them to the Justice Connect portal. The description entered by the help-seeker is manually de-identified for names, dates, locations, and any other sensitive information in preparation for annotation. This is then presented to an annotator via an interface which was developed by Justice Connect in consultation with Melbourne University. Based on the description, the annotator selects one or more areas of law that they deem appropriate from the 33 options (including NOT A LEGAL ISSUE) shown in Table 1 . The annotation task is a two stage process: first the areas of law are chosen including specifying certainty levels which reflect how well the text supports the area of law they have chosen, as shown in Figure 1 . In the second stage, the annotators are asked to highlight the relevant passages that support their decision to label the document as belonging to a particular legal area.",
"cite_spans": [],
"ref_spans": [
{
"start": 627,
"end": 634,
"text": "Table 1",
"ref_id": null
},
{
"start": 840,
"end": 848,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data Set Development",
"sec_num": "3"
},
{
"text": "The annotations were collected over a period 2 For the privacy and protection of the help seekers, we are not able to share the intake took data of real-world examples. Although the data has been anonymised, the risk too high given the sensitivity of the material and its recency. 3 We do not use the span-level information from Step 2 in this work, but the highlighting of passages can be seen in Figure 3 in the Appendix. unique descriptions. Each text sample was annotated by up to 7 different lawyers, noting that a single sample could be annotated as falling under multiple areas of law, making our task a multi-label classification problem. Our annotators are lawyers from firms that were approached based on their level of engagement with Justice Connect. A number of firms had previously shown interest in pilot and project opportunities through Justice Connect's subscription model. Annotators were self-elected, or chosen by each firm's pro bono coordinators. They were asked to disclose how many years they had been practicing in total. Of these firms, there were 231 lawyersall admitted to practice law in Australia -who were signed up for the annotation task. In addition, there were 12 Justice Connect-based lawyers who selfelected to participate as annotators, taking the total number of lawyers to 243 over 9 firms throughout Australia.",
"cite_spans": [
{
"start": 45,
"end": 46,
"text": "2",
"ref_id": null
},
{
"start": 281,
"end": 282,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 398,
"end": 406,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "We derive three different labellings of the data from these annotations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "Majority-vote: majority-vote labels for each text 4 From 16 November 2020 to 30 April 2021 sample based on a per-class majority vote over the annotators. That is, in order for a label to apply it must have been assigned by at least half of the annotators.",
"cite_spans": [
{
"start": 50,
"end": 51,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "Confidence-weighted: the weighted mean of the 'certainty' or 'confidence' score for each label, as self-assessed by the annotator on a scale of 1-100 (according to the placement of a slider), by averaging over the confidence scores for all annotators who assigned a given label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "Annotator-weighted: the proportion of annotators who assigned a given area of law to the instance, divided by the number of annotators (constrained to be at least 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "As an illustration of how confidence-and annotator-weighting work, one text sample (Entry ID 3085) had 3 annotators: the first annotator labelled it with the tags LITIGATION AND DISPUTE RES-OLUTION and EMPLOYEES AND VOLUNTEERS, giving a score of 42% and 100%, respectively; the second annotator applied the same labels but rated them both 100; and the third annotator tagged this excerpt with EMPLOYEES AND VOLUNTEERS and PER-SONAL SAFETY with confidence ratings 79 and 67, re- Table 1 : Areas of Law, and the number of text samples belonging to each in the majority-vote (M), confidenceweighted (C), and annotator-weighted (A) data sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 478,
"end": 485,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "spectively. Under confidence-weighting, therefore, the overall weight for LITIGATION AND DISPUTE RESO-LUTION is (42/100 + 100/100)/2 = 0.71, whereas for annotator-weighting, the score is 2/3 = 0.67. One difference between the annotator-weighted labelling as compared to the confidence-weighted or majority-vote labelling is that it has fewer instances: the annotator-weighted data set has 3,154 instances while the other two have 4,062. This is because of the constraint that at least 3 annotators tag the text for annotator-weighting. Table 1 shows the number of instances categorised under each area of law for the majorityvote (and confidence-weighted) data set and the annotator-weighted data set, noting that multiple areas of law can apply to the one text sample.",
"cite_spans": [],
"ref_spans": [
{
"start": 536,
"end": 543,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "3",
"sec_num": null
},
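The three labelling schemes above can be made concrete with a short sketch. This is an illustration only (the function names are hypothetical), with the scores reproducing the Entry ID 3085 example:

```python
from collections import Counter

# Per-annotator labels with self-assessed confidence scores (0-100);
# the values reproduce the Entry ID 3085 example from the text.
annotations = [
    {"LITIGATION AND DISPUTE RESOLUTION": 42, "EMPLOYEES AND VOLUNTEERS": 100},
    {"LITIGATION AND DISPUTE RESOLUTION": 100, "EMPLOYEES AND VOLUNTEERS": 100},
    {"EMPLOYEES AND VOLUNTEERS": 79, "PERSONAL SAFETY": 67},
]

def majority_vote(annotations):
    """A label applies iff at least half of the annotators assigned it."""
    counts = Counter(label for ann in annotations for label in ann)
    return {label for label, c in counts.items() if c >= len(annotations) / 2}

def confidence_weighted(annotations):
    """Mean confidence (rescaled to [0, 1]) over the annotators who assigned each label."""
    scores = {}
    for ann in annotations:
        for label, conf in ann.items():
            scores.setdefault(label, []).append(conf / 100)
    return {label: sum(v) / len(v) for label, v in scores.items()}

def annotator_weighted(annotations):
    """Fraction of all annotators (constrained to be at least 3) assigning each label."""
    assert len(annotations) >= 3
    counts = Counter(label for ann in annotations for label in ann)
    return {label: c / len(annotations) for label, c in counts.items()}
```

For LITIGATION AND DISPUTE RESOLUTION this yields 0.71 under confidence-weighting and 2/3 under annotator-weighting, matching the worked example.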
{
"text": "We performed fine-tuning experiments using BERT, with early stopping based on the validation loss. In addition, we experimented with various values for the dropout rate during training, with the final value being set to 0.001. The batch size was set to 32 and the number of epochs was set to 50. All experiments are based on 20-fold cross-validation, and were run 10 times and averaged, with each run having the data split into 20 folds randomly. The non-testing portion of each fold was split into 90/10 for training/validation. For experiments over the confidence-weighted and annotator-weighted data sets, we train over the given label representation, but evaluate on the majority-vote data. We do this because the majority-vote labelled data is our approximation of a manually curated gold-standard data set, also for direct model comparability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
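The splitting protocol (20 random folds, with the non-test portion of each fold split 90/10 into training/validation) can be sketched as follows. The function name and seed are illustrative only; the actual experiments fine-tune BERT within each fold:

```python
import random

def cross_validation_splits(n_instances, n_folds=20, val_fraction=0.10, seed=0):
    """Yield (train, validation, test) index lists: the data is randomly
    partitioned into n_folds test folds, and the non-test portion of each
    fold is further split 90/10 into training/validation."""
    rng = random.Random(seed)
    idx = list(range(n_instances))
    rng.shuffle(idx)
    folds = [idx[i::n_folds] for i in range(n_folds)]
    for k in range(n_folds):
        test = folds[k]
        rest = [i for j, fold in enumerate(folds) if j != k for i in fold]
        rng.shuffle(rest)
        n_val = int(len(rest) * val_fraction)
        yield rest[n_val:], rest[:n_val], test

# e.g. 4,062 labelled instances, as in the majority-vote data set
splits = list(cross_validation_splits(4062))
```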
{
"text": "Given that the number of labels is quite large (33 areas of law, including NOT A LEGAL ISSUE), we wanted to see if grouping the tags into small thematic groups would increase accuracy. We experimented with 2 grouping structures: (1) \"legal\", based on legal specialisations; and (2) \"theme\", based on topics or themes that may be shared between the areas of law. These groupings were agreed upon by trained lawyers at Justice Connect, where the first group was determined by answering the question, In general, if a lawyer specialises in area X, do they often specialise in area Y too?, and for the latter, What areas of law have common narratives or topics shared between them, when people describe issues pertaining to these areas of law?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We include three baselines to gauge how difficult the task is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "1. \"random\": choose 0 to 7 labels at random (based on uniform sampling, without replacement) for each instance, noting that there were between 0 and 7 areas of law for each instance in the majority-vote data set 5 2. \"shuffle\": select N labels at random, where N is the number of assigned labels to the instance in either the majority-vote or annotatorweighted data set (recognising that this information would not be available for a genuinely \"unseen\" instance)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "3. \"majority\": label using the most popular area(s) of law (ranked by how many instances they have been assigned in the training data).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For the majority-vote data set, on average 1.5 labels are applied to each text sample, and therefore the first version of majority, we assign the 2 most popular areas of law. The annotator-weighted data set has an average of 1.3 labels per instance, so for the second version of majority, we assign the single highestoccurring label. Table 2 shows the micro-averaged precision, recall, and f1-score for the BERT models trained over the three data sets (majority-vote, confidenceweighted, and annotator-weighted) but evaluated over majority-vote for direct comparability. For the baselines, the two numbers in each cell indicate the results obtained based on majority-vote vs. annotator-weighted.",
"cite_spans": [],
"ref_spans": [
{
"start": 334,
"end": 341,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
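Micro-averaged precision, recall, and f1-score for a multi-label task, as reported here, pool true/false positives and false negatives over all instances and labels. A minimal sketch, with a hypothetical function name and toy labels:

```python
def micro_prf(gold, pred):
    """Micro-averaged precision/recall/f1 for a multi-label task:
    counts are pooled over all instances and labels."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))
    fp = sum(len(p - g) for g, p in zip(gold, pred))
    fn = sum(len(g - p) for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: the second instance is missing one of its two gold labels.
gold = [{"FAMILY LAW"}, {"EMPLOYEES AND VOLUNTEERS", "PERSONAL SAFETY"}]
pred = [{"FAMILY LAW"}, {"EMPLOYEES AND VOLUNTEERS"}]
p, r, f = micro_prf(gold, pred)  # p = 1.0, r = 2/3, f = 0.8
```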
{
"text": "The results show that while the confidenceweighted-trained system outperforms the other systems with respect to precision, recall is low and misses out on correctly classifying instances well over half the time. Although, when it does label instances, it gets them correct 83.8% of the time. The annotator-weighted-trained model obtains the highest recall, and also obtains the best trade-off between precision and recall, as reflected in the best f1-score. All models perform well above all three baselines, for all of precision, recall, and f1-score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "For all 3 BERT models, recall is rather low. In the original experiments, if the model predicted a given label with a probability \u226550%, we output it as a prediction, but it is also possible to adjust the probability threshold to a value other than 50% (with higher values expected to lead to higher precision and lower recall, and lower values expected to lead to higher recall and lower precision). In a follow-up experiment, we learned a threshold per label (area of law) based on the training data. While these experiments, shown in Table 3 generally led to improvements in recall, it degraded precision substantially, with the net effect of an overall drop in f1-score. As such, in the remainder of our experiments, we maintain a fixed threshold of 0.50.",
"cite_spans": [],
"ref_spans": [
{
"start": 536,
"end": 543,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
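Per-label threshold tuning of this kind can be sketched as a simple grid search; the helper below is hypothetical and illustrates the idea only (the paper learns one threshold per area of law from the training data):

```python
def tune_threshold(probs, gold, grid=None):
    """For a single label, pick the probability threshold that maximises
    f1-score over held-out training predictions."""
    grid = grid or [i / 100 for i in range(5, 100, 5)]
    def f1_at(t):
        tp = sum(1 for p, g in zip(probs, gold) if p >= t and g)
        fp = sum(1 for p, g in zip(probs, gold) if p >= t and not g)
        fn = sum(1 for p, g in zip(probs, gold) if p < t and g)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return max(grid, key=f1_at)

# Toy predictions for one area of law: lowering the threshold below the
# default 0.50 recovers the gold-positive instance scored at 0.4.
probs = [0.9, 0.6, 0.4, 0.2]
gold = [1, 1, 1, 0]
threshold = tune_threshold(probs, gold)
```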
{
"text": "In the previous experiments, the validation data was the same type as the training data (e.g. for 'annotator-weighted', the training and validation data were both labelled with the annotatorweighted approach), but final evaluation was based on the majority-vote labelling. This means that for the models trained on 'annotator-weighted' and 'confidence-weighted', we optimise hyperparameters on the basis of one labelling strategy, and perform our final evaluation based on a separate labelling strategy ('majority-vote'). In the next experiment, we seek to rectify this mismatch by also validating on 'majority-vote' data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The results in Table 4 show a slight boost in recall in both cases, but overall validating on majority-vote data has relatively little impact. Table 5 shows the results of the groupings experiment, in which we both label and evaluate all instances based on the coarser-grained 'theme' (groupings of the areas of law by topic) or 'legal' (groupings according to legal specialisation) label set, 6 using majority-vote labelling. Overall, recall improves slightly with grouped labels, in comparison to the original results over the fine-grained label set. However, precision does not improve, meaning it is difficult to justify employing the 'theme' (or the 'legal') system over the 'annotator-weighted' system, because the loss in the granularity of distinctions between the specific areas of law does not justify the small gain in recall. Summarising the findings of these experiments, the best of the models is trained on 'annotatorweighted' labels, without modifying the probability threshold or majority-vote data validation. It is this model that we experiment with in the remainder of the paper, in terms of scoping out its viability for live deployment in semi-automatically triaging of incoming requests for legal assistance.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 143,
"end": 150,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "As shown in Table 1 , not all areas of law are distributed equally. For example, there are far more instances tagged as EMPLOYEES AND VOLUNTEERS than PRIVACY. It would be natural to expect that the predictive performance over PRIVACY would be lower, given the sparsity of labelled instances in the training data. Table 6 shows the breakdown of precision, recall, and f1-score (along with the raw count of true negatives, false positives, false negatives, and true positives) for each label. There were no classes where the number of false positives was greater than the number of true positives, resulting in relatively high precision scores. Recall, on the other hand, is rather low, meaning that the model is conservative in its predictions. However, a more conservative low re- call, high precision, system provides greater utility than a high recall, low precision, system because we are able to trust the predictions from the system in assigning lawyers with speciality in different areas of law. That is, we want to be confident that a pro bono lawyer who is assigned to a client is suitably credentialled to provide assistance relevant to the specifics of the request, noting that they will quickly pick up on any aspects of the case which they are not qualified to deal with (i.e. areas of law the classifier has missed) and be able to potentially bring in extra expertise without extra overhead. That is, the cost of a lawyer getting up to speed with a particular case is very much higher than the cost of that lawyer identifying extra dimensions of legal expertise that need to be brought on board, such that precision is more important than recall.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": null
},
{
"start": 313,
"end": 320,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "To assist in the interpretation of the model after deployment, we further categorised the areas of law according to 4 tiers, as shown in Table 6 . The determining of these tiers is roughly guided by the f0.5-score (\u03b2 = 0.5) of each area of law, shown in the column f0.5. In constructing the tiers, we place SYSTEM p r f theme 0.776 0.629 0.695 legal 0.777 0.612 0.684 Table 5 : Groups by themes legal specialisation a greater importance on precision rather than recall. Tier III and IV classes, with an f0.5-score of < 0.55, are those that are least 'trustworthy' if the system was to emit them as a prediction. Even though some classes have a precision of 1.000 (e.g. INTEL-LECTUAL PROPERTY and TRUSTS/EQUITY), are still be treated as lower-tier classes for a number of reasons -these classes have far fewer instances and therefore the ability of the model to learn features for these classes is comparatively degraded (and the high precision is perhaps more luck than a reproducible trend). This is reflected in the very poor results in the recall and thus overall f1-score of these classes. The classes in Tiers I and II have higher precision and an f0.5 that range between 0.55 and 0.925. For the areas of law in these tiers with a higher precision, we expect that when a model predicts a text sample as one of these classes, that we can be fairly confident of that prediction.",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 368,
"end": 375,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
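The f0.5-score used to assign tiers is the standard F-beta measure with beta = 0.5, which weights precision more heavily than recall. A minimal sketch (hypothetical function name):

```python
def f_beta(precision, recall, beta=0.5):
    """Standard F_beta score; beta < 1 weights precision more heavily
    than recall, matching the f0.5-score used to construct the tiers."""
    if precision + recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# A high-precision, low-recall class is rewarded more under f0.5 than f1:
f05 = f_beta(1.0, 0.2, beta=0.5)  # 0.25 / 0.45 = 0.555...
f1 = f_beta(1.0, 0.2, beta=1.0)   # 0.333...
```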
{
"text": "From the metadata provided by Justice Connect, we analyse 6 sub-groups of help-seeker:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fairness",
"sec_num": "7"
},
{
"text": "1. Seniors (SEN) = 102 instances;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fairness",
"sec_num": "7"
},
{
"text": "2. Aboriginal/Torres Strait Islanders (ATS) = 54 instances;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fairness",
"sec_num": "7"
},
{
"text": "3. public housing tenants (PUB) = 91 instances;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fairness",
"sec_num": "7"
},
{
"text": "4. those who do not identify as heterosexual or cisgender (LGB) = 104 instances;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fairness",
"sec_num": "7"
},
{
"text": "5. those who are homeless or as risk of becoming homeless (HOM) = 175; and 6. those who disclosed their household income as being less than $50K group (LOW) = 1,686 instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fairness",
"sec_num": "7"
},
{
"text": "Of these, the SEN group fared the worst with an overall f1-score of 0.540, as shown in Table 7 . This table shows that the ATS and SEN groups fared below the average and the worst for SEN.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fairness",
"sec_num": "7"
},
{
"text": "Aboriginal and Torres Strait Islanders make up 1.7% of the total data, and make up a comparable percentage of FAMILY LAW cases (1.8%) and CRIMI-NAL LAW cases (1.9%), however at a far lower performance than the average when compared to data for all demographics. For example CRIMINAL LAW cases have p = 0.571, r = 0.571, and f1 = 0.571 (vs. Table 7 : Results for self-identified groups in the data p = 0.721, r = 0.592, and f = 0.650 when evaluating over all the data). For LITIGATIONS AND DISPUTE RES-OLUTIONS, the largest overall class, the performance for ATS was p = 0.333, r = 0.600, and f = 0.429 (vs. p = 0.709, r = 0.552, and f1 = 0.621 when evaluating over all the data). However only 0.7% of documents in this class were submitted by those who identified as ATS.",
"cite_spans": [],
"ref_spans": [
{
"start": 340,
"end": 347,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fairness",
"sec_num": "7"
},
{
"text": "The number of submitted requests by seniors (SEN: those who identify as being over 65 years of age) are almost double the number of Aboriginal and Torres Strait Islanders, making up slightly over 3% of the total number of samples in the data (3.2% of the data used in the annotator-weighted system). Our initial hypothesis for the poor performance of the SEN data was that perhaps they made enquiries in certain areas of law that inherently had low performance. For example, if most of the seniors' enquiries fell in the Tier III and IV categories, then this would help explain the overall poorer performance of the SEN group. However, our analysis showed otherwise: the vast majority of classes with at least 5 instances from the SEN group are Tier I or Tier II, and yet for the majority of these, the relative performance is below the overall performance for that class, as seen in Table 8 . This table shows the breakdown of the areas of law for SEN where the number of true instances is at least 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 884,
"end": 891,
"text": "Table 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Fairness",
"sec_num": "7"
},
{
"text": "Many of the enquiries by seniors fall in Tier I and II categories, which make up the best-performing classes in general. Even though the SEN group only makes up around 3% of the total number of instances, they do however make up almost 21% of all enquires regarding ELDER LAW, over 6% of all BANKING AND FINANCE, almost 6% of CONSUMER LAW, and 10% of PLANNING AND LOCAL GOVERNMENT. While PROPERTY LAW and PLANNING AND LOCAL GOV-ERNMENT perform well in comparison to the overall results for these classes, the other classes within Tier I and II categories underperform. In particular, EMPLOYEES AND VOLUNTEERS and FAMILY LAW are overall top-performing classes as shown in Table 6 , yet perform poorly for the SEN group, obtaining an f1-score of 0.571 and 0.545, respectively. Furthermore, in instances where precision is high for a class, recall often pulls down the f1-score for the SEN group.",
"cite_spans": [],
"ref_spans": [
{
"start": 671,
"end": 678,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Fairness",
"sec_num": "7"
},
{
"text": "7 This points to the model being systematically biased against this particular demographic, pointing to the need for explicit model debiasing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fairness",
"sec_num": "7"
},
{
"text": "The findings of our paper presents a preliminary exploration of the application of NLP for social good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "The results of this paper shows the importance of building fairer models. One approach as a future avenue in this endeavour is by incorporating adversarial learning (Li et al., 2018) or null space projection (Ravfogel et al., 2020) to learn representations that are invariant to subgroups, so as to limit the model from learning undesirable correlations between the legal categories and sub-group features. The challenge though, is that subgroup information is not always available in the data, particularly in the Justice Connect intake tool to request help where all demographic information is volunteered and not mandatory. This means that possibly identifying proxy attributes (e.g. postcode and education level as a potential means to identify income status) or areas of law that are highly associated with certain sub-groups (e.g. MIGRATION issues may be likely submitted by persons whose main language is not English, and ELDER LAW issues may likely be submitted by seniors). In addition to the subgroups already presented in this study, in our next iteration we will expand our set of the subgroups to include people who identify as CALD (culturally and linguistically diverse), those who identify as having a disability, as well as information on education levels. Step 2: Highlighting supporting text for the areas of law chosen",
"cite_spans": [
{
"start": 165,
"end": 182,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 208,
"end": 231,
"text": "(Ravfogel et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "https://www.acnc.gov.au/charity/ 232d6dcbcaa1550da90f825fe6fab643#history",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that in the case of 0 areas of law, NOT A LEGAL ISSUE is assigned, and in the case of >0 areas of law, they are drawn from the remainder of the labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "SeeFigure 2in Appendix for details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is the case for areas of law such as NEIGHBOUR-HOOD DISPUTES and CONSUMER LAW, which can be seen inTable 8(in the Appendix).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "109 56 FUNDRAISING LAW 0.000 0.000 0.000 0.000 3153 0 1 0 IT 0.000 0.000 0.000 0.000 3153 0 1 0 INQUIRIES 0.000 0.000 0.000 0.000 3150 0 4 0 ENVIRONMENT 0.000 0.000 0.000 0.000 3147 0 7 0 NATIVE TITLE 0.000 0.000 0.000 0.000 3153 0 1 0 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On the dangers of stochastic parrots: Can language models be too big?",
"authors": [
{
"first": "M",
"middle": [],
"last": "Emily",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "Angelina",
"middle": [],
"last": "Gebru",
"suffix": ""
},
{
"first": "Shmargaret",
"middle": [],
"last": "Mcmillan-Major",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shmitchell",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency",
"volume": "",
"issue": "",
"pages": "610--623",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily M Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Confer- ence on Fairness, Accountability, and Transparency, pages 610-623.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Legal Australia-wide survey (LAW survey): Legal need in Australia. Law and Justice Foundation of NSW",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Coumarelos",
"suffix": ""
},
{
"first": "Deborah",
"middle": [],
"last": "Macourt",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "People",
"suffix": ""
},
{
"first": "Hugh",
"middle": [
"M"
],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Zhigang",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Reiny",
"middle": [],
"last": "Iriana",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Ramsey",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christine Coumarelos, Deborah Macourt, Julie People, Hugh M. McDonald, Zhigang Wei, Reiny Iriana, and Stephanie Ramsey. 2012. Legal Australia-wide survey (LAW survey): Legal need in Australia. Law and Justice Foundation of NSW.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Evaluating preprocessing techniques in a text classification problem",
"authors": [
{
"first": "Teresa",
"middle": [],
"last": "Goncalves",
"suffix": ""
},
{
"first": "Paulo",
"middle": [],
"last": "Quaresma",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Conference of the Brazilian Computer Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teresa Goncalves and Paulo Quaresma. 2005. Evaluat- ing preprocessing techniques in a text classification problem. In Proceedings of the Conference of the Brazilian Computer Society.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Towards robust and privacy-preserving text representations",
"authors": [
{
"first": "Yitong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text represen- tations. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 25-30, Melbourne, Australia.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Null it out: Guarding protected attributes by iterative nullspace projection",
"authors": [
{
"first": "Shauli",
"middle": [],
"last": "Ravfogel",
"suffix": ""
},
{
"first": "Yanai",
"middle": [],
"last": "Elazar",
"suffix": ""
},
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Twiton",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7237--7256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7237-7256, Online.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Legal area classification: A comparative study of text classifiers on Singapore Supreme Court judgments",
"authors": [
{
"first": "Jerrold",
"middle": [],
"last": "Soh",
"suffix": ""
},
{
"first": "How",
"middle": [
"Khang"
],
"last": "Lim",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"Ernst"
],
"last": "Chai",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Natural Legal Language Processing Workshop 2019",
"volume": "",
"issue": "",
"pages": "67--77",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2208"
]
},
"num": null,
"urls": [],
"raw_text": "Jerrold Soh, How Khang Lim, and Ian Ernst Chai. 2019. Legal area classification: A comparative study of text classifiers on Singapore Supreme Court judgments. In Proceedings of the Natural Legal Language Processing Workshop 2019, pages 67-77, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Exploring the use of text classification in the legal domain",
"authors": [
{
"first": "Octavia-Maria",
"middle": [],
"last": "Sulea",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Mihaela",
"middle": [],
"last": "Vela",
"suffix": ""
},
{
"first": "Liviu",
"middle": [
"P"
],
"last": "Dinu",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "van Genabith",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Octavia-Maria Sulea, Marcos Zampieri, Shervin Mal- masi, Mihaela Vela, Liviu P. Dinu, and Josef van Genabith. 2017a. Exploring the use of text classi- fication in the legal domain.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Predicting the law area and decisions of french supreme court cases",
"authors": [
{
"first": "Octavia-Maria",
"middle": [],
"last": "Sulea",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Mihaela",
"middle": [],
"last": "Vela",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Octavia-Maria Sulea, Marcos Zampieri, Mihaela Vela, and Josef van Genabith. 2017b. Predicting the law area and decisions of french supreme court cases.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The Department of Justice and Regulation",
"authors": [],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The Department of Justice and Regulation. 2012. Ac- cess to justice review. Victorian Government Re- port.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Department of Justice and Regulation",
"authors": [],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The Department of Justice and Regulation. 2016. Ac- cess to justice review. Victorian Government Re- port.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "LEDGAR: A large-scale multi-label corpus for text classification of legal provisions in contracts",
"authors": [
{
"first": "Don",
"middle": [],
"last": "Tuggener",
"suffix": ""
},
{
"first": "Pius",
"middle": [],
"last": "von D\u00e4niken",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Peetz",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Cieliebak",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "1235--1241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Don Tuggener, Pius von D\u00e4niken, Thomas Peetz, and Mark Cieliebak. 2020. LEDGAR: A large-scale multi-label corpus for text classification of legal pro- visions in contracts. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 1235-1241, Marseille, France. European Language Resources Association.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Step 1: Choosing the areas of law and certainty levels of five and half months,4 and are based on 4,062"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Figure 3: Step 2: Highlighting supporting text for the areas of law chosen"
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>SYSTEM</td><td>p</td><td>r</td><td>f</td></tr><tr><td colspan=\"4\">majority-vote 0.565 0.555 0.560</td></tr><tr><td colspan=\"4\">confidence-weighted 0.622 0.521 0.567</td></tr><tr><td colspan=\"4\">annotator-weighted 0.743 0.569 0.645</td></tr></table>",
"text": "Results for the three baselines, and BERT trained on the three data sets (majority-vote, confidenceweighted, and annotator-weighted); evaluation in each case is against the majority-vote test set."
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">: Results for dynamic probability thresholding</td></tr><tr><td>of the BERT models</td><td/><td/><td/></tr><tr><td>SYSTEM</td><td>p</td><td>r</td><td>f</td></tr><tr><td colspan=\"4\">confidence-weighted 0.839 0.462 0.596</td></tr><tr><td colspan=\"4\">annotator-weighted 0.772 0.608 0.680</td></tr></table>",
"text": ""
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": ""
},
"TABREF5": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Breakdown of the performance for the 'annotator-weighted' system per class, as well as the TN (true negative), FP (false positive), FN (false negative), and TP (true positive) counts."
},
"TABREF7": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>AREA OF LAW</td><td>p</td><td>r</td><td>f</td><td>TIER</td></tr></table>",
"text": "AND VOLUNTEERS 0.500 0.667 0.571 Tier I FAMILY LAW 0.600 0.500 0.545 Tier I HOUSING AND RESIDENTIAL TENANCIES 0.750 0.462 0.571 Tier I LITIGATION AND DISPUTE RESOLUTION 0.556 0.652 0.600 Tier II NOT A LEGAL ISSUE 0.667 0.750 0.706 Tier II CRIMINAL LAW 0.714 0.500 0.588 Tier II NEIGHBOURHOOD DISPUTES 1.000 0.500 0.667 Tier II PLANNING AND LOCAL GOVERNMENT 1.000 0.600 0.750 Tier II BANKING AND FINANCE 0.400 0.333 0.364 Tier III CONSUMER LAW 1.000 0.167 0.286 Tier II ELDER LAW 0.000 0.000 0.000 Tier III TORTS AND COMPENSATION 0.000 0.000 0.000 Tier III PROPERTY LAW 1.000 0.700 0.824 Tier III"
},
"TABREF8": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Breakdown of areas of law for SEN for where number of true instances \u2265 5"
}
}
}
}