| index | modelId | label | readme |
|---|---|---|---|
0 | distilbert-base-uncased-finetuned-sst-2-english | [
"NEGATIVE",
"POSITIVE"
] | ---
language: en
license: apache-2.0
datasets:
- sst2
- glue
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: sst2
split: validation
metrics:
- type: accuracy
value: 0.9105504587155964
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2YyOGMxYjY2Y2JhMjkxNjIzN2FmMjNiNmM2ZWViNGY3MTNmNWI2YzhiYjYxZTY0ZGUyN2M1NGIxZjRiMjQwZiIsInZlcnNpb24iOjF9.uui0srxV5ZHRhxbYN6082EZdwpnBgubPJ5R2-Wk8HTWqmxYE3QHidevR9LLAhidqGw6Ih93fK0goAXncld_gBg
- type: precision
value: 0.8978260869565218
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzgwYTYwYjA2MmM0ZTYwNDk0M2NmNTBkZmM2NGNhYzQ1OGEyN2NkNDQ3Mzc2NTQyMmZiNDJiNzBhNGVhZGUyOSIsInZlcnNpb24iOjF9.eHjLmw3K02OU69R2Au8eyuSqT3aBDHgZCn8jSzE3_urD6EUSSsLxUpiAYR4BGLD_U6-ZKcdxVo_A2rdXqvUJDA
- type: recall
value: 0.9301801801801802
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGIzM2E3MTI2Mzc2MDYwNmU3ZTVjYmZmZDBkNjY4ZTc5MGY0Y2FkNDU3NjY1MmVkNmE3Y2QzMzAwZDZhOWY1NiIsInZlcnNpb24iOjF9.PUZlqmct13-rJWBXdHm5tdkXgETL9F82GNbbSR4hI8MB-v39KrK59cqzFC2Ac7kJe_DtOeUyosj34O_mFt_1DQ
- type: auc
value: 0.9716626673402374
name: AUC
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDM0YWIwZmQ4YjUwOGZmMWU2MjI1YjIxZGQ2MzNjMzRmZmYxMzZkNGFjODhlMDcyZDM1Y2RkMWZlOWQ0MWYwNSIsInZlcnNpb24iOjF9.E7GRlAXmmpEkTHlXheVkuL1W4WNjv4JO3qY_WCVsTVKiO7bUu0UVjPIyQ6g-J1OxsfqZmW3Leli1wY8vPBNNCQ
- type: f1
value: 0.9137168141592922
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGU4MjNmOGYwZjZjMDQ1ZTkyZTA4YTc1MWYwOTM0NDM4ZWY1ZGVkNDY5MzNhYTQyZGFlNzIyZmUwMDg3NDU0NyIsInZlcnNpb24iOjF9.mW5ftkq50Se58M-jm6a2Pu93QeKa3MfV7xcBwvG3PSB_KNJxZWTCpfMQp-Cmx_EMlmI2siKOyd8akYjJUrzJCA
- type: loss
value: 0.39013850688934326
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTZiNzAyZDc0MzUzMmE1MGJiN2JlYzFiODE5ZTNlNGE4MmI4YzRiMTc2ODEzMTUwZmEzOTgxNzc4YjJjZTRmNiIsInZlcnNpb24iOjF9.VqIC7uYC-ZZ8ss9zQOlRV39YVOOLc5R36sIzCcVz8lolh61ux_5djm2XjpP6ARc6KqEnXC4ZtfNXsX2HZfrtCQ
- task:
type: text-classification
name: Text Classification
dataset:
name: sst2
type: sst2
config: default
split: train
metrics:
- type: accuracy
value: 0.9885521685548412
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2I3NzU3YzhmMDkxZTViY2M3OTY1NmI0ZTdmMDQxNjNjYzJiZmQxNzczM2E4YmExYTY5ODY0NDBkY2I4ZjNkOCIsInZlcnNpb24iOjF9.4Gtk3FeVc9sPWSqZIaeUXJ9oVlPzm-NmujnWpK2y5s1Vhp1l6Y1pK5_78wW0-NxSvQqV6qd5KQf_OAEpVAkQDA
- type: precision
value: 0.9881965062029833
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDdlZDMzY2I3MTAwYTljNmM4MGMyMzU2YjAzZDg1NDYwN2ZmM2Y5OWZhMjUyMGJiNjY1YmZiMzFhMDI2ODFhNyIsInZlcnNpb24iOjF9.cqmv6yBxu4St2mykRWrZ07tDsiSLdtLTz2hbqQ7Gm1rMzq9tdlkZ8MyJRxtME_Y8UaOG9rs68pV-gKVUs8wABw
- type: precision
value: 0.9885521685548412
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjFlYzAzNmE1YjljNjUwNzBjZjEzZDY0ZDQyMmY5ZWM2OTBhNzNjYjYzYTk1YWE1NjU3YTMxZDQwOTE1Y2FkNyIsInZlcnNpb24iOjF9.jnCHOkUHuAOZZ_ZMVOnetx__OVJCS6LOno4caWECAmfrUaIPnPNV9iJ6izRO3sqkHRmxYpWBb-27GJ4N3LU-BQ
- type: precision
value: 0.9885639626373408
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGUyODFjNjBlNTE2MTY3ZDAxOGU1N2U0YjUyY2NiZjhkOGVmYThjYjBkNGU3NTRkYzkzNDQ2MmMwMjkwMWNiMyIsInZlcnNpb24iOjF9.zTNabMwApiZyXdr76QUn7WgGB7D7lP-iqS3bn35piqVTNsv3wnKjZOaKFVLIUvtBXq4gKw7N2oWxvWc4OcSNDg
- type: recall
value: 0.9886145346602994
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTU1YjlhODU3YTkyNTdiZDcwZGFlZDBiYjY0N2NjMGM2NTRiNjQ3MDNjNGMxOWY2ZGQ4NWU1YmMzY2UwZTI3YSIsInZlcnNpb24iOjF9.xaLPY7U-wHsJ3DDui1yyyM-xWjL0Jz5puRThy7fczal9x05eKEQ9s0a_WD-iLmapvJs0caXpV70hDe2NLcs-DA
- type: recall
value: 0.9885521685548412
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODE0YTU0MDBlOGY4YzU0MjY5MzA3OTk2OGNhOGVkMmU5OGRjZmFiZWI2ZjY5ODEzZTQzMTI0N2NiOTVkNDliYiIsInZlcnNpb24iOjF9.SOt1baTBbuZRrsvGcak2sUwoTrQzmNCbyV2m1_yjGsU48SBH0NcKXicidNBSnJ6ihM5jf_Lv_B5_eOBkLfNWDQ
- type: recall
value: 0.9885521685548412
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWNkNmM0ZGRlNmYxYzIwNDk4OTI5MzIwZWU1NzZjZDVhMDcyNDFlMjBhNDQxODU5OWMwMWNhNGEzNjY3ZGUyOSIsInZlcnNpb24iOjF9.b15Fh70GwtlG3cSqPW-8VEZT2oy0CtgvgEOtWiYonOovjkIQ4RSLFVzVG-YfslaIyfg9RzMWzjhLnMY7Bpn2Aw
- type: f1
value: 0.9884019815052447
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmM4NjQ5Yjk5ODRhYTU1MTY3MmRhZDBmODM1NTg3OTFiNWM4NDRmYjI0MzZkNmQ1MzE3MzcxODZlYzBkYTMyYSIsInZlcnNpb24iOjF9.74RaDK8nBVuGRl2Se_-hwQvP6c4lvVxGHpcCWB4uZUCf2_HoC9NT9u7P3pMJfH_tK2cpV7U3VWGgSDhQDi-UBQ
- type: f1
value: 0.9885521685548412
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDRmYWRmMmQ0YjViZmQxMzhhYTUyOTE1MTc0ZDU1ZjQyZjFhMDYzYzMzZDE0NzZlYzQyOTBhMTBhNmM5NTlkMiIsInZlcnNpb24iOjF9.VMn_psdAHIZTlW6GbjERZDe8MHhwzJ0rbjV_VJyuMrsdOh5QDmko-wEvaBWNEdT0cEKsbggm-6jd3Gh81PfHAQ
- type: f1
value: 0.9885546181087554
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjUyZWFhZDZhMGQ3MzBmYmRiNDVmN2FkZDBjMjk3ODk0OTAxNGZkMWE0NzU5ZjI0NzE0NGZiNzM0N2Y2NDYyOSIsInZlcnNpb24iOjF9.YsXBhnzEEFEW6jw3mQlFUuIrW7Gabad2Ils-iunYJr-myg0heF8NEnEWABKFE1SnvCWt-69jkLza6SupeyLVCA
- type: loss
value: 0.040652573108673096
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTc3YjU3MjdjMzkxODA5MjU5NGUyY2NkMGVhZDg3ZWEzMmU1YWVjMmI0NmU2OWEyZTkzMTVjNDZiYTc0YjIyNCIsInZlcnNpb24iOjF9.lA90qXZVYiILHMFlr6t6H81Oe8a-4KmeX-vyCC1BDia2ofudegv6Vb46-4RzmbtuKeV6yy6YNNXxXxqVak1pAg
---
# DistilBERT base uncased finetuned SST-2
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
## Model Details
**Model Description:** This model is a fine-tuned checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned on SST-2.
This model reaches an accuracy of 91.3 on the dev set (for comparison, the BERT bert-base-uncased model reaches an accuracy of 92.7).
- **Developed by:** Hugging Face
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details about DistilBERT, we encourage users to check out [this model card](https://huggingface.co/distilbert-base-uncased).
- **Resources for more information:**
- [Model Documentation](https://huggingface.co/docs/transformers/main/en/model_doc/distilbert#transformers.DistilBertForSequenceClassification)
- [DistilBERT paper](https://arxiv.org/abs/1910.01108)
## How to Get Started With the Model
Example of single-label classification:
```python
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()
print(model.config.id2label[predicted_class_id])
```
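The raw logits can be converted into class probabilities with a softmax. A minimal sketch of that step using illustrative logits (no model download required; the values are made up):

```python
import torch

# Illustrative logits for one input, shaped [batch, num_labels];
# index 0 is NEGATIVE and index 1 is POSITIVE (values are made up).
logits = torch.tensor([[-2.1, 3.4]])
probs = torch.softmax(logits, dim=-1)

predicted_class_id = probs.argmax(dim=-1).item()
id2label = {0: "NEGATIVE", 1: "POSITIVE"}
print(id2label[predicted_class_id], round(probs[0, predicted_class_id].item(), 3))
# POSITIVE 0.996
```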
## Uses
#### Direct Use
This model can be used for single-label text classification: it is a sentiment analysis checkpoint fine-tuned on SST-2 and is intended to be used directly for that task. See the model hub to look for versions fine-tuned on a task that interests you.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
Based on a few experimentations, we observed that this model could produce biased predictions that target underrepresented populations.
For instance, for sentences like `This film was filmed in COUNTRY`, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this [colab](https://colab.research.google.com/gist/ageron/fb2f64fb145b4bc7c49efc97e5f114d3/biasmap.ipynb), [Aurélien Géron](https://twitter.com/aureliengeron) made an interesting map plotting these probabilities for each country.
<img src="https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/resolve/main/map.jpeg" alt="Map of positive probabilities per country." width="500"/>
We strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: [WinoBias](https://huggingface.co/datasets/wino_bias), [WinoGender](https://huggingface.co/datasets/super_glue), [Stereoset](https://huggingface.co/datasets/stereoset).
# Training
#### Training Data
The authors fine-tuned the model on the Stanford Sentiment Treebank ([sst2](https://huggingface.co/datasets/sst2)) corpus.
#### Training Procedure
###### Fine-tuning hyperparameters
- learning_rate = 1e-5
- batch_size = 32
- warmup = 600
- max_seq_length = 128
- num_train_epochs = 3.0
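The `warmup = 600` value above is a step count: the learning rate ramps linearly from 0 up to 1e-5 over the first 600 optimizer steps. A minimal sketch of that schedule (holding the rate constant afterwards is an assumption; the card does not state the post-warmup decay):

```python
def lr_at_step(step, base_lr=1e-5, warmup_steps=600):
    """Linear warmup to base_lr over warmup_steps, then hold (assumed)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

print(lr_at_step(300))  # halfway through warmup
print(lr_at_step(600))  # warmup complete, at the base learning rate
```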
|
1 | roberta-base-openai-detector | [
"Fake",
"Real"
] | ---
language: en
license: mit
tags:
- exbert
datasets:
- bookcorpus
- wikipedia
---
# RoBERTa Base OpenAI Detector
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:** RoBERTa base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the [largest GPT-2 model](https://huggingface.co/gpt2-xl), the 1.5B parameter version.
- **Developed by:** OpenAI, see [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) and [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for full author list
- **Model Type:** Fine-tuned transformer-based language model
- **Language(s):** English
- **License:** MIT
- **Related Models:** [RoBERTa base](https://huggingface.co/roberta-base), [GPT-2 XL (the 1.5B parameter version)](https://huggingface.co/gpt2-xl), [GPT-2 Large (the 774M parameter version)](https://huggingface.co/gpt2-large), [GPT-2 Medium (the 355M parameter version)](https://huggingface.co/gpt2-medium) and [GPT-2 (the 124M parameter version)](https://huggingface.co/gpt2)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) (see, in particular, the section beginning on page 12 about Automated ML-based detection).
- [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector)
- [OpenAI Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
- [Explore the detector model here](https://huggingface.co/openai-detector)
## Uses
#### Direct Use
The model is a classifier that can be used to detect text generated by GPT-2 models. However, it is strongly suggested not to use it as a ChatGPT detector for the purposes of making grave allegations of academic misconduct against undergraduates and others, as this model might give inaccurate results in the case of ChatGPT-generated input.
#### Downstream Use
The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further discussion.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
In their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research.
In a related [blog post](https://openai.com/blog/gpt-2-1-5b-release/), the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write:
> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective.
The model developers also [report](https://openai.com/blog/gpt-2-1-5b-release/) finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.
#### Bias
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by RoBERTa base and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the [RoBERTa base](https://huggingface.co/roberta-base) and [GPT-2 XL](https://huggingface.co/gpt2-xl) model cards for more information). The developers of this model discuss these issues further in their [paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
## Training
#### Training Data
The model is a sequence classifier based on RoBERTa base (see the [RoBERTa base model card](https://huggingface.co/roberta-base) for more details on the RoBERTa base training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available [here](https://github.com/openai/gpt-2-output-dataset)).
#### Training Procedure
The model developers write that:
> We based a sequence classifier on RoBERTaBASE (125 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.
They later state:
> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the training procedure.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
#### Testing Data, Factors and Metrics
The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by:
> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.
#### Results
The model developers [find](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf):
> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling ([Holtzman et al., 2019](https://arxiv.org/abs/1904.09751)). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), Figure 1 (on page 14) and Figure 2 (on page 16) for full results.
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for details on the modeling architecture and training procedure.
## Citation Information
```bibtex
@article{solaiman2019release,
title={Release strategies and the social impacts of language models},
author={Solaiman, Irene and Brundage, Miles and Clark, Jack and Askell, Amanda and Herbert-Voss, Ariel and Wu, Jeff and Radford, Alec and Krueger, Gretchen and Kim, Jong Wook and Kreps, Sarah and others},
journal={arXiv preprint arXiv:1908.09203},
year={2019}
}
```
APA:
- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.
## Model Card Authors
This model card was written by the team at Hugging Face.
## How to Get Started with the Model
This model can be instantiated and run with a Transformers pipeline:
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="roberta-base-openai-detector")
print(pipe("Hello world! Is this content AI-generated?")) # [{'label': 'Real', 'score': 0.8036582469940186}]
``` |
2 | roberta-large-mnli | [
"CONTRADICTION",
"NEUTRAL",
"ENTAILMENT"
] | ---
language:
- en
license: mit
tags:
- autogenerated-modelcard
datasets:
- multi_nli
- wikipedia
- bookcorpus
---
# roberta-large-mnli
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation-results)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)
## Model Details
**Model Description:** roberta-large-mnli is the [RoBERTa large model](https://huggingface.co/roberta-large) fine-tuned on the [Multi-Genre Natural Language Inference (MNLI)](https://huggingface.co/datasets/multi_nli) corpus. The underlying model was pretrained on English-language text with a masked language modeling (MLM) objective.
- **Developed by:** See [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta) for model developers
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** MIT
- **Parent Model:** This model is a fine-tuned version of the RoBERTa large model. Users should see the [RoBERTa large model card](https://huggingface.co/roberta-large) for relevant information.
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/1907.11692)
- [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta)
## How to Get Started with the Model
Use the code below to get started with the model. The model can be loaded with the zero-shot-classification pipeline like so:
```python
from transformers import pipeline
classifier = pipeline('zero-shot-classification', model='roberta-large-mnli')
```
You can then use this pipeline to classify sequences into any of the class names you specify. For example:
```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
```
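Under the hood, zero-shot classification frames each candidate label as an NLI hypothesis (by default something like "This example is travel.") and ranks labels by entailment probability. A minimal sketch of that ranking step, using made-up entailment scores rather than a real forward pass:

```python
import math

def rank_labels(entailment_scores):
    """Softmax per-label entailment scores and sort labels by probability."""
    exps = {label: math.exp(s) for label, s in entailment_scores.items()}
    total = sum(exps.values())
    return sorted(((label, v / total) for label, v in exps.items()),
                  key=lambda kv: -kv[1])

# Made-up entailment scores for the example above (not real model outputs).
print(rank_labels({"travel": 2.3, "cooking": -1.1, "dancing": -0.4}))
```

With the real pipeline, these scores come from the entailment logit of each premise/hypothesis pair; the pipeline handles the hypothesis templating and normalization internally.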
## Uses
#### Direct Use
This fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification (see the [GitHub repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta) for examples) and zero-shot sequence classification.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The [RoBERTa large model card](https://huggingface.co/roberta-large) notes that: "The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral."
Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
sequence_to_classify = "The CEO had a strong handshake."
candidate_labels = ['male', 'female']
hypothesis_template = "This text speaks about a {} profession."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
```
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
This model was fine-tuned on the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) corpus. Also see the [MNLI data card](https://huggingface.co/datasets/multi_nli) for more information.
As described in the [RoBERTa large model card](https://huggingface.co/roberta-large):
> The RoBERTa model was pretrained on the union of five datasets:
>
> - [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books;
> - [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers);
> - [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 million English news articles crawled between September 2016 and February 2019;
> - [OpenWebText](https://github.com/jcpeterson/openwebtext), an open-source recreation of the WebText dataset used to train GPT-2;
> - [Stories](https://arxiv.org/abs/1806.02847), a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.
>
> Together these datasets weigh 160GB of text.
Also see the [bookcorpus data card](https://huggingface.co/datasets/bookcorpus) and the [wikipedia data card](https://huggingface.co/datasets/wikipedia) for additional information.
#### Training Procedure
##### Preprocessing
As described in the [RoBERTa large model card](https://huggingface.co/roberta-large):
> The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of
> the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
> with `<s>` and the end of one by `</s>`.
>
> The details of the masking procedure for each sentence are the following:
> - 15% of the tokens are masked.
> - In 80% of the cases, the masked tokens are replaced by `<mask>`.
> - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
> - In the 10% remaining cases, the masked tokens are left as is.
>
> Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
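The 15% / 80-10-10 procedure quoted above can be sketched as follows (an illustrative re-implementation, not fairseq's actual code; real dynamic masking operates on token ids per batch):

```python
import random

def mask_tokens(tokens, vocab, mask_token="<mask>", seed=None):
    """BERT/RoBERTa-style masking: select ~15% of positions; of those,
    80% become <mask>, 10% a random vocab token, 10% stay unchanged."""
    rng = random.Random(seed)
    out = list(tokens)
    for i in range(len(out)):
        if rng.random() < 0.15:             # position selected for masking
            r = rng.random()
            if r < 0.8:
                out[i] = mask_token         # 80%: replace with <mask>
            elif r < 0.9:
                out[i] = rng.choice(vocab)  # 10%: random replacement token
            # remaining 10%: leave the token as is
    return out

vocab = ["the", "cat", "sat", "on", "mat"]
print(mask_tokens(["the", "cat", "sat", "on", "the", "mat"], vocab, seed=0))
```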
##### Pretraining
Also as described in the [RoBERTa large model card](https://huggingface.co/roberta-large):
> The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
> optimizer used is Adam with a learning rate of 4e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and
> \\(\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning
> rate after.
## Evaluation
The following evaluation information is extracted from the associated [GitHub repo for RoBERTa](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta).
#### Testing Data, Factors and Metrics
The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics:
- **Dataset:** Part of [GLUE (Wang et al., 2019)](https://arxiv.org/pdf/1804.07461.pdf), the General Language Understanding Evaluation benchmark, a collection of 9 datasets for evaluating natural language understanding systems. Specifically, the model was evaluated on the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) corpus. See the [GLUE data card](https://huggingface.co/datasets/glue) or [Wang et al. (2019)](https://arxiv.org/pdf/1804.07461.pdf) for further information.
- **Tasks:** NLI. [Wang et al. (2019)](https://arxiv.org/pdf/1804.07461.pdf) describe the inference task for MNLI as:
> The Multi-Genre Natural Language Inference Corpus [(Williams et al., 2018)](https://arxiv.org/abs/1704.05426) is a crowd-sourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. We use the standard test set, for which we obtained private labels from the authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. We also use and recommend the SNLI corpus [(Bowman et al., 2015)](https://arxiv.org/abs/1508.05326) as 550k examples of auxiliary training data.
- **Metrics:** Accuracy
- **Dataset:** [XNLI (Conneau et al., 2018)](https://arxiv.org/pdf/1809.05053.pdf), the extension of the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) corpus to 15 languages: English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi, Swahili and Urdu. See the [XNLI data card](https://huggingface.co/datasets/xnli) or [Conneau et al. (2018)](https://arxiv.org/pdf/1809.05053.pdf) for further information.
- **Tasks:** Translate-test (e.g., the model is used to translate input sentences in other languages to the training language)
- **Metrics:** Accuracy
#### Results
GLUE test results (dev set, single model, single-task fine-tuning): 90.2 on MNLI
XNLI test results:
| Task | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur |
|:----:|:--:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| |91.3|82.91|84.27|81.24|81.74|83.13|78.28|76.79|76.64|74.17|74.05| 77.5| 70.9|66.65|66.81|
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1907.11692.pdf).
- **Hardware Type:** 1024 V100 GPUs
- **Hours used:** 24 hours (one day)
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://arxiv.org/pdf/1907.11692.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{liu2019roberta,
title = {RoBERTa: A Robustly Optimized BERT Pretraining Approach},
author = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and
Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and
Luke Zettlemoyer and Veselin Stoyanov},
journal={arXiv preprint arXiv:1907.11692},
year = {2019},
}
``` |
8 | AIDA-UPM/bertweet-base-multi-mami | [
"misogynous",
"objectification",
"shaming",
"stereotype",
"violence"
] | ---
pipeline_tag: text-classification
tags:
- text-classification
- misogyny
language: en
license: apache-2.0
widget:
- text: "Women wear yoga pants because men don't stare at their personality"
example_title: "Misogyny detection"
---
# bertweet-base-multi-mami
This is a BERTweet-based model: it maps sentences and paragraphs to a 768-dimensional dense vector space and performs multi-label classification over five labels.
# Multilabels
label2id = {
    "misogynous": 0,
    "shaming": 1,
    "stereotype": 2,
    "objectification": 3,
    "violence": 4,
}
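Because these labels are not mutually exclusive, a multi-label head is typically read out with a per-label sigmoid and a decision threshold rather than a softmax. A sketch with made-up logits (the 0.5 threshold is an assumption, not stated in the card):

```python
import math

label2id = {"misogynous": 0, "shaming": 1, "stereotype": 2,
            "objectification": 3, "violence": 4}
id2label = {i: label for label, i in label2id.items()}

def active_labels(logits, threshold=0.5):
    """Apply a sigmoid to each label's logit and keep those above threshold."""
    probs = [1 / (1 + math.exp(-x)) for x in logits]
    return [id2label[i] for i, p in enumerate(probs) if p >= threshold]

# Made-up logits for one input, one value per label.
print(active_labels([2.1, -1.3, 0.8, 1.5, -2.0]))
# ['misogynous', 'stereotype', 'objectification']
```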
|
9 | ASCCCCCCCC/PENGMENGJIE-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
model_index:
- name: PENGMENGJIE-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PENGMENGJIE-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
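The hyperparameters above map onto a Hugging Face `TrainingArguments` object roughly as follows (a sketch, not the author's script; the `output_dir` is an assumption, and Adam with the listed betas/epsilon is the `Trainer`'s default optimizer, so it needs no explicit argument):

```python
from transformers import TrainingArguments

# Sketch of the training configuration described above; output_dir is assumed.
args = TrainingArguments(
    output_dir="PENGMENGJIE-finetuned-emotion",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",  # linear decay, the Trainer default
    num_epochs := 2 if False else None,  # placeholder removed below
)
```

Correction to the sketch: the epoch count is passed as `num_train_epochs=2` (the `TrainingArguments` parameter name differs from the card's `num_epochs` bullet).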
### Framework versions
- Transformers 4.9.0
- Pytorch 1.7.1+cpu
- Datasets 1.17.0
- Tokenizers 0.10.3
|
10 | ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-chinese-finetuned-amazon_zh_20000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-amazon_zh_20000
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1683
- Accuracy: 0.5224
- F1: 0.5194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.2051 | 1.0 | 2500 | 1.1717 | 0.506 | 0.4847 |
| 1.0035 | 2.0 | 5000 | 1.1683 | 0.5224 | 0.5194 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
11 | ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-chinese-amazon_zh_20000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-chinese-amazon_zh_20000
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1518
- Accuracy: 0.5092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.196 | 1.0 | 1250 | 1.1518 | 0.5092 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
12 | ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-amazon_zh_20000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-amazon_zh_20000
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3031
- Accuracy: 0.4406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.396 | 1.0 | 1250 | 1.3031 | 0.4406 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
13 | ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-amazon_zh_20000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-amazon_zh_20000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3516
- Accuracy: 0.414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4343 | 1.0 | 1250 | 1.3516 | 0.414 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
14 | ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declined",
"carry_on",
"change_accent",
"change_ai_name",
"change_language",
"change_speed",
"change_user_name",
"change_volume",
"confirm_reservation",
"cook_time",
"credit_limit",
"credit_limit_change",
"credit_score",
"current_location",
"damaged_card",
"date",
"definition",
"direct_deposit",
"directions",
"distance",
"do_you_have_pets",
"exchange_rate",
"expiration_date",
"find_phone",
"flight_status",
"flip_coin",
"food_last",
"freeze_account",
"fun_fact",
"gas",
"gas_type",
"goodbye",
"greeting",
"how_busy",
"how_old_are_you",
"improve_credit_score",
"income",
"ingredient_substitution",
"ingredients_list",
"insurance",
"insurance_change",
"interest_rate",
"international_fees",
"international_visa",
"jump_start",
"last_maintenance",
"lost_luggage",
"make_call",
"maybe",
"meal_suggestion",
"meaning_of_life",
"measurement_conversion",
"meeting_schedule",
"min_payment",
"mpg",
"new_card",
"next_holiday",
"next_song",
"no",
"nutrition_info",
"oil_change_how",
"oil_change_when",
"oos",
"order",
"order_checks",
"order_status",
"pay_bill",
"payday",
"pin_change",
"play_music",
"plug_type",
"pto_balance",
"pto_request",
"pto_request_status",
"pto_used",
"recipe",
"redeem_rewards",
"reminder",
"reminder_update",
"repeat",
"replacement_card_duration",
"report_fraud",
"report_lost_card",
"reset_settings",
"restaurant_reservation",
"restaurant_reviews",
"restaurant_suggestion",
"rewards_balance",
"roll_dice",
"rollover_401k",
"routing",
"schedule_maintenance",
"schedule_meeting",
"share_location",
"shopping_list",
"shopping_list_update",
"smart_home",
"spelling",
"spending_history",
"sync_device",
"taxes",
"tell_joke",
"text",
"thank_you",
"time",
"timer",
"timezone",
"tire_change",
"tire_pressure",
"todo_list",
"todo_list_update",
"traffic",
"transactions",
"transfer",
"translate",
"travel_alert",
"travel_notification",
"travel_suggestion",
"uber",
"update_playlist",
"user_name",
"vaccines",
"w2",
"weather",
"what_are_your_hobbies",
"what_can_i_ask_you",
"what_is_your_name",
"what_song",
"where_are_you_from",
"whisper_mode",
"who_do_you_work_for",
"who_made_you",
"yes"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
model_index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.9.0
- Pytorch 1.7.1+cpu
- Datasets 1.17.0
- Tokenizers 0.10.3
|
15 | AWTStress/stress_classifier | [
"Emotional Turmoil",
"Everyday Decision Making",
"Family Issues",
"Financial Problem",
"Health, Fatigue, or Physical Pain",
"Other",
"School",
"Social Relationships",
"Work"
] | ---
tags:
- generated_from_keras_callback
model-index:
- name: tmp_znj9o4r
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tmp_znj9o4r
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
16 | AWTStress/stress_score | [
"LABEL_0"
] | ---
tags:
- generated_from_keras_callback
model-index:
- name: stress_score
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# stress_score
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
17 | Abirate/bert_fine_tuned_cola | [
"acceptable",
"unacceptable"
] |
## Pretrained Model: BERT base model (cased)
BERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/1810.04805) and first released in this [repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference between english and English.
## Pretrained Model Description
BERT is an auto-encoder transformer model pretrained on a large corpus of English data (English Wikipedia + Books Corpus) in a self-supervised fashion. This means the targets are computed from the inputs themselves, and humans are not needed to label the data. It was pretrained with two objectives:
- Masked language modeling (MLM)
- Next sentence prediction (NSP)
## Fine-tuned Model Description: BERT fine-tuned Cola
The pretrained model can be fine-tuned on other NLP tasks. This BERT model has been fine-tuned on the CoLA dataset from the GLUE benchmark, an academic benchmark that measures the performance of ML models; CoLA is one of the datasets in GLUE.
By fine-tuning BERT on the CoLA dataset, the model can now classify a given sentence as grammatically acceptable or unacceptable.
## How to use
###### Directly with a pipeline for a text-classification NLP task
```python
from transformers import pipeline
cola = pipeline('text-classification', model='Abirate/bert_fine_tuned_cola')
cola("Tunisia is a beautiful country")
[{'label': 'acceptable', 'score': 0.989352285861969}]
```
###### Breaking down all the steps (Tokenization, Modeling, Postprocessing)
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf
import numpy as np
tokenizer = AutoTokenizer.from_pretrained('Abirate/bert_fine_tuned_cola')
model = TFAutoModelForSequenceClassification.from_pretrained("Abirate/bert_fine_tuned_cola")
text = "Tunisia is a beautiful country."
encoded_input = tokenizer(text, return_tensors='tf')
#The logits
output = model(encoded_input)
#Postprocessing
probas_output = tf.math.softmax(tf.squeeze(output['logits']), axis = -1)
class_preds = np.argmax(probas_output, axis = -1)
#Predicting the class acceptable or not acceptable
model.config.id2label[class_preds]
#Result
'acceptable'
``` |
18 | ActivationAI/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9280065074208208
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2128
- Accuracy: 0.928
- F1: 0.9280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8151 | 1.0 | 250 | 0.3043 | 0.907 | 0.9035 |
| 0.24 | 2.0 | 500 | 0.2128 | 0.928 | 0.9280 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
65 | Adi2K/Priv-Consent | [
"CON",
"NOT"
] | ---
language: eng
widget:
- text: "You can control cookies and tracking tools. To learn how to manage how we - and our vendors - use cookies and other tracking tools, please click here."
datasets:
- Adi2K/autonlp-data-Priv-Consent
---
# Model
- Problem type: Binary Classification
- Model ID: 12592372
## Validation Metrics
- Loss: 0.23033875226974487
- Accuracy: 0.9138655462184874
- Precision: 0.9087136929460581
- Recall: 0.9201680672268907
- AUC: 0.9690346726926065
- F1: 0.9144050104384133
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Adi2K/autonlp-Priv-Consent-12592372
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Adi2K/autonlp-Priv-Consent-12592372", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Adi2K/autonlp-Priv-Consent-12592372", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
66 | AhmedBou/TuniBert | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
language:
- ar
tags:
- sentiment analysis
- classification
- arabic dialect
- tunisian dialect
---
This is a fine-tuned BERT model on Tunisian dialect text (dataset used: AhmedBou/Tunisian-Dialect-Corpus), ready for sentiment analysis and classification tasks.
- LABEL_1: Positive
- LABEL_2: Negative
- LABEL_0: Neutral
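To turn the generic `LABEL_*` outputs into readable sentiment names, the mapping above can be applied to the pipeline's predictions. A minimal sketch (the example score is made up for illustration):

```python
# Mapping taken from the model card above.
label_map = {"LABEL_0": "Neutral", "LABEL_1": "Positive", "LABEL_2": "Negative"}

def to_sentiment(predictions):
    """Replace raw LABEL_* names with their human-readable sentiment."""
    return [{"sentiment": label_map[p["label"]], "score": p["score"]}
            for p in predictions]

# Made-up pipeline output for illustration.
print(to_sentiment([{"label": "LABEL_1", "score": 0.93}]))
# → [{'sentiment': 'Positive', 'score': 0.93}]
```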
This work is an integral component of my Master's degree thesis and represents the culmination of extensive research and labor.
If you wish to use the Tunisian-Dialect-Corpus or the TuniBert model, kindly refer to [huggingface.co/AhmedBou](https://huggingface.co/AhmedBou) and [github.com/BoulahiaAhmed](https://github.com/BoulahiaAhmed) |
67 | Aimendo/autonlp-triage-35248482 | [
"acknowledgement",
"ads",
"approval",
"away",
"cancellation",
"doc_request",
"inquirey",
"modification",
"new_booking",
"refund"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Aimendo/autonlp-data-triage
co2_eq_emissions: 7.989144645413398
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 35248482
- CO2 Emissions (in grams): 7.989144645413398
## Validation Metrics
- Loss: 0.13783401250839233
- Accuracy: 0.9728654124457308
- Macro F1: 0.949537871674076
- Micro F1: 0.9728654124457308
- Weighted F1: 0.9732422812610365
- Macro Precision: 0.9380372699332605
- Micro Precision: 0.9728654124457308
- Weighted Precision: 0.974548513256663
- Macro Recall: 0.9689346153591594
- Micro Recall: 0.9728654124457308
- Weighted Recall: 0.9728654124457308
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Aimendo/autonlp-triage-35248482
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Aimendo/autonlp-triage-35248482", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Aimendo/autonlp-triage-35248482", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
68 | Ajay191191/autonlp-Test-530014983 | [
"0",
"1"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Ajay191191/autonlp-data-Test
co2_eq_emissions: 55.10196329868386
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 530014983
- CO2 Emissions (in grams): 55.10196329868386
## Validation Metrics
- Loss: 0.23171618580818176
- Accuracy: 0.9298837645294338
- Precision: 0.9314414866901055
- Recall: 0.9279459594696022
- AUC: 0.979447403984557
- F1: 0.9296904373981703
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Ajay191191/autonlp-Test-530014983
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
71 | AkshatSurolia/ICD-10-Code-Prediction | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_100",
"LABEL_1000",
"LABEL_10000",
"LABEL_10001",
"LABEL_10002",
"LABEL_10003",
"LABEL_10004",
"LABEL_10005",
"LABEL_10006",
"LABEL_10007",
"LABEL_10008",
"LABEL_10009",
"LABEL_1001",
"LABEL_10010",
"LABEL_10011",
"LABEL_10012",
"LABEL_10013",
"LABEL_10014",
"LABEL_10015",
"LABEL_10016",
"LABEL_10017",
"LABEL_10018",
"LABEL_10019",
"LABEL_1002",
"LABEL_10020",
"LABEL_10021",
"LABEL_10022",
"LABEL_10023",
"LABEL_10024",
"LABEL_10025",
"LABEL_10026",
"LABEL_10027",
"LABEL_10028",
"LABEL_10029",
"LABEL_1003",
"LABEL_10030",
"LABEL_10031",
"LABEL_10032",
"LABEL_10033",
"LABEL_10034",
"LABEL_10035",
"LABEL_10036",
"LABEL_10037",
"LABEL_10038",
"LABEL_10039",
"LABEL_1004",
"LABEL_10040",
"LABEL_10041",
"LABEL_10042",
"LABEL_10043",
"LABEL_10044",
"LABEL_10045",
"LABEL_10046",
"LABEL_10047",
"LABEL_10048",
"LABEL_10049",
"LABEL_1005",
"LABEL_10050",
"LABEL_10051",
"LABEL_10052",
"LABEL_10053",
"LABEL_10054",
"LABEL_10055",
"LABEL_10056",
"LABEL_10057",
"LABEL_10058",
"LABEL_10059",
"LABEL_1006",
"LABEL_10060",
"LABEL_10061",
"LABEL_10062",
"LABEL_10063",
"LABEL_10064",
"LABEL_10065",
"LABEL_10066",
"LABEL_10067",
"LABEL_10068",
"LABEL_10069",
"LABEL_1007",
"LABEL_10070",
"LABEL_10071",
"LABEL_10072",
"LABEL_10073",
"LABEL_10074",
"LABEL_10075",
"LABEL_10076",
"LABEL_10077",
"LABEL_10078",
"LABEL_10079",
"LABEL_1008",
"LABEL_10080",
"LABEL_10081",
"LABEL_10082",
"LABEL_10083",
"LABEL_10084",
"LABEL_10085",
"LABEL_10086",
"LABEL_10087",
"LABEL_10088",
"LABEL_10089",
"LABEL_1009",
"LABEL_10090",
"LABEL_10091",
"LABEL_10092",
"LABEL_10093",
"LABEL_10094",
"LABEL_10095",
"LABEL_10096",
"LABEL_10097",
"LABEL_10098",
"LABEL_10099",
"LABEL_101",
"LABEL_1010",
"LABEL_10100",
"LABEL_10101",
"LABEL_10102",
"LABEL_10103",
"LABEL_10104",
"LABEL_10105",
"LABEL_10106",
"LABEL_10107",
"LABEL_10108",
"LABEL_10109",
"LABEL_1011",
"LABEL_10110",
"LABEL_10111",
"LABEL_10112",
"LABEL_10113",
"LABEL_10114",
"LABEL_10115",
"LABEL_10116",
"LABEL_10117",
"LABEL_10118",
"LABEL_10119",
"LABEL_1012",
"LABEL_10120",
"LABEL_10121",
"LABEL_10122",
"LABEL_10123",
"LABEL_10124",
"LABEL_10125",
"LABEL_10126",
"LABEL_10127",
"LABEL_10128",
"LABEL_10129",
"LABEL_1013",
"LABEL_10130",
"LABEL_10131",
"LABEL_10132",
"LABEL_10133",
"LABEL_10134",
"LABEL_10135",
"LABEL_10136",
"LABEL_10137",
"LABEL_10138",
"LABEL_10139",
"LABEL_1014",
"LABEL_10140",
"LABEL_10141",
"LABEL_10142",
"LABEL_10143",
"LABEL_10144",
"LABEL_10145",
"LABEL_10146",
"LABEL_10147",
"LABEL_10148",
"LABEL_10149",
"LABEL_1015",
"LABEL_10150",
"LABEL_10151",
"LABEL_10152",
"LABEL_10153",
"LABEL_10154",
"LABEL_10155",
"LABEL_10156",
"LABEL_10157",
"LABEL_10158",
"LABEL_10159",
"LABEL_1016",
"LABEL_10160",
"LABEL_10161",
"LABEL_10162",
"LABEL_10163",
"LABEL_10164",
"LABEL_10165",
"LABEL_10166",
"LABEL_10167",
"LABEL_10168",
"LABEL_10169",
"LABEL_1017",
"LABEL_10170",
"LABEL_10171",
"LABEL_10172",
"LABEL_10173",
"LABEL_10174",
"LABEL_10175",
"LABEL_10176",
"LABEL_10177",
"LABEL_10178",
"LABEL_10179",
"LABEL_1018",
"LABEL_10180",
"LABEL_10181",
"LABEL_10182",
"LABEL_10183",
"LABEL_10184",
"LABEL_10185",
"LABEL_10186",
"LABEL_10187",
"LABEL_10188",
"LABEL_10189",
"LABEL_1019",
"LABEL_10190",
"LABEL_10191",
"LABEL_10192",
"LABEL_10193",
"LABEL_10194",
"LABEL_10195",
"LABEL_10196",
"LABEL_10197",
"LABEL_10198",
"LABEL_10199",
"LABEL_102",
"LABEL_1020",
"LABEL_10200",
"LABEL_10201",
"LABEL_10202",
"LABEL_10203",
"LABEL_10204",
"LABEL_10205",
"LABEL_10206",
"LABEL_10207",
"LABEL_10208",
"LABEL_10209",
"LABEL_1021",
"LABEL_10210",
"LABEL_10211",
"LABEL_10212",
"LABEL_10213",
"LABEL_10214",
"LABEL_10215",
"LABEL_10216",
"LABEL_10217",
"LABEL_10218",
"LABEL_10219",
"LABEL_1022",
"LABEL_10220",
"LABEL_10221",
"LABEL_10222",
"LABEL_10223",
"LABEL_10224",
"LABEL_10225",
"LABEL_10226",
"LABEL_10227",
"LABEL_10228",
"LABEL_10229",
"LABEL_1023",
"LABEL_10230",
"LABEL_10231",
"LABEL_10232",
"LABEL_10233",
"LABEL_10234",
"LABEL_10235",
"LABEL_10236",
"LABEL_10237",
"LABEL_10238",
"LABEL_10239",
"LABEL_1024",
"LABEL_10240",
"LABEL_10241",
"LABEL_10242",
"LABEL_10243",
"LABEL_10244",
"LABEL_10245",
"LABEL_10246",
"LABEL_10247",
"LABEL_10248",
"LABEL_10249",
"LABEL_1025",
"LABEL_10250",
"LABEL_10251",
"LABEL_10252",
"LABEL_10253",
"LABEL_10254",
"LABEL_10255",
"LABEL_10256",
"LABEL_10257",
"LABEL_10258",
"LABEL_10259",
"LABEL_1026",
"LABEL_10260",
"LABEL_10261",
"LABEL_10262",
"LABEL_10263",
"LABEL_10264",
"LABEL_10265",
"LABEL_10266",
"LABEL_10267",
"LABEL_10268",
"LABEL_10269",
"LABEL_1027",
"LABEL_10270",
"LABEL_10271",
"LABEL_10272",
"LABEL_10273",
"LABEL_10274",
"LABEL_10275",
"LABEL_10276",
"LABEL_10277",
"LABEL_10278",
"LABEL_10279",
"LABEL_1028",
"LABEL_10280",
"LABEL_10281",
"LABEL_10282",
"LABEL_10283",
"LABEL_10284",
"LABEL_10285",
"LABEL_10286",
"LABEL_10287",
"LABEL_10288",
"LABEL_10289",
"LABEL_1029",
"LABEL_10290",
"LABEL_10291",
"LABEL_10292",
"LABEL_10293",
"LABEL_10294",
"LABEL_10295",
"LABEL_10296",
"LABEL_10297",
"LABEL_10298",
"LABEL_10299",
"LABEL_103",
"LABEL_1030",
"LABEL_10300",
"LABEL_10301",
"LABEL_10302",
"LABEL_10303",
"LABEL_10304",
"LABEL_10305",
"LABEL_10306",
"LABEL_10307",
"LABEL_10308",
"LABEL_10309",
"LABEL_1031",
"LABEL_10310",
"LABEL_10311",
"LABEL_10312",
"LABEL_10313",
"LABEL_10314",
"LABEL_10315",
"LABEL_10316",
"LABEL_10317",
"LABEL_10318",
"LABEL_10319",
"LABEL_1032",
"LABEL_10320",
"LABEL_10321",
"LABEL_10322",
"LABEL_10323",
"LABEL_10324",
"LABEL_10325",
"LABEL_10326",
"LABEL_10327",
"LABEL_10328",
"LABEL_10329",
"LABEL_1033",
"LABEL_10330",
"LABEL_10331",
"LABEL_10332",
"LABEL_10333",
"LABEL_10334",
"LABEL_10335",
"LABEL_10336",
"LABEL_10337",
"LABEL_10338",
"LABEL_10339",
"LABEL_1034",
"LABEL_10340",
"LABEL_10341",
"LABEL_10342",
"LABEL_10343",
"LABEL_10344",
"LABEL_10345",
"LABEL_10346",
"LABEL_10347",
"LABEL_10348",
"LABEL_10349",
"LABEL_1035",
"LABEL_10350",
"LABEL_10351",
"LABEL_10352",
"LABEL_10353",
"LABEL_10354",
"LABEL_10355",
"LABEL_10356",
"LABEL_10357",
"LABEL_10358",
"LABEL_10359",
"LABEL_1036",
"LABEL_10360",
"LABEL_10361",
"LABEL_10362",
"LABEL_10363",
"LABEL_10364",
"LABEL_10365",
"LABEL_10366",
"LABEL_10367",
"LABEL_10368",
"LABEL_10369",
"LABEL_1037",
"LABEL_10370",
"LABEL_10371",
"LABEL_10372",
"LABEL_10373",
"LABEL_10374",
"LABEL_10375",
"LABEL_10376",
"LABEL_10377",
"LABEL_10378",
"LABEL_10379",
"LABEL_1038",
"LABEL_10380",
"LABEL_10381",
"LABEL_10382",
"LABEL_10383",
"LABEL_10384",
"LABEL_10385",
"LABEL_10386",
"LABEL_10387",
"LABEL_10388",
"LABEL_10389",
"LABEL_1039",
"LABEL_10390",
"LABEL_10391",
"LABEL_10392",
"LABEL_10393",
"LABEL_10394",
"LABEL_10395",
"LABEL_10396",
"LABEL_10397",
"LABEL_10398",
"LABEL_10399",
"LABEL_104",
"LABEL_1040",
"LABEL_10400",
"LABEL_10401",
"LABEL_10402",
"LABEL_10403",
"LABEL_10404",
"LABEL_10405",
"LABEL_10406",
"LABEL_10407",
"LABEL_10408",
"LABEL_10409",
"LABEL_1041",
"LABEL_10410",
"LABEL_10411",
"LABEL_10412",
"LABEL_10413",
"LABEL_10414",
"LABEL_10415",
"LABEL_10416",
"LABEL_10417",
"LABEL_10418",
"LABEL_10419",
"LABEL_1042",
"LABEL_10420",
"LABEL_10421",
"LABEL_10422",
"LABEL_10423",
"LABEL_10424",
"LABEL_10425",
"LABEL_10426",
"LABEL_10427",
"LABEL_10428",
"LABEL_10429",
"LABEL_1043",
"LABEL_10430",
"LABEL_10431",
"LABEL_10432",
"LABEL_10433",
"LABEL_10434",
"LABEL_10435",
"LABEL_10436",
"LABEL_10437",
"LABEL_10438",
"LABEL_10439",
"LABEL_1044",
"LABEL_10440",
"LABEL_10441",
"LABEL_10442",
"LABEL_10443",
"LABEL_10444",
"LABEL_10445",
"LABEL_10446",
"LABEL_10447",
"LABEL_10448",
"LABEL_10449",
"LABEL_1045",
"LABEL_10450",
"LABEL_10451",
"LABEL_10452",
"LABEL_10453",
"LABEL_10454",
"LABEL_10455",
"LABEL_10456",
"LABEL_10457",
"LABEL_10458",
"LABEL_10459",
"LABEL_1046",
"LABEL_10460",
"LABEL_10461",
"LABEL_10462",
"LABEL_10463",
"LABEL_10464",
"LABEL_10465",
"LABEL_10466",
"LABEL_10467",
"LABEL_10468",
"LABEL_10469",
"LABEL_1047",
"LABEL_10470",
"LABEL_10471",
"LABEL_10472",
"LABEL_10473",
"LABEL_10474",
"LABEL_10475",
"LABEL_10476",
"LABEL_10477",
"LABEL_10478",
"LABEL_10479",
"LABEL_1048",
"LABEL_10480",
"LABEL_10481",
"LABEL_10482",
"LABEL_10483",
"LABEL_10484",
"LABEL_10485",
"LABEL_10486",
"LABEL_10487",
"LABEL_10488",
"LABEL_10489",
"LABEL_1049",
"LABEL_10490",
"LABEL_10491",
"LABEL_10492",
"LABEL_10493",
"LABEL_10494",
"LABEL_10495",
"LABEL_10496",
"LABEL_10497",
"LABEL_10498",
"LABEL_10499",
"LABEL_105",
"LABEL_1050",
"LABEL_10500",
"LABEL_10501",
"LABEL_10502",
"LABEL_10503",
"LABEL_10504",
"LABEL_10505",
"LABEL_10506",
"LABEL_10507",
"LABEL_10508",
"LABEL_10509",
"LABEL_1051",
"LABEL_10510",
"LABEL_10511",
"LABEL_10512",
"LABEL_10513",
"LABEL_10514",
"LABEL_10515",
"LABEL_10516",
"LABEL_10517",
"LABEL_10518",
"LABEL_10519",
"LABEL_1052",
"LABEL_10520",
"LABEL_10521",
"LABEL_10522",
"LABEL_10523",
"LABEL_10524",
"LABEL_10525",
"LABEL_10526",
"LABEL_10527",
"LABEL_10528",
"LABEL_10529",
"LABEL_1053",
"LABEL_10530",
"LABEL_10531",
"LABEL_10532",
"LABEL_10533",
"LABEL_10534",
"LABEL_10535",
"LABEL_10536",
"LABEL_10537",
"LABEL_10538",
"LABEL_10539",
"LABEL_1054",
"LABEL_10540",
"LABEL_10541",
"LABEL_10542",
"LABEL_10543",
"LABEL_10544",
"LABEL_10545",
"LABEL_10546",
"LABEL_10547",
"LABEL_10548",
"LABEL_10549",
"LABEL_1055",
"LABEL_10550",
"LABEL_10551",
"LABEL_10552",
"LABEL_10553",
"LABEL_10554",
"LABEL_10555",
"LABEL_10556",
"LABEL_10557",
"LABEL_10558",
"LABEL_10559",
"LABEL_1056",
"LABEL_10560",
"LABEL_10561",
"LABEL_10562",
"LABEL_10563",
"LABEL_10564",
"LABEL_10565",
"LABEL_10566",
"LABEL_10567",
"LABEL_10568",
"LABEL_10569",
"LABEL_1057",
"LABEL_10570",
"LABEL_10571",
"LABEL_10572",
"LABEL_10573",
"LABEL_10574",
"LABEL_10575",
"LABEL_10576",
"LABEL_10577",
"LABEL_10578",
"LABEL_10579",
"LABEL_1058",
"LABEL_10580",
"LABEL_10581",
"LABEL_10582",
"LABEL_10583",
"LABEL_10584",
"LABEL_10585",
"LABEL_10586",
"LABEL_10587",
"LABEL_10588",
"LABEL_10589",
"LABEL_1059",
"LABEL_10590",
"LABEL_10591",
"LABEL_10592",
"LABEL_10593",
"LABEL_10594",
"LABEL_10595",
"LABEL_10596",
"LABEL_10597",
"LABEL_10598",
"LABEL_10599",
"LABEL_106",
"LABEL_1060",
"LABEL_10600",
"LABEL_10601",
"LABEL_10602",
"LABEL_10603",
"LABEL_10604",
"LABEL_10605",
"LABEL_10606",
"LABEL_10607",
"LABEL_10608",
"LABEL_10609",
"LABEL_1061",
"LABEL_10610",
"LABEL_10611",
"LABEL_10612",
"LABEL_10613",
"LABEL_10614",
"LABEL_10615",
"LABEL_10616",
"LABEL_10617",
"LABEL_10618",
"LABEL_10619",
"LABEL_1062",
"LABEL_10620",
"LABEL_10621",
"LABEL_10622",
"LABEL_10623",
"LABEL_10624",
"LABEL_10625",
"LABEL_10626",
"LABEL_10627",
"LABEL_10628",
"LABEL_10629",
"LABEL_1063",
"LABEL_10630",
"LABEL_10631",
"LABEL_10632",
"LABEL_10633",
"LABEL_10634",
"LABEL_10635",
"LABEL_10636",
"LABEL_10637",
"LABEL_10638",
"LABEL_10639",
"LABEL_1064",
"LABEL_10640",
"LABEL_10641",
"LABEL_10642",
"LABEL_10643",
"LABEL_10644",
"LABEL_10645",
"LABEL_10646",
"LABEL_10647",
"LABEL_10648",
"LABEL_10649",
"LABEL_1065",
"LABEL_10650",
"LABEL_10651",
"LABEL_10652",
"LABEL_10653",
"LABEL_10654",
"LABEL_10655",
"LABEL_10656",
"LABEL_10657",
"LABEL_10658",
"LABEL_10659",
"LABEL_1066",
"LABEL_10660",
"LABEL_10661",
"LABEL_10662",
"LABEL_10663",
"LABEL_10664",
"LABEL_10665",
"LABEL_10666",
"LABEL_10667",
"LABEL_10668",
"LABEL_10669",
"LABEL_1067",
"LABEL_10670",
"LABEL_10671",
"LABEL_10672",
"LABEL_10673",
"LABEL_10674",
"LABEL_10675",
"LABEL_10676",
"LABEL_10677",
"LABEL_10678",
"LABEL_10679",
"LABEL_1068",
"LABEL_10680",
"LABEL_10681",
"LABEL_10682",
"LABEL_10683",
"LABEL_10684",
"LABEL_10685",
"LABEL_10686",
"LABEL_10687",
"LABEL_10688",
"LABEL_10689",
"LABEL_1069",
"LABEL_10690",
"LABEL_10691",
"LABEL_10692",
"LABEL_10693",
"LABEL_10694",
"LABEL_10695",
"LABEL_10696",
"LABEL_10697",
"LABEL_10698",
"LABEL_10699",
"LABEL_107",
"LABEL_1070",
"LABEL_10700",
"LABEL_10701",
"LABEL_10702",
"LABEL_10703",
"LABEL_10704",
"LABEL_10705",
"LABEL_10706",
"LABEL_10707",
"LABEL_10708",
"LABEL_10709",
"LABEL_1071",
"LABEL_10710",
"LABEL_10711",
"LABEL_10712",
"LABEL_10713",
"LABEL_10714",
"LABEL_10715",
"LABEL_10716",
"LABEL_10717",
"LABEL_10718",
"LABEL_10719",
"LABEL_1072",
"LABEL_10720",
"LABEL_10721",
"LABEL_10722",
"LABEL_10723",
"LABEL_10724",
"LABEL_10725",
"LABEL_10726",
"LABEL_10727",
"LABEL_10728",
"LABEL_10729",
"LABEL_1073",
"LABEL_10730",
"LABEL_10731",
"LABEL_10732",
"LABEL_10733",
"LABEL_10734",
"LABEL_10735",
"LABEL_10736",
"LABEL_10737",
"LABEL_10738",
"LABEL_10739",
"LABEL_1074",
"LABEL_10740",
"LABEL_10741",
"LABEL_10742",
"LABEL_10743",
"LABEL_10744",
"LABEL_10745",
"LABEL_10746",
"LABEL_10747",
"LABEL_10748",
"LABEL_10749",
"LABEL_1075",
"LABEL_10750",
"LABEL_10751",
"LABEL_10752",
"LABEL_10753",
"LABEL_10754",
"LABEL_10755",
"LABEL_10756",
"LABEL_10757",
"LABEL_10758",
"LABEL_10759",
"LABEL_1076",
"LABEL_10760",
"LABEL_10761",
"LABEL_10762",
"LABEL_10763",
"LABEL_10764",
"LABEL_10765",
"LABEL_10766",
"LABEL_10767",
"LABEL_10768",
"LABEL_10769",
"LABEL_1077",
"LABEL_10770",
"LABEL_10771",
"LABEL_10772",
"LABEL_10773",
"LABEL_10774",
"LABEL_10775",
"LABEL_10776",
"LABEL_10777",
"LABEL_10778",
"LABEL_10779",
"LABEL_1078",
"LABEL_10780",
"LABEL_10781",
"LABEL_10782",
"LABEL_10783",
"LABEL_10784",
"LABEL_10785",
"LABEL_10786",
"LABEL_10787",
"LABEL_10788",
"LABEL_10789",
"LABEL_1079",
"LABEL_10790",
"LABEL_10791",
"LABEL_10792",
"LABEL_10793",
"LABEL_10794",
"LABEL_10795",
"LABEL_10796",
"LABEL_10797",
"LABEL_10798",
"LABEL_10799",
"LABEL_108",
"LABEL_1080",
"LABEL_10800",
"LABEL_10801",
"LABEL_10802",
"LABEL_10803",
"LABEL_10804",
"LABEL_10805",
"LABEL_10806",
"LABEL_10807",
"LABEL_10808",
"LABEL_10809",
"LABEL_1081",
"LABEL_10810",
"LABEL_10811",
"LABEL_10812",
"LABEL_10813",
"LABEL_10814",
"LABEL_10815",
"LABEL_10816",
"LABEL_10817",
"LABEL_10818",
"LABEL_10819",
"LABEL_1082",
"LABEL_10820",
"LABEL_10821",
"LABEL_10822",
"LABEL_10823",
"LABEL_10824",
"LABEL_10825",
"LABEL_10826",
"LABEL_10827",
"LABEL_10828",
"LABEL_10829",
"LABEL_1083",
"LABEL_10830",
"LABEL_10831",
"LABEL_10832",
"LABEL_10833",
"LABEL_10834",
"LABEL_10835",
"LABEL_10836",
"LABEL_10837",
"LABEL_10838",
"LABEL_10839",
"LABEL_1084",
"LABEL_10840",
"LABEL_10841",
"LABEL_10842",
"LABEL_10843",
"LABEL_10844",
"LABEL_10845",
"LABEL_10846",
"LABEL_10847",
"LABEL_10848",
"LABEL_10849",
"LABEL_1085",
"LABEL_10850",
"LABEL_10851",
"LABEL_10852",
"LABEL_10853",
"LABEL_10854",
"LABEL_10855",
"LABEL_10856",
"LABEL_10857",
"LABEL_10858",
"LABEL_10859",
"LABEL_1086",
"LABEL_10860",
"LABEL_10861",
"LABEL_10862",
"LABEL_10863",
"LABEL_10864",
"LABEL_10865",
"LABEL_10866",
"LABEL_10867",
"LABEL_10868",
"LABEL_10869",
"LABEL_1087",
"LABEL_10870",
"LABEL_10871",
"LABEL_10872",
"LABEL_10873",
"LABEL_10874",
"LABEL_10875",
"LABEL_10876",
"LABEL_10877",
"LABEL_10878",
"LABEL_10879",
"LABEL_1088",
"LABEL_10880",
"LABEL_10881",
"LABEL_10882",
"LABEL_10883",
"LABEL_10884",
"LABEL_10885",
"LABEL_10886",
"LABEL_10887",
"LABEL_10888",
"LABEL_10889",
"LABEL_1089",
"LABEL_10890",
"LABEL_10891",
"LABEL_10892",
"LABEL_10893",
"LABEL_10894",
"LABEL_10895",
"LABEL_10896",
"LABEL_10897",
"LABEL_10898",
"LABEL_10899",
"LABEL_109",
"LABEL_1090",
"LABEL_10900",
"LABEL_10901",
"LABEL_10902",
"LABEL_10903",
"LABEL_10904",
"LABEL_10905",
"LABEL_10906",
"LABEL_10907",
"LABEL_10908",
"LABEL_10909",
"LABEL_1091",
"LABEL_10910",
"LABEL_10911",
"LABEL_10912",
"LABEL_10913",
"LABEL_10914",
"LABEL_10915",
"LABEL_10916",
"LABEL_10917",
"LABEL_10918",
"LABEL_10919",
"LABEL_1092",
"LABEL_10920",
"LABEL_10921",
"LABEL_10922",
"LABEL_10923",
"LABEL_10924",
"LABEL_10925",
"LABEL_10926",
"LABEL_10927",
"LABEL_10928",
"LABEL_10929",
"LABEL_1093",
"LABEL_10930",
"LABEL_10931",
"LABEL_10932",
"LABEL_10933",
"LABEL_10934",
"LABEL_10935",
"LABEL_10936",
"LABEL_10937",
"LABEL_10938",
"LABEL_10939",
"LABEL_1094",
"LABEL_10940",
"LABEL_10941",
"LABEL_10942",
"LABEL_10943",
"LABEL_10944",
"LABEL_10945",
"LABEL_10946",
"LABEL_10947",
"LABEL_10948",
"LABEL_10949",
"LABEL_1095",
"LABEL_10950",
"LABEL_10951",
"LABEL_10952",
"LABEL_10953",
"LABEL_10954",
"LABEL_10955",
"LABEL_10956",
"LABEL_10957",
"LABEL_10958",
"LABEL_10959",
"LABEL_1096",
"LABEL_10960",
"LABEL_10961",
"LABEL_10962",
"LABEL_10963",
"LABEL_10964",
"LABEL_10965",
"LABEL_10966",
"LABEL_10967",
"LABEL_10968",
"LABEL_10969",
"LABEL_1097",
"LABEL_10970",
"LABEL_10971",
"LABEL_10972",
"LABEL_10973",
"LABEL_10974",
"LABEL_10975",
"LABEL_10976",
"LABEL_10977",
"LABEL_10978",
"LABEL_10979",
"LABEL_1098",
"LABEL_10980",
"LABEL_10981",
"LABEL_10982",
"LABEL_10983",
"LABEL_10984",
"LABEL_10985",
"LABEL_10986",
"LABEL_10987",
"LABEL_10988",
"LABEL_10989",
"LABEL_1099",
"LABEL_10990",
"LABEL_10991",
"LABEL_10992",
"LABEL_10993",
"LABEL_10994",
"LABEL_10995",
"LABEL_10996",
"LABEL_10997",
"LABEL_10998",
"LABEL_10999",
"LABEL_11",
"LABEL_110",
"LABEL_1100",
"LABEL_11000",
"LABEL_11001",
"LABEL_11002",
"LABEL_11003",
"LABEL_11004",
"LABEL_11005",
"LABEL_11006",
"LABEL_11007",
"LABEL_11008",
"LABEL_11009",
"LABEL_1101",
"LABEL_11010",
"LABEL_11011",
"LABEL_11012",
"LABEL_11013",
"LABEL_11014",
"LABEL_11015",
"LABEL_11016",
"LABEL_11017",
"LABEL_11018",
"LABEL_11019",
"LABEL_1102",
"LABEL_11020",
"LABEL_11021",
"LABEL_11022",
"LABEL_11023",
"LABEL_11024",
"LABEL_11025",
"LABEL_11026",
"LABEL_11027",
"LABEL_11028",
"LABEL_11029",
"LABEL_1103",
"LABEL_11030",
"LABEL_11031",
"LABEL_11032",
"LABEL_11033",
"LABEL_11034",
"LABEL_11035",
"LABEL_11036",
"LABEL_11037",
"LABEL_11038",
"LABEL_11039",
"LABEL_1104",
"LABEL_11040",
"LABEL_11041",
"LABEL_11042",
"LABEL_11043",
"LABEL_11044",
"LABEL_11045",
"LABEL_11046",
"LABEL_11047",
"LABEL_11048",
"LABEL_11049",
"LABEL_1105",
"LABEL_11050",
"LABEL_11051",
"LABEL_11052",
"LABEL_11053",
"LABEL_11054",
"LABEL_11055",
"LABEL_11056",
"LABEL_11057",
"LABEL_11058",
"LABEL_11059",
"LABEL_1106",
"LABEL_11060",
"LABEL_11061",
"LABEL_11062",
"LABEL_11063",
"LABEL_11064",
"LABEL_11065",
"LABEL_11066",
"LABEL_11067",
"LABEL_11068",
"LABEL_11069",
"LABEL_1107",
"LABEL_11070",
"LABEL_11071",
"LABEL_11072",
"LABEL_11073",
"LABEL_11074",
"LABEL_11075",
"LABEL_11076",
"LABEL_11077",
"LABEL_11078",
"LABEL_11079",
"LABEL_1108",
"LABEL_11080",
"LABEL_11081",
"LABEL_11082",
"LABEL_11083",
"LABEL_11084",
"LABEL_11085",
"LABEL_11086",
"LABEL_11087",
"LABEL_11088",
"LABEL_11089",
"LABEL_1109",
"LABEL_11090",
"LABEL_11091",
"LABEL_11092",
"LABEL_11093",
"LABEL_11094",
"LABEL_11095",
"LABEL_11096",
"LABEL_11097",
"LABEL_11098",
"LABEL_11099",
"LABEL_111",
"LABEL_1110",
"LABEL_11100",
"LABEL_11101",
"LABEL_11102",
"LABEL_11103",
"LABEL_11104",
"LABEL_11105",
"LABEL_11106",
"LABEL_11107",
"LABEL_11108",
"LABEL_11109",
"LABEL_1111",
"LABEL_11110",
"LABEL_11111",
"LABEL_11112",
"LABEL_11113",
"LABEL_11114",
"LABEL_11115",
"LABEL_11116",
"LABEL_11117",
"LABEL_11118",
"LABEL_11119",
"LABEL_1112",
"LABEL_11120",
"LABEL_11121",
"LABEL_11122",
"LABEL_11123",
"LABEL_11124",
"LABEL_11125",
"LABEL_11126",
"LABEL_11127",
"LABEL_11128",
"LABEL_11129",
"LABEL_1113",
"LABEL_11130",
"LABEL_11131",
"LABEL_11132",
"LABEL_11133",
"LABEL_11134",
"LABEL_11135",
"LABEL_11136",
"LABEL_11137",
"LABEL_11138",
"LABEL_11139",
"LABEL_1114",
"LABEL_11140",
"LABEL_11141",
"LABEL_11142",
"LABEL_11143",
"LABEL_11144",
"LABEL_11145",
"LABEL_11146",
"LABEL_11147",
"LABEL_11148",
"LABEL_11149",
"LABEL_1115",
"LABEL_11150",
"LABEL_11151",
"LABEL_11152",
"LABEL_11153",
"LABEL_11154",
"LABEL_11155",
"LABEL_11156",
"LABEL_11157",
"LABEL_11158",
"LABEL_11159",
"LABEL_1116",
"LABEL_11160",
"LABEL_11161",
"LABEL_11162",
"LABEL_11163",
"LABEL_11164",
"LABEL_11165",
"LABEL_11166",
"LABEL_11167",
"LABEL_11168",
"LABEL_11169",
"LABEL_1117",
"LABEL_11170",
"LABEL_11171",
"LABEL_11172",
"LABEL_11173",
"LABEL_11174",
"LABEL_11175",
"LABEL_11176",
"LABEL_11177",
"LABEL_11178",
"LABEL_11179",
"LABEL_1118",
"LABEL_11180",
"LABEL_11181",
"LABEL_11182",
"LABEL_11183",
"LABEL_11184",
"LABEL_11185",
"LABEL_11186",
"LABEL_11187",
"LABEL_11188",
"LABEL_11189",
"LABEL_1119",
"LABEL_11190",
"LABEL_11191",
"LABEL_11192",
"LABEL_11193",
"LABEL_11194",
"LABEL_11195",
"LABEL_11196",
"LABEL_11197",
"LABEL_11198",
"LABEL_11199",
"LABEL_112",
"LABEL_1120",
"LABEL_11200",
"LABEL_11201",
"LABEL_11202",
"LABEL_11203",
"LABEL_11204",
"LABEL_11205",
"LABEL_11206",
"LABEL_11207",
"LABEL_11208",
"LABEL_11209",
"LABEL_1121",
"LABEL_11210",
"LABEL_11211",
"LABEL_11212",
"LABEL_11213",
"LABEL_11214",
"LABEL_11215",
"LABEL_11216",
"LABEL_11217",
"LABEL_11218",
"LABEL_11219",
"LABEL_1122",
"LABEL_11220",
"LABEL_11221",
"LABEL_11222",
"LABEL_11223",
"LABEL_11224",
"LABEL_11225",
"LABEL_11226",
"LABEL_11227",
"LABEL_11228",
"LABEL_11229",
"LABEL_1123",
"LABEL_11230",
"LABEL_11231",
"LABEL_11232",
"LABEL_11233",
"LABEL_11234",
"LABEL_11235",
"LABEL_11236",
"LABEL_11237",
"LABEL_11238",
"LABEL_11239",
"LABEL_1124",
"LABEL_11240",
"LABEL_11241",
"LABEL_11242",
"LABEL_11243",
"LABEL_11244",
"LABEL_11245",
"LABEL_11246",
"LABEL_11247",
"LABEL_11248",
"LABEL_11249",
"LABEL_1125",
"LABEL_11250",
"LABEL_11251",
"LABEL_11252",
"LABEL_11253",
"LABEL_11254",
"LABEL_11255",
"LABEL_11256",
"LABEL_11257",
"LABEL_11258",
"LABEL_11259",
"LABEL_1126",
"LABEL_11260",
"LABEL_11261",
"LABEL_11262",
"LABEL_11263",
"LABEL_11264",
"LABEL_11265",
"LABEL_11266",
"LABEL_11267",
"LABEL_11268",
"LABEL_11269",
"LABEL_1127",
"LABEL_11270",
"LABEL_11271",
"LABEL_11272",
"LABEL_11273",
"LABEL_11274",
"LABEL_11275",
"LABEL_11276",
"LABEL_11277",
"LABEL_11278",
"LABEL_11279",
"LABEL_1128",
"LABEL_11280",
"LABEL_11281",
"LABEL_11282",
"LABEL_11283",
"LABEL_11284",
"LABEL_11285",
"LABEL_11286",
"LABEL_11287",
"LABEL_11288",
"LABEL_11289",
"LABEL_1129",
"LABEL_11290",
"LABEL_11291",
"LABEL_11292",
"LABEL_11293",
"LABEL_11294",
"LABEL_11295",
"LABEL_11296",
"LABEL_11297",
"LABEL_11298",
"LABEL_11299",
"LABEL_113",
"LABEL_1130",
"LABEL_11300",
"LABEL_11301",
"LABEL_11302",
"LABEL_11303",
"LABEL_11304",
"LABEL_11305",
"LABEL_11306",
"LABEL_11307",
"LABEL_11308",
"LABEL_11309",
"LABEL_1131",
"LABEL_11310",
"LABEL_11311",
"LABEL_11312",
"LABEL_11313",
"LABEL_11314",
"LABEL_11315",
"LABEL_11316",
"LABEL_11317",
"LABEL_11318",
"LABEL_11319",
"LABEL_1132",
"LABEL_11320",
"LABEL_11321",
"LABEL_11322",
"LABEL_11323",
"LABEL_11324",
"LABEL_11325",
"LABEL_11326",
"LABEL_11327",
"LABEL_11328",
"LABEL_11329",
"LABEL_1133",
"LABEL_11330",
"LABEL_11331",
"LABEL_11332",
"LABEL_11333",
"LABEL_11334",
"LABEL_11335",
"LABEL_11336",
"LABEL_11337",
"LABEL_11338",
"LABEL_11339",
"LABEL_1134",
"LABEL_11340",
"LABEL_11341",
"LABEL_11342",
"LABEL_11343",
"LABEL_11344",
"LABEL_11345",
"LABEL_11346",
"LABEL_11347",
"LABEL_11348",
"LABEL_11349",
"LABEL_1135",
"LABEL_11350",
"LABEL_11351",
"LABEL_11352",
"LABEL_11353",
"LABEL_11354",
"LABEL_11355",
"LABEL_11356",
"LABEL_11357",
"LABEL_11358",
"LABEL_11359",
"LABEL_1136",
"LABEL_11360",
"LABEL_11361",
"LABEL_11362",
"LABEL_11363",
"LABEL_11364",
"LABEL_11365",
"LABEL_11366",
"LABEL_11367",
"LABEL_11368",
"LABEL_11369",
"LABEL_1137",
"LABEL_11370",
"LABEL_11371",
"LABEL_11372",
"LABEL_11373",
"LABEL_11374",
"LABEL_11375",
"LABEL_11376",
"LABEL_11377",
"LABEL_11378",
"LABEL_11379",
"LABEL_1138",
"LABEL_11380",
"LABEL_11381",
"LABEL_11382",
"LABEL_11383",
"LABEL_11384",
"LABEL_11385",
"LABEL_11386",
"LABEL_11387",
"LABEL_11388",
"LABEL_11389",
"LABEL_1139",
"LABEL_11390",
"LABEL_11391",
"LABEL_11392",
"LABEL_11393",
"LABEL_11394",
"LABEL_11395",
"LABEL_11396",
"LABEL_11397",
"LABEL_11398",
"LABEL_11399",
"LABEL_114",
"LABEL_1140",
"LABEL_11400",
"LABEL_11401",
"LABEL_11402",
"LABEL_11403",
"LABEL_11404",
"LABEL_11405",
"LABEL_11406",
"LABEL_11407",
"LABEL_11408",
"LABEL_11409",
"LABEL_1141",
"LABEL_11410",
"LABEL_11411",
"LABEL_11412",
"LABEL_11413",
"LABEL_11414",
"LABEL_11415",
"LABEL_11416",
"LABEL_11417",
"LABEL_11418",
"LABEL_11419",
"LABEL_1142",
"LABEL_11420",
"LABEL_11421",
"LABEL_11422",
"LABEL_11423",
"LABEL_11424",
"LABEL_11425",
"LABEL_11426",
"LABEL_11427",
"LABEL_11428",
"LABEL_11429",
"LABEL_1143",
"LABEL_11430",
"LABEL_11431",
"LABEL_11432",
"LABEL_11433",
"LABEL_11434",
"LABEL_11435",
"LABEL_11436",
"LABEL_11437",
"LABEL_11438",
"LABEL_11439",
"LABEL_1144",
"LABEL_11440",
"LABEL_11441",
"LABEL_11442",
"LABEL_11443",
"LABEL_11444",
"LABEL_11445",
"LABEL_11446",
"LABEL_11447",
"LABEL_11448",
"LABEL_11449",
"LABEL_1145",
"LABEL_11450",
"LABEL_11451",
"LABEL_11452",
"LABEL_11453",
"LABEL_11454",
"LABEL_11455",
"LABEL_11456",
"LABEL_11457",
"LABEL_11458",
"LABEL_11459",
"LABEL_1146",
"LABEL_11460",
"LABEL_11461",
"LABEL_11462",
"LABEL_11463",
"LABEL_11464",
"LABEL_11465",
"LABEL_11466",
"LABEL_11467",
"LABEL_11468",
"LABEL_11469",
"LABEL_1147",
"LABEL_11470",
"LABEL_11471",
"LABEL_11472",
"LABEL_11473",
"LABEL_11474",
"LABEL_11475",
"LABEL_11476",
"LABEL_11477",
"LABEL_11478",
"LABEL_11479",
"LABEL_1148",
"LABEL_11480",
"LABEL_11481",
"LABEL_11482",
"LABEL_11483",
"LABEL_11484",
"LABEL_11485",
"LABEL_11486",
"LABEL_11487",
"LABEL_11488",
"LABEL_11489",
"LABEL_1149",
"LABEL_11490",
"LABEL_11491",
"LABEL_11492",
"LABEL_11493",
"LABEL_11494",
"LABEL_11495",
"LABEL_11496",
"LABEL_11497",
"LABEL_11498",
"LABEL_11499",
"LABEL_115",
"LABEL_1150",
"LABEL_11500",
"LABEL_11501",
"LABEL_11502",
"LABEL_11503",
"LABEL_11504",
"LABEL_11505",
"LABEL_11506",
"LABEL_11507",
"LABEL_11508",
"LABEL_11509",
"LABEL_1151",
"LABEL_11510",
"LABEL_11511",
"LABEL_11512",
"LABEL_11513",
"LABEL_11514",
"LABEL_11515",
"LABEL_11516",
"LABEL_11517",
"LABEL_11518",
"LABEL_11519",
"LABEL_1152",
"LABEL_11520",
"LABEL_11521",
"LABEL_11522",
"LABEL_11523",
"LABEL_11524",
"LABEL_11525",
"LABEL_11526",
"LABEL_11527",
"LABEL_11528",
"LABEL_11529",
"LABEL_1153",
"LABEL_11530",
"LABEL_11531",
"LABEL_11532",
"LABEL_11533",
"LABEL_11534",
"LABEL_11535",
"LABEL_11536",
"LABEL_11537",
"LABEL_11538",
"LABEL_11539",
"LABEL_1154",
"LABEL_11540",
"LABEL_11541",
"LABEL_11542",
"LABEL_11543",
"LABEL_11544",
"LABEL_11545",
"LABEL_11546",
"LABEL_11547",
"LABEL_11548",
"LABEL_11549",
"LABEL_1155",
"LABEL_11550",
"LABEL_11551",
"LABEL_11552",
"LABEL_11553",
"LABEL_11554",
"LABEL_11555",
"LABEL_11556",
"LABEL_11557",
"LABEL_11558",
"LABEL_11559",
"LABEL_1156",
"LABEL_11560",
"LABEL_11561",
"LABEL_11562",
"LABEL_11563",
"LABEL_11564",
"LABEL_11565",
"LABEL_11566",
"LABEL_11567",
"LABEL_11568",
"LABEL_11569",
"LABEL_1157",
"LABEL_11570",
"LABEL_11571",
"LABEL_11572",
"LABEL_11573",
"LABEL_11574",
"LABEL_11575",
"LABEL_11576",
"LABEL_11577",
"LABEL_11578",
"LABEL_11579",
"LABEL_1158",
"LABEL_11580",
"LABEL_11581",
"LABEL_11582",
"LABEL_11583",
"LABEL_11584",
"LABEL_11585",
"LABEL_11586",
"LABEL_11587",
"LABEL_11588",
"LABEL_11589",
"LABEL_1159",
"LABEL_11590",
"LABEL_11591",
"LABEL_11592",
"LABEL_11593",
"LABEL_11594",
"LABEL_11595",
"LABEL_11596",
"LABEL_11597",
"LABEL_11598",
"LABEL_11599",
"LABEL_116",
"LABEL_1160",
"LABEL_11600",
"LABEL_11601",
"LABEL_11602",
"LABEL_11603",
"LABEL_11604",
"LABEL_11605",
"LABEL_11606",
"LABEL_11607",
"LABEL_11608",
"LABEL_11609",
"LABEL_1161",
"LABEL_11610",
"LABEL_11611",
"LABEL_11612",
"LABEL_11613",
"LABEL_11614",
"LABEL_11615",
"LABEL_11616",
"LABEL_11617",
"LABEL_11618",
"LABEL_11619",
"LABEL_1162",
"LABEL_11620",
"LABEL_11621",
"LABEL_11622",
"LABEL_11623",
"LABEL_11624",
"LABEL_11625",
"LABEL_11626",
"LABEL_11627",
"LABEL_11628",
"LABEL_11629",
"LABEL_1163",
"LABEL_11630",
"LABEL_11631",
"LABEL_11632",
"LABEL_11633",
"LABEL_11634",
"LABEL_11635",
"LABEL_11636",
"LABEL_11637",
"LABEL_11638",
"LABEL_11639",
"LABEL_1164",
"LABEL_11640",
"LABEL_11641",
"LABEL_11642",
"LABEL_11643",
"LABEL_11644",
"LABEL_11645",
"LABEL_11646",
"LABEL_11647",
"LABEL_11648",
"LABEL_11649",
"LABEL_1165",
"LABEL_11650",
"LABEL_11651",
"LABEL_11652",
"LABEL_11653",
"LABEL_11654",
"LABEL_11655",
"LABEL_11656",
"LABEL_11657",
"LABEL_11658",
"LABEL_11659",
"LABEL_1166",
"LABEL_11660",
"LABEL_11661",
"LABEL_11662",
"LABEL_11663",
"LABEL_11664",
"LABEL_11665",
"LABEL_11666",
"LABEL_11667",
"LABEL_11668",
"LABEL_11669",
"LABEL_1167",
"LABEL_11670",
"LABEL_11671",
"LABEL_11672",
"LABEL_11673",
"LABEL_11674",
"LABEL_11675",
"LABEL_11676",
"LABEL_11677",
"LABEL_11678",
"LABEL_11679",
"LABEL_1168",
"LABEL_11680",
"LABEL_11681",
"LABEL_11682",
"LABEL_11683",
"LABEL_11684",
"LABEL_11685",
"LABEL_11686",
"LABEL_11687",
"LABEL_11688",
"LABEL_11689",
"LABEL_1169",
"LABEL_11690",
"LABEL_11691",
"LABEL_11692",
"LABEL_11693",
"LABEL_11694",
"LABEL_11695",
"LABEL_11696",
"LABEL_11697",
"LABEL_11698",
"LABEL_11699",
"LABEL_117",
"LABEL_1170",
"LABEL_11700",
"LABEL_11701",
"LABEL_11702",
"LABEL_11703",
"LABEL_11704",
"LABEL_11705",
"LABEL_11706",
"LABEL_11707",
"LABEL_11708",
"LABEL_11709",
"LABEL_1171",
"LABEL_11710",
"LABEL_11711",
"LABEL_11712",
"LABEL_11713",
"LABEL_11714",
"LABEL_11715",
"LABEL_11716",
"LABEL_11717",
"LABEL_11718",
"LABEL_11719",
"LABEL_1172",
"LABEL_11720",
"LABEL_11721",
"LABEL_11722",
"LABEL_11723",
"LABEL_11724",
"LABEL_11725",
"LABEL_11726",
"LABEL_11727",
"LABEL_11728",
"LABEL_11729",
"LABEL_1173",
"LABEL_11730",
"LABEL_11731",
"LABEL_11732",
"LABEL_11733",
"LABEL_11734",
"LABEL_11735",
"LABEL_11736",
"LABEL_11737",
"LABEL_11738",
"LABEL_11739",
"LABEL_1174",
"LABEL_11740",
"LABEL_11741",
"LABEL_11742",
"LABEL_11743",
"LABEL_11744",
"LABEL_11745",
"LABEL_11746",
"LABEL_11747",
"LABEL_11748",
"LABEL_11749",
"LABEL_1175",
"LABEL_11750",
"LABEL_11751",
"LABEL_11752",
"LABEL_11753",
"LABEL_11754",
"LABEL_11755",
"LABEL_11756",
"LABEL_11757",
"LABEL_11758",
"LABEL_11759",
"LABEL_1176",
"LABEL_11760",
"LABEL_11761",
"LABEL_11762",
"LABEL_11763",
"LABEL_11764",
"LABEL_11765",
"LABEL_11766",
"LABEL_11767",
"LABEL_11768",
"LABEL_11769",
"LABEL_1177",
"LABEL_11770",
"LABEL_11771",
"LABEL_11772",
"LABEL_11773",
"LABEL_11774",
"LABEL_11775",
"LABEL_11776",
"LABEL_11777",
"LABEL_11778",
"LABEL_11779",
"LABEL_1178",
"LABEL_11780",
"LABEL_11781",
"LABEL_11782",
"LABEL_11783",
"LABEL_11784",
"LABEL_11785",
"LABEL_11786",
"LABEL_11787",
"LABEL_11788",
"LABEL_11789",
"LABEL_1179",
"LABEL_11790",
"LABEL_11791",
"LABEL_11792",
"LABEL_11793",
"LABEL_11794",
"LABEL_11795",
"LABEL_11796",
"LABEL_11797",
"LABEL_11798",
"LABEL_11799",
"LABEL_118",
"LABEL_1180",
"LABEL_11800",
"LABEL_11801",
"LABEL_11802",
"LABEL_11803",
"LABEL_11804",
"LABEL_11805",
"LABEL_11806",
"LABEL_11807",
"LABEL_11808",
"LABEL_11809",
"LABEL_1181",
"LABEL_11810",
"LABEL_11811",
"LABEL_11812",
"LABEL_11813",
"LABEL_11814",
"LABEL_11815",
"LABEL_11816",
"LABEL_11817",
"LABEL_11818",
"LABEL_11819",
"LABEL_1182",
"LABEL_11820",
"LABEL_11821",
"LABEL_11822",
"LABEL_11823",
"LABEL_11824",
"LABEL_11825",
"LABEL_11826",
"LABEL_11827",
"LABEL_11828",
"LABEL_11829",
"LABEL_1183",
"LABEL_11830",
"LABEL_11831",
"LABEL_11832",
"LABEL_11833",
"LABEL_11834",
"LABEL_11835",
"LABEL_11836",
"LABEL_11837",
"LABEL_11838",
"LABEL_11839",
"LABEL_1184",
"LABEL_11840",
"LABEL_11841",
"LABEL_11842",
"LABEL_11843",
"LABEL_11844",
"LABEL_11845",
"LABEL_11846",
"LABEL_11847",
"LABEL_11848",
"LABEL_11849",
"LABEL_1185",
"LABEL_11850",
"LABEL_11851",
"LABEL_11852",
"LABEL_11853",
"LABEL_11854",
"LABEL_11855",
"LABEL_11856",
"LABEL_11857",
"LABEL_11858",
"LABEL_11859",
"LABEL_1186",
"LABEL_11860",
"LABEL_11861",
"LABEL_11862",
"LABEL_11863",
"LABEL_11864",
"LABEL_11865",
"LABEL_11866",
"LABEL_11867",
"LABEL_11868",
"LABEL_11869",
"LABEL_1187",
"LABEL_11870",
"LABEL_11871",
"LABEL_11872",
"LABEL_11873",
"LABEL_11874",
"LABEL_11875",
"LABEL_11876",
"LABEL_11877",
"LABEL_11878",
"LABEL_11879",
"LABEL_1188",
"LABEL_11880",
"LABEL_11881",
"LABEL_11882",
"LABEL_11883",
"LABEL_11884",
"LABEL_11885",
"LABEL_11886",
"LABEL_11887",
"LABEL_11888",
"LABEL_11889",
"LABEL_1189",
"LABEL_11890",
"LABEL_11891",
"LABEL_11892",
"LABEL_11893",
"LABEL_11894",
"LABEL_11895",
"LABEL_11896",
"LABEL_11897",
"LABEL_11898",
"LABEL_11899",
"LABEL_119",
"LABEL_1190",
"LABEL_11900",
"LABEL_11901",
"LABEL_11902",
"LABEL_11903",
"LABEL_11904",
"LABEL_11905",
"LABEL_11906",
"LABEL_11907",
"LABEL_11908",
"LABEL_11909",
"LABEL_1191",
"LABEL_11910",
"LABEL_11911",
"LABEL_11912",
"LABEL_11913",
"LABEL_11914",
"LABEL_11915",
"LABEL_11916",
"LABEL_11917",
"LABEL_11918",
"LABEL_11919",
"LABEL_1192",
"LABEL_11920",
"LABEL_11921",
"LABEL_11922",
"LABEL_11923",
"LABEL_11924",
"LABEL_11925",
"LABEL_11926",
"LABEL_11927",
"LABEL_11928",
"LABEL_11929",
"LABEL_1193",
"LABEL_11930",
"LABEL_11931",
"LABEL_11932",
"LABEL_11933",
"LABEL_11934",
"LABEL_11935",
"LABEL_11936",
"LABEL_11937",
"LABEL_11938",
"LABEL_11939",
"LABEL_1194",
"LABEL_11940",
"LABEL_11941",
"LABEL_11942",
"LABEL_11943",
"LABEL_11944",
"LABEL_11945",
"LABEL_11946",
"LABEL_11947",
"LABEL_11948",
"LABEL_11949",
"LABEL_1195",
"LABEL_11950",
"LABEL_11951",
"LABEL_11952",
"LABEL_11953",
"LABEL_11954",
"LABEL_11955",
"LABEL_11956",
"LABEL_11957",
"LABEL_11958",
"LABEL_11959",
"LABEL_1196",
"LABEL_11960",
"LABEL_11961",
"LABEL_11962",
"LABEL_11963",
"LABEL_11964",
"LABEL_11965",
"LABEL_11966",
"LABEL_11967",
"LABEL_11968",
"LABEL_11969",
"LABEL_1197",
"LABEL_11970",
"LABEL_11971",
"LABEL_11972",
"LABEL_11973",
"LABEL_11974",
"LABEL_11975",
"LABEL_11976",
"LABEL_11977",
"LABEL_11978",
"LABEL_11979",
"LABEL_1198",
"LABEL_11980",
"LABEL_11981",
"LABEL_11982",
"LABEL_11983",
"LABEL_11984",
"LABEL_11985",
"LABEL_11986",
"LABEL_11987",
"LABEL_11988",
"LABEL_11989",
"LABEL_1199",
"LABEL_11990",
"LABEL_11991",
"LABEL_11992",
"LABEL_11993",
"LABEL_11994",
"LABEL_11995",
"LABEL_11996",
"LABEL_11997",
"LABEL_11998",
"LABEL_11999",
"LABEL_12",
"LABEL_120",
"LABEL_1200",
"LABEL_12000",
"LABEL_12001",
"LABEL_12002",
"LABEL_12003",
"LABEL_12004",
"LABEL_12005",
"LABEL_12006",
"LABEL_12007",
"LABEL_12008",
"LABEL_12009",
"LABEL_1201",
"LABEL_12010",
"LABEL_12011",
"LABEL_12012",
"LABEL_12013",
"LABEL_12014",
"LABEL_12015",
"LABEL_12016",
"LABEL_12017",
"LABEL_12018",
"LABEL_12019",
"LABEL_1202",
"LABEL_12020",
"LABEL_12021",
"LABEL_12022",
"LABEL_12023",
"LABEL_12024",
"LABEL_12025",
"LABEL_12026",
"LABEL_12027",
"LABEL_12028",
"LABEL_12029",
"LABEL_1203",
"LABEL_12030",
"LABEL_12031",
"LABEL_12032",
"LABEL_12033",
"LABEL_12034",
"LABEL_12035",
"LABEL_12036",
"LABEL_12037",
"LABEL_12038",
"LABEL_12039",
"LABEL_1204",
"LABEL_12040",
"LABEL_12041",
"LABEL_12042",
"LABEL_12043",
"LABEL_12044",
"LABEL_12045",
"LABEL_12046",
"LABEL_12047",
"LABEL_12048",
"LABEL_12049",
"LABEL_1205",
"LABEL_12050",
"LABEL_12051",
"LABEL_12052",
"LABEL_12053",
"LABEL_12054",
"LABEL_12055",
"LABEL_12056",
"LABEL_12057",
"LABEL_12058",
"LABEL_12059",
"LABEL_1206",
"LABEL_12060",
"LABEL_12061",
"LABEL_12062",
"LABEL_12063",
"LABEL_12064",
"LABEL_12065",
"LABEL_12066",
"LABEL_12067",
"LABEL_12068",
"LABEL_12069",
"LABEL_1207",
"LABEL_12070",
"LABEL_12071",
"LABEL_12072",
"LABEL_12073",
"LABEL_12074",
"LABEL_12075",
"LABEL_12076",
"LABEL_12077",
"LABEL_12078",
"LABEL_12079",
"LABEL_1208",
"LABEL_12080",
"LABEL_12081",
"LABEL_12082",
"LABEL_12083",
"LABEL_12084",
"LABEL_12085",
"LABEL_12086",
"LABEL_12087",
"LABEL_12088",
"LABEL_12089",
"LABEL_1209",
"LABEL_12090",
"LABEL_12091",
"LABEL_12092",
"LABEL_12093",
"LABEL_12094",
"LABEL_12095",
"LABEL_12096",
"LABEL_12097",
"LABEL_12098",
"LABEL_12099",
"LABEL_121",
"LABEL_1210",
"LABEL_12100",
"LABEL_12101",
"LABEL_12102",
"LABEL_12103",
"LABEL_12104",
"LABEL_12105",
"LABEL_12106",
"LABEL_12107",
"LABEL_12108",
"LABEL_12109",
"LABEL_1211",
"LABEL_12110",
"LABEL_12111",
"LABEL_12112",
"LABEL_12113",
"LABEL_12114",
"LABEL_12115",
"LABEL_12116",
"LABEL_12117",
"LABEL_12118",
"LABEL_12119",
"LABEL_1212",
"LABEL_12120",
"LABEL_12121",
"LABEL_12122",
"LABEL_12123",
"LABEL_12124",
"LABEL_12125",
"LABEL_12126",
"LABEL_12127",
"LABEL_12128",
"LABEL_12129",
"LABEL_1213",
"LABEL_12130",
"LABEL_12131",
"LABEL_12132",
"LABEL_12133",
"LABEL_12134",
"LABEL_12135",
"LABEL_12136",
"LABEL_12137",
"LABEL_12138",
"LABEL_12139",
"LABEL_1214",
"LABEL_12140",
"LABEL_12141",
"LABEL_12142",
"LABEL_12143",
"LABEL_12144",
"LABEL_12145",
"LABEL_12146",
"LABEL_12147",
"LABEL_12148",
"LABEL_12149",
"LABEL_1215",
"LABEL_12150",
"LABEL_12151",
"LABEL_12152",
"LABEL_12153",
"LABEL_12154",
"LABEL_12155",
"LABEL_12156",
"LABEL_12157",
"LABEL_12158",
"LABEL_12159",
"LABEL_1216",
"LABEL_12160",
"LABEL_12161",
"LABEL_12162",
"LABEL_12163",
"LABEL_12164",
"LABEL_12165",
"LABEL_12166",
"LABEL_12167",
"LABEL_12168",
"LABEL_12169",
"LABEL_1217",
"LABEL_12170",
"LABEL_12171",
"LABEL_12172",
"LABEL_12173",
"LABEL_12174",
"LABEL_12175",
"LABEL_12176",
"LABEL_12177",
"LABEL_12178",
"LABEL_12179",
"LABEL_1218",
"LABEL_12180",
"LABEL_12181",
"LABEL_12182",
"LABEL_12183",
"LABEL_12184",
"LABEL_12185",
"LABEL_12186",
"LABEL_12187",
"LABEL_12188",
"LABEL_12189",
"LABEL_1219",
"LABEL_12190",
"LABEL_12191",
"LABEL_12192",
"LABEL_12193",
"LABEL_12194",
"LABEL_12195",
"LABEL_12196",
"LABEL_12197",
"LABEL_12198",
"LABEL_12199",
"LABEL_122",
"LABEL_1220",
"LABEL_12200",
"LABEL_12201",
"LABEL_12202",
"LABEL_12203",
"LABEL_12204",
"LABEL_12205",
"LABEL_12206",
"LABEL_12207",
"LABEL_12208",
"LABEL_12209",
"LABEL_1221",
"LABEL_12210",
"LABEL_12211",
"LABEL_12212",
"LABEL_12213",
"LABEL_12214",
"LABEL_12215",
"LABEL_12216",
"LABEL_12217",
"LABEL_12218",
"LABEL_12219",
"LABEL_1222",
"LABEL_12220",
"LABEL_12221",
"LABEL_12222",
"LABEL_12223",
"LABEL_12224",
"LABEL_12225",
"LABEL_12226",
"LABEL_12227",
"LABEL_12228",
"LABEL_12229",
"LABEL_1223",
"LABEL_12230",
"LABEL_12231",
"LABEL_12232",
"LABEL_12233",
"LABEL_12234",
"LABEL_12235",
"LABEL_12236",
"LABEL_12237",
"LABEL_12238",
"LABEL_12239",
"LABEL_1224",
"LABEL_12240",
"LABEL_12241",
"LABEL_12242",
"LABEL_12243",
"LABEL_12244",
"LABEL_12245",
"LABEL_12246",
"LABEL_12247",
"LABEL_12248",
"LABEL_12249",
"LABEL_1225",
"LABEL_12250",
"LABEL_12251",
"LABEL_12252",
"LABEL_12253",
"LABEL_12254",
"LABEL_12255",
"LABEL_12256",
"LABEL_12257",
"LABEL_12258",
"LABEL_12259",
"LABEL_1226",
"LABEL_12260",
"LABEL_12261",
"LABEL_12262",
"LABEL_12263",
"LABEL_12264",
"LABEL_12265",
"LABEL_12266",
"LABEL_12267",
"LABEL_12268",
"LABEL_12269",
"LABEL_1227",
"LABEL_12270",
"LABEL_12271",
"LABEL_12272",
"LABEL_12273",
"LABEL_12274",
"LABEL_12275",
"LABEL_12276",
"LABEL_12277",
"LABEL_12278",
"LABEL_12279",
"LABEL_1228",
"LABEL_12280",
"LABEL_12281",
"LABEL_12282",
"LABEL_12283",
"LABEL_12284",
"LABEL_12285",
"LABEL_12286",
"LABEL_12287",
"LABEL_12288",
"LABEL_12289",
"LABEL_1229",
"LABEL_12290",
"LABEL_12291",
"LABEL_12292",
"LABEL_12293",
"LABEL_12294",
"LABEL_12295",
"LABEL_12296",
"LABEL_12297",
"LABEL_12298",
"LABEL_12299",
"LABEL_123",
"LABEL_1230",
"LABEL_12300",
"LABEL_12301",
"LABEL_12302",
"LABEL_12303",
"LABEL_12304",
"LABEL_12305",
"LABEL_12306",
"LABEL_12307",
"LABEL_12308",
"LABEL_12309",
"LABEL_1231",
"LABEL_12310",
"LABEL_12311",
"LABEL_12312",
"LABEL_12313",
"LABEL_12314",
"LABEL_12315",
"LABEL_12316",
"LABEL_12317",
"LABEL_12318",
"LABEL_12319",
"LABEL_1232",
"LABEL_12320",
"LABEL_12321",
"LABEL_12322",
"LABEL_12323",
"LABEL_12324",
"LABEL_12325",
"LABEL_12326",
"LABEL_12327",
"LABEL_12328",
"LABEL_12329",
"LABEL_1233",
"LABEL_12330",
"LABEL_12331",
"LABEL_12332",
"LABEL_12333",
"LABEL_12334",
"LABEL_12335",
"LABEL_12336",
"LABEL_12337",
"LABEL_12338",
"LABEL_12339",
"LABEL_1234",
"LABEL_12340",
"LABEL_12341",
"LABEL_12342",
"LABEL_12343",
"LABEL_12344",
"LABEL_12345",
"LABEL_12346",
"LABEL_12347",
"LABEL_12348",
"LABEL_12349",
"LABEL_1235",
"LABEL_12350",
"LABEL_12351",
"LABEL_12352",
"LABEL_12353",
"LABEL_12354",
"LABEL_12355",
"LABEL_12356",
"LABEL_12357",
"LABEL_12358",
"LABEL_12359",
"LABEL_1236",
"LABEL_12360",
"LABEL_12361",
"LABEL_12362",
"LABEL_12363",
"LABEL_12364",
"LABEL_12365",
"LABEL_12366",
"LABEL_12367",
"LABEL_12368",
"LABEL_12369",
"LABEL_1237",
"LABEL_12370",
"LABEL_12371",
"LABEL_12372",
"LABEL_12373",
"LABEL_12374",
"LABEL_12375",
"LABEL_12376",
"LABEL_12377",
"LABEL_12378",
"LABEL_12379",
"LABEL_1238",
"LABEL_12380",
"LABEL_12381",
"LABEL_12382",
"LABEL_12383",
"LABEL_12384",
"LABEL_12385",
"LABEL_12386",
"LABEL_12387",
"LABEL_12388",
"LABEL_12389",
"LABEL_1239",
"LABEL_12390",
"LABEL_12391",
"LABEL_12392",
"LABEL_12393",
"LABEL_12394",
"LABEL_12395",
"LABEL_12396",
"LABEL_12397",
"LABEL_12398",
"LABEL_12399",
"LABEL_124",
"LABEL_1240",
"LABEL_12400",
"LABEL_12401",
"LABEL_12402",
"LABEL_12403",
"LABEL_12404",
"LABEL_12405",
"LABEL_12406",
"LABEL_12407",
"LABEL_12408",
"LABEL_12409",
"LABEL_1241",
"LABEL_12410",
"LABEL_12411",
"LABEL_12412",
"LABEL_12413",
"LABEL_12414",
"LABEL_12415",
"LABEL_12416",
"LABEL_12417",
"LABEL_12418",
"LABEL_12419",
"LABEL_1242",
"LABEL_12420",
"LABEL_12421",
"LABEL_12422",
"LABEL_12423",
"LABEL_12424",
"LABEL_12425",
"LABEL_12426",
"LABEL_12427",
"LABEL_12428",
"LABEL_12429",
"LABEL_1243",
"LABEL_12430",
"LABEL_12431",
"LABEL_12432",
"LABEL_12433",
"LABEL_12434",
"LABEL_12435",
"LABEL_12436",
"LABEL_12437",
"LABEL_12438",
"LABEL_12439",
"LABEL_1244",
"LABEL_12440",
"LABEL_12441",
"LABEL_12442",
"LABEL_12443",
"LABEL_12444",
"LABEL_12445",
"LABEL_12446",
"LABEL_12447",
"LABEL_12448",
"LABEL_12449",
"LABEL_1245",
"LABEL_12450",
"LABEL_12451",
"LABEL_12452",
"LABEL_12453",
"LABEL_12454",
"LABEL_12455",
"LABEL_12456",
"LABEL_12457",
"LABEL_12458",
"LABEL_12459",
"LABEL_1246",
"LABEL_12460",
"LABEL_12461",
"LABEL_12462",
"LABEL_12463",
"LABEL_12464",
"LABEL_12465",
"LABEL_12466",
"LABEL_12467",
"LABEL_12468",
"LABEL_12469",
"LABEL_1247",
"LABEL_12470",
"LABEL_12471",
"LABEL_12472",
"LABEL_12473",
"LABEL_12474",
"LABEL_12475",
"LABEL_12476",
"LABEL_12477",
"LABEL_12478",
"LABEL_12479",
"LABEL_1248",
"LABEL_12480",
"LABEL_12481",
"LABEL_12482",
"LABEL_12483",
"LABEL_12484",
"LABEL_12485",
"LABEL_12486",
"LABEL_12487",
"LABEL_12488",
"LABEL_12489",
"LABEL_1249",
"LABEL_12490",
"LABEL_12491",
"LABEL_12492",
"LABEL_12493",
"LABEL_12494",
"LABEL_12495",
"LABEL_12496",
"LABEL_12497",
"LABEL_12498",
"LABEL_12499",
"LABEL_125",
"LABEL_1250",
"LABEL_12500",
"LABEL_12501",
"LABEL_12502",
"LABEL_12503",
"LABEL_12504",
"LABEL_12505",
"LABEL_12506",
"LABEL_12507",
"LABEL_12508",
"LABEL_12509",
"LABEL_1251",
"LABEL_12510",
"LABEL_12511",
"LABEL_12512",
"LABEL_12513",
"LABEL_12514",
"LABEL_12515",
"LABEL_12516",
"LABEL_12517",
"LABEL_12518",
"LABEL_12519",
"LABEL_1252",
"LABEL_12520",
"LABEL_12521",
"LABEL_12522",
"LABEL_12523",
"LABEL_12524",
"LABEL_12525",
"LABEL_12526",
"LABEL_12527",
"LABEL_12528",
"LABEL_12529",
"LABEL_1253",
"LABEL_12530",
"LABEL_12531",
"LABEL_12532",
"LABEL_12533",
"LABEL_12534",
"LABEL_12535",
"LABEL_12536",
"LABEL_12537",
"LABEL_12538",
"LABEL_12539",
"LABEL_1254",
"LABEL_12540",
"LABEL_12541",
"LABEL_12542",
"LABEL_12543",
"LABEL_12544",
"LABEL_12545",
"LABEL_12546",
"LABEL_12547",
"LABEL_12548",
"LABEL_12549",
"LABEL_1255",
"LABEL_12550",
"LABEL_12551",
"LABEL_12552",
"LABEL_12553",
"LABEL_12554",
"LABEL_12555",
"LABEL_12556",
"LABEL_12557",
"LABEL_12558",
"LABEL_12559",
"LABEL_1256",
"LABEL_12560",
"LABEL_12561",
"LABEL_12562",
"LABEL_12563",
"LABEL_12564",
"LABEL_12565",
"LABEL_12566",
"LABEL_12567",
"LABEL_12568",
"LABEL_12569",
"LABEL_1257",
"LABEL_12570",
"LABEL_12571",
"LABEL_12572",
"LABEL_12573",
"LABEL_12574",
"LABEL_12575",
"LABEL_12576",
"LABEL_12577",
"LABEL_12578",
"LABEL_12579",
"LABEL_1258",
"LABEL_12580",
"LABEL_12581",
"LABEL_12582",
"LABEL_12583",
"LABEL_12584",
"LABEL_12585",
"LABEL_12586",
"LABEL_12587",
"LABEL_12588",
"LABEL_12589",
"LABEL_1259",
"LABEL_12590",
"LABEL_12591",
"LABEL_12592",
"LABEL_12593",
"LABEL_12594",
"LABEL_12595",
"LABEL_12596",
"LABEL_12597",
"LABEL_12598",
"LABEL_12599",
"LABEL_126",
"LABEL_1260",
"LABEL_12600",
"LABEL_12601",
"LABEL_12602",
"LABEL_12603",
"LABEL_12604",
"LABEL_12605",
"LABEL_12606",
"LABEL_12607",
"LABEL_12608",
"LABEL_12609",
"LABEL_1261",
"LABEL_12610",
"LABEL_12611",
"LABEL_12612",
"LABEL_12613",
"LABEL_12614",
"LABEL_12615",
"LABEL_12616",
"LABEL_12617",
"LABEL_12618",
"LABEL_12619",
"LABEL_1262",
"LABEL_12620",
"LABEL_12621",
"LABEL_12622",
"LABEL_12623",
"LABEL_12624",
"LABEL_12625",
"LABEL_12626",
"LABEL_12627",
"LABEL_12628",
"LABEL_12629",
"LABEL_1263",
"LABEL_12630",
"LABEL_12631",
"LABEL_12632",
"LABEL_12633",
"LABEL_12634",
"LABEL_12635",
"LABEL_12636",
"LABEL_12637",
"LABEL_12638",
"LABEL_12639",
"LABEL_1264",
"LABEL_12640",
"LABEL_12641",
"LABEL_12642",
"LABEL_12643",
"LABEL_12644",
"LABEL_12645",
"LABEL_12646",
"LABEL_12647",
"LABEL_12648",
"LABEL_12649",
"LABEL_1265",
"LABEL_12650",
"LABEL_12651",
"LABEL_12652",
"LABEL_12653",
"LABEL_12654",
"LABEL_12655",
"LABEL_12656",
"LABEL_12657",
"LABEL_12658",
"LABEL_12659",
"LABEL_1266",
"LABEL_12660",
"LABEL_12661",
"LABEL_12662",
"LABEL_12663",
"LABEL_12664",
"LABEL_12665",
"LABEL_12666",
"LABEL_12667",
"LABEL_12668",
"LABEL_12669",
"LABEL_1267",
"LABEL_12670",
"LABEL_12671",
"LABEL_12672",
"LABEL_12673",
"LABEL_12674",
"LABEL_12675",
"LABEL_12676",
"LABEL_12677",
"LABEL_12678",
"LABEL_12679",
"LABEL_1268",
"LABEL_12680",
"LABEL_12681",
"LABEL_12682",
"LABEL_12683",
"LABEL_12684",
"LABEL_12685",
"LABEL_12686",
"LABEL_12687",
"LABEL_12688",
"LABEL_12689",
"LABEL_1269",
"LABEL_12690",
"LABEL_12691",
"LABEL_12692",
"LABEL_12693",
"LABEL_12694",
"LABEL_12695",
"LABEL_12696",
"LABEL_12697",
"LABEL_12698",
"LABEL_12699",
"LABEL_127",
"LABEL_1270",
"LABEL_12700",
"LABEL_12701",
"LABEL_12702",
"LABEL_12703",
"LABEL_12704",
"LABEL_12705",
"LABEL_12706",
"LABEL_12707",
"LABEL_12708",
"LABEL_12709",
"LABEL_1271",
"LABEL_12710",
"LABEL_12711",
"LABEL_12712",
"LABEL_12713",
"LABEL_12714",
"LABEL_12715",
"LABEL_12716",
"LABEL_12717",
"LABEL_12718",
"LABEL_12719",
"LABEL_1272",
"LABEL_12720",
"LABEL_12721",
"LABEL_12722",
"LABEL_12723",
"LABEL_12724",
"LABEL_12725",
"LABEL_12726",
"LABEL_12727",
"LABEL_12728",
"LABEL_12729",
"LABEL_1273",
"LABEL_12730",
"LABEL_12731",
"LABEL_12732",
"LABEL_12733",
"LABEL_12734",
"LABEL_12735",
"LABEL_12736",
"LABEL_12737",
"LABEL_12738",
"LABEL_12739",
"LABEL_1274",
"LABEL_12740",
"LABEL_12741",
"LABEL_12742",
"LABEL_12743",
"LABEL_12744",
"LABEL_12745",
"LABEL_12746",
"LABEL_12747",
"LABEL_12748",
"LABEL_12749",
"LABEL_1275",
"LABEL_12750",
"LABEL_12751",
"LABEL_12752",
"LABEL_12753",
"LABEL_12754",
"LABEL_12755",
"LABEL_12756",
"LABEL_12757",
"LABEL_12758",
"LABEL_12759",
"LABEL_1276",
"LABEL_12760",
"LABEL_12761",
"LABEL_12762",
"LABEL_12763",
"LABEL_12764",
"LABEL_12765",
"LABEL_12766",
"LABEL_12767",
"LABEL_12768",
"LABEL_12769",
"LABEL_1277",
"LABEL_12770",
"LABEL_12771",
"LABEL_12772",
"LABEL_12773",
"LABEL_12774",
"LABEL_12775",
"LABEL_12776",
"LABEL_12777",
"LABEL_12778",
"LABEL_12779",
"LABEL_1278",
"LABEL_12780",
"LABEL_12781",
"LABEL_12782",
"LABEL_12783",
"LABEL_12784",
"LABEL_12785",
"LABEL_12786",
"LABEL_12787",
"LABEL_12788",
"LABEL_12789",
"LABEL_1279",
"LABEL_12790",
"LABEL_12791",
"LABEL_12792",
"LABEL_12793",
"LABEL_12794",
"LABEL_12795",
"LABEL_12796",
"LABEL_12797",
"LABEL_12798",
"LABEL_12799",
"LABEL_128",
"LABEL_1280",
"LABEL_12800",
"LABEL_12801",
"LABEL_12802",
"LABEL_12803",
"LABEL_12804",
"LABEL_12805",
"LABEL_12806",
"LABEL_12807",
"LABEL_12808",
"LABEL_12809",
"LABEL_1281",
"LABEL_12810",
"LABEL_12811",
"LABEL_12812",
"LABEL_12813",
"LABEL_12814",
"LABEL_12815",
"LABEL_12816",
"LABEL_12817",
"LABEL_12818",
"LABEL_12819",
"LABEL_1282",
"LABEL_12820",
"LABEL_12821",
"LABEL_12822",
"LABEL_12823",
"LABEL_12824",
"LABEL_12825",
"LABEL_12826",
"LABEL_12827",
"LABEL_12828",
"LABEL_12829",
"LABEL_1283",
"LABEL_12830",
"LABEL_12831",
"LABEL_12832",
"LABEL_12833",
"LABEL_12834",
"LABEL_12835",
"LABEL_12836",
"LABEL_12837",
"LABEL_12838",
"LABEL_12839",
"LABEL_1284",
"LABEL_12840",
"LABEL_12841",
"LABEL_12842",
"LABEL_12843",
"LABEL_12844",
"LABEL_12845",
"LABEL_12846",
"LABEL_12847",
"LABEL_12848",
"LABEL_12849",
"LABEL_1285",
"LABEL_12850",
"LABEL_12851",
"LABEL_12852",
"LABEL_12853",
"LABEL_12854",
"LABEL_12855",
"LABEL_12856",
"LABEL_12857",
"LABEL_12858",
"LABEL_12859",
"LABEL_1286",
"LABEL_12860",
"LABEL_12861",
"LABEL_12862",
"LABEL_12863",
"LABEL_12864",
"LABEL_12865",
"LABEL_12866",
"LABEL_12867",
"LABEL_12868",
"LABEL_12869",
"LABEL_1287",
"LABEL_12870",
"LABEL_12871",
"LABEL_12872",
"LABEL_12873",
"LABEL_12874",
"LABEL_12875",
"LABEL_12876",
"LABEL_12877",
"LABEL_12878",
"LABEL_12879",
"LABEL_1288",
"LABEL_12880",
"LABEL_12881",
"LABEL_12882",
"LABEL_12883",
"LABEL_12884",
"LABEL_12885",
"LABEL_12886",
"LABEL_12887",
"LABEL_12888",
"LABEL_12889",
"LABEL_1289",
"LABEL_12890",
"LABEL_12891",
"LABEL_12892",
"LABEL_12893",
"LABEL_12894",
"LABEL_12895",
"LABEL_12896",
"LABEL_12897",
"LABEL_12898",
"LABEL_12899",
"LABEL_129",
"LABEL_1290",
"LABEL_12900",
"LABEL_12901",
"LABEL_12902",
"LABEL_12903",
"LABEL_12904",
"LABEL_12905",
"LABEL_12906",
"LABEL_12907",
"LABEL_12908",
"LABEL_12909",
"LABEL_1291",
"LABEL_12910",
"LABEL_12911",
"LABEL_12912",
"LABEL_12913",
"LABEL_12914",
"LABEL_12915",
"LABEL_12916",
"LABEL_12917",
"LABEL_12918",
"LABEL_12919",
"LABEL_1292",
"LABEL_12920",
"LABEL_12921",
"LABEL_12922",
"LABEL_12923",
"LABEL_12924",
"LABEL_12925",
"LABEL_12926",
"LABEL_12927",
"LABEL_12928",
"LABEL_12929",
"LABEL_1293",
"LABEL_12930",
"LABEL_12931",
"LABEL_12932",
"LABEL_12933",
"LABEL_12934",
"LABEL_12935",
"LABEL_12936",
"LABEL_12937",
"LABEL_12938",
"LABEL_12939",
"LABEL_1294",
"LABEL_12940",
"LABEL_12941",
"LABEL_12942",
"LABEL_12943",
"LABEL_12944",
"LABEL_12945",
"LABEL_12946",
"LABEL_12947",
"LABEL_12948",
"LABEL_12949",
"LABEL_1295",
"LABEL_12950",
"LABEL_12951",
"LABEL_12952",
"LABEL_12953",
"LABEL_12954",
"LABEL_12955",
"LABEL_12956",
"LABEL_12957",
"LABEL_12958",
"LABEL_12959",
"LABEL_1296",
"LABEL_12960",
"LABEL_12961",
"LABEL_12962",
"LABEL_12963",
"LABEL_12964",
"LABEL_12965",
"LABEL_12966",
"LABEL_12967",
"LABEL_12968",
"LABEL_12969",
"LABEL_1297",
"LABEL_12970",
"LABEL_12971",
"LABEL_12972",
"LABEL_12973",
"LABEL_12974",
"LABEL_12975",
"LABEL_12976",
"LABEL_12977",
"LABEL_12978",
"LABEL_12979",
"LABEL_1298",
"LABEL_12980",
"LABEL_12981",
"LABEL_12982",
"LABEL_12983",
"LABEL_12984",
"LABEL_12985",
"LABEL_12986",
"LABEL_12987",
"LABEL_12988",
"LABEL_12989",
"LABEL_1299",
"LABEL_12990",
"LABEL_12991",
"LABEL_12992",
"LABEL_12993",
"LABEL_12994",
"LABEL_12995",
"LABEL_12996",
"LABEL_12997",
"LABEL_12998",
"LABEL_12999",
"LABEL_13",
"LABEL_130",
"LABEL_1300",
"LABEL_13000",
"LABEL_13001",
"LABEL_13002",
"LABEL_13003",
"LABEL_13004",
"LABEL_13005",
"LABEL_13006",
"LABEL_13007",
"LABEL_13008",
"LABEL_13009",
"LABEL_1301",
"LABEL_13010",
"LABEL_13011",
"LABEL_13012",
"LABEL_13013",
"LABEL_13014",
"LABEL_13015",
"LABEL_13016",
"LABEL_13017",
"LABEL_13018",
"LABEL_13019",
"LABEL_1302",
"LABEL_13020",
"LABEL_13021",
"LABEL_13022",
"LABEL_13023",
"LABEL_13024",
"LABEL_13025",
"LABEL_13026",
"LABEL_13027",
"LABEL_13028",
"LABEL_13029",
"LABEL_1303",
"LABEL_13030",
"LABEL_13031",
"LABEL_13032",
"LABEL_13033",
"LABEL_13034",
"LABEL_13035",
"LABEL_13036",
"LABEL_13037",
"LABEL_13038",
"LABEL_13039",
"LABEL_1304",
"LABEL_13040",
"LABEL_13041",
"LABEL_13042",
"LABEL_13043",
"LABEL_13044",
"LABEL_13045",
"LABEL_13046",
"LABEL_13047",
"LABEL_13048",
"LABEL_13049",
"LABEL_1305",
"LABEL_13050",
"LABEL_13051",
"LABEL_13052",
"LABEL_13053",
"LABEL_13054",
"LABEL_13055",
"LABEL_13056",
"LABEL_13057",
"LABEL_13058",
"LABEL_13059",
"LABEL_1306",
"LABEL_13060",
"LABEL_13061",
"LABEL_13062",
"LABEL_13063",
"LABEL_13064",
"LABEL_13065",
"LABEL_13066",
"LABEL_13067",
"LABEL_13068",
"LABEL_13069",
"LABEL_1307",
"LABEL_13070",
"LABEL_13071",
"LABEL_13072",
"LABEL_13073",
"LABEL_13074",
"LABEL_13075",
"LABEL_13076",
"LABEL_13077",
"LABEL_13078",
"LABEL_13079",
"LABEL_1308",
"LABEL_13080",
"LABEL_13081",
"LABEL_13082",
"LABEL_13083",
"LABEL_13084",
"LABEL_13085",
"LABEL_13086",
"LABEL_13087",
"LABEL_13088",
"LABEL_13089",
"LABEL_1309",
"LABEL_13090",
"LABEL_13091",
"LABEL_13092",
"LABEL_13093",
"LABEL_13094",
"LABEL_13095",
"LABEL_13096",
"LABEL_13097",
"LABEL_13098",
"LABEL_13099",
"LABEL_131",
"LABEL_1310",
"LABEL_13100",
"LABEL_13101",
"LABEL_13102",
"LABEL_13103",
"LABEL_13104",
"LABEL_13105",
"LABEL_13106",
"LABEL_13107",
"LABEL_13108",
"LABEL_13109",
"LABEL_1311",
"LABEL_13110",
"LABEL_13111",
"LABEL_13112",
"LABEL_13113",
"LABEL_13114",
"LABEL_13115",
"LABEL_13116",
"LABEL_13117",
"LABEL_13118",
"LABEL_13119",
"LABEL_1312",
"LABEL_13120",
"LABEL_13121",
"LABEL_13122",
"LABEL_13123",
"LABEL_13124",
"LABEL_13125",
"LABEL_13126",
"LABEL_13127",
"LABEL_13128",
"LABEL_13129",
"LABEL_1313",
"LABEL_13130",
"LABEL_13131",
"LABEL_13132",
"LABEL_13133",
"LABEL_13134",
"LABEL_13135",
"LABEL_13136",
"LABEL_13137",
"LABEL_13138",
"LABEL_13139",
"LABEL_1314",
"LABEL_13140",
"LABEL_13141",
"LABEL_13142",
"LABEL_13143",
"LABEL_13144",
"LABEL_13145",
"LABEL_13146",
"LABEL_13147",
"LABEL_13148",
"LABEL_13149",
"LABEL_1315",
"LABEL_13150",
"LABEL_13151",
"LABEL_13152",
"LABEL_13153",
"LABEL_13154",
"LABEL_13155",
"LABEL_13156",
"LABEL_13157",
"LABEL_13158",
"LABEL_13159",
"LABEL_1316",
"LABEL_13160",
"LABEL_13161",
"LABEL_13162",
"LABEL_13163",
"LABEL_13164",
"LABEL_13165",
"LABEL_13166",
"LABEL_13167",
"LABEL_13168",
"LABEL_13169",
"LABEL_1317",
"LABEL_13170",
"LABEL_13171",
"LABEL_13172",
"LABEL_13173",
"LABEL_13174",
"LABEL_13175",
"LABEL_13176",
"LABEL_13177",
"LABEL_13178",
"LABEL_13179",
"LABEL_1318",
"LABEL_13180",
"LABEL_13181",
"LABEL_13182",
"LABEL_13183",
"LABEL_13184",
"LABEL_13185",
"LABEL_13186",
"LABEL_13187",
"LABEL_13188",
"LABEL_13189",
"LABEL_1319",
"LABEL_13190",
"LABEL_13191",
"LABEL_13192",
"LABEL_13193",
"LABEL_13194",
"LABEL_13195",
"LABEL_13196",
"LABEL_13197",
"LABEL_13198",
"LABEL_13199",
"LABEL_132",
"LABEL_1320",
"LABEL_13200",
"LABEL_13201",
"LABEL_13202",
"LABEL_13203",
"LABEL_13204",
"LABEL_13205",
"LABEL_13206",
"LABEL_13207",
"LABEL_13208",
"LABEL_13209",
"LABEL_1321",
"LABEL_13210",
"LABEL_13211",
"LABEL_13212",
"LABEL_13213",
"LABEL_13214",
"LABEL_13215",
"LABEL_13216",
"LABEL_13217",
"LABEL_13218",
"LABEL_13219",
"LABEL_1322",
"LABEL_13220",
"LABEL_13221",
"LABEL_13222",
"LABEL_13223",
"LABEL_13224",
"LABEL_13225",
"LABEL_13226",
"LABEL_13227",
"LABEL_13228",
"LABEL_13229",
"LABEL_1323",
"LABEL_13230",
"LABEL_13231",
"LABEL_13232",
"LABEL_13233",
"LABEL_13234",
"LABEL_13235",
"LABEL_13236",
"LABEL_13237",
"LABEL_13238",
"LABEL_13239",
"LABEL_1324",
"LABEL_13240",
"LABEL_13241",
"LABEL_13242",
"LABEL_13243",
"LABEL_13244",
"LABEL_13245",
"LABEL_13246",
"LABEL_13247",
"LABEL_13248",
"LABEL_13249",
"LABEL_1325",
"LABEL_13250",
"LABEL_13251",
"LABEL_13252",
"LABEL_13253",
"LABEL_13254",
"LABEL_13255",
"LABEL_13256",
"LABEL_13257",
"LABEL_13258",
"LABEL_13259",
"LABEL_1326",
"LABEL_13260",
"LABEL_13261",
"LABEL_13262",
"LABEL_13263",
"LABEL_13264",
"LABEL_13265",
"LABEL_13266",
"LABEL_13267",
"LABEL_13268",
"LABEL_13269",
"LABEL_1327",
"LABEL_13270",
"LABEL_13271",
"LABEL_13272",
"LABEL_13273",
"LABEL_13274",
"LABEL_13275",
"LABEL_13276",
"LABEL_13277",
"LABEL_13278",
"LABEL_13279",
"LABEL_1328",
"LABEL_13280",
"LABEL_13281",
"LABEL_13282",
"LABEL_13283",
"LABEL_13284",
"LABEL_13285",
"LABEL_13286",
"LABEL_13287",
"LABEL_13288",
"LABEL_13289",
"LABEL_1329",
"LABEL_13290",
"LABEL_13291",
"LABEL_13292",
"LABEL_13293",
"LABEL_13294",
"LABEL_13295",
"LABEL_13296",
"LABEL_13297",
"LABEL_13298",
"LABEL_13299",
"LABEL_133",
"LABEL_1330",
"LABEL_13300",
"LABEL_13301",
"LABEL_13302",
"LABEL_13303",
"LABEL_13304",
"LABEL_13305",
"LABEL_13306",
"LABEL_13307",
"LABEL_13308",
"LABEL_13309",
"LABEL_1331",
"LABEL_13310",
"LABEL_13311",
"LABEL_13312",
"LABEL_13313",
"LABEL_13314",
"LABEL_13315",
"LABEL_13316",
"LABEL_13317",
"LABEL_13318",
"LABEL_13319",
"LABEL_1332",
"LABEL_13320",
"LABEL_13321",
"LABEL_13322",
"LABEL_13323",
"LABEL_13324",
"LABEL_13325",
"LABEL_13326",
"LABEL_13327",
"LABEL_13328",
"LABEL_13329",
"LABEL_1333",
"LABEL_13330",
"LABEL_13331",
"LABEL_13332",
"LABEL_13333",
"LABEL_13334",
"LABEL_13335",
"LABEL_13336",
"LABEL_13337",
"LABEL_13338",
"LABEL_13339",
"LABEL_1334",
"LABEL_13340",
"LABEL_13341",
"LABEL_13342",
"LABEL_13343",
"LABEL_13344",
"LABEL_13345",
"LABEL_13346",
"LABEL_13347",
"LABEL_13348",
"LABEL_13349",
"LABEL_1335",
"LABEL_13350",
"LABEL_13351",
"LABEL_13352",
"LABEL_13353",
"LABEL_13354",
"LABEL_13355",
"LABEL_13356",
"LABEL_13357",
"LABEL_13358",
"LABEL_13359",
"LABEL_1336",
"LABEL_13360",
"LABEL_13361",
"LABEL_13362",
"LABEL_13363",
"LABEL_13364",
"LABEL_13365",
"LABEL_13366",
"LABEL_13367",
"LABEL_13368",
"LABEL_13369",
"LABEL_1337",
"LABEL_13370",
"LABEL_13371",
"LABEL_13372",
"LABEL_13373",
"LABEL_13374",
"LABEL_13375",
"LABEL_13376",
"LABEL_13377",
"LABEL_13378",
"LABEL_13379",
"LABEL_1338",
"LABEL_13380",
"LABEL_13381",
"LABEL_13382",
"LABEL_13383",
"LABEL_13384",
"LABEL_13385",
"LABEL_13386",
"LABEL_13387",
"LABEL_13388",
"LABEL_13389",
"LABEL_1339",
"LABEL_13390",
"LABEL_13391",
"LABEL_13392",
"LABEL_13393",
"LABEL_13394",
"LABEL_13395",
"LABEL_13396",
"LABEL_13397",
"LABEL_13398",
"LABEL_13399",
"LABEL_134",
"LABEL_1340",
"LABEL_13400",
"LABEL_13401",
"LABEL_13402",
"LABEL_13403",
"LABEL_13404",
"LABEL_13405",
"LABEL_13406",
"LABEL_13407",
"LABEL_13408",
"LABEL_13409",
"LABEL_1341",
"LABEL_13410",
"LABEL_13411",
"LABEL_13412",
"LABEL_13413",
"LABEL_13414",
"LABEL_13415",
"LABEL_13416",
"LABEL_13417",
"LABEL_13418",
"LABEL_13419",
"LABEL_1342",
"LABEL_13420",
"LABEL_13421",
"LABEL_13422",
"LABEL_13423",
"LABEL_13424",
"LABEL_13425",
"LABEL_13426",
"LABEL_13427",
"LABEL_13428",
"LABEL_13429",
"LABEL_1343",
"LABEL_13430",
"LABEL_13431",
"LABEL_13432",
"LABEL_13433",
"LABEL_13434",
"LABEL_13435",
"LABEL_13436",
"LABEL_13437",
"LABEL_13438",
"LABEL_13439",
"LABEL_1344",
"LABEL_13440",
"LABEL_13441",
"LABEL_13442",
"LABEL_13443",
"LABEL_13444",
"LABEL_13445",
"LABEL_13446",
"LABEL_13447",
"LABEL_13448",
"LABEL_13449",
"LABEL_1345",
"LABEL_13450",
"LABEL_13451",
"LABEL_13452",
"LABEL_13453",
"LABEL_13454",
"LABEL_13455",
"LABEL_13456",
"LABEL_13457",
"LABEL_13458",
"LABEL_13459",
"LABEL_1346",
"LABEL_13460",
"LABEL_13461",
"LABEL_13462",
"LABEL_13463",
"LABEL_13464",
"LABEL_13465",
"LABEL_13466",
"LABEL_13467",
"LABEL_13468",
"LABEL_13469",
"LABEL_1347",
"LABEL_13470",
"LABEL_13471",
"LABEL_13472",
"LABEL_13473",
"LABEL_13474",
"LABEL_13475",
"LABEL_13476",
"LABEL_13477",
"LABEL_13478",
"LABEL_13479",
"LABEL_1348",
"LABEL_13480",
"LABEL_13481",
"LABEL_13482",
"LABEL_13483",
"LABEL_13484",
"LABEL_13485",
"LABEL_13486",
"LABEL_13487",
"LABEL_13488",
"LABEL_13489",
"LABEL_1349",
"LABEL_13490",
"LABEL_13491",
"LABEL_13492",
"LABEL_13493",
"LABEL_13494",
"LABEL_13495",
"LABEL_13496",
"LABEL_13497",
"LABEL_13498",
"LABEL_13499",
"LABEL_135",
"LABEL_1350",
"LABEL_13500",
"LABEL_13501",
"LABEL_13502",
"LABEL_13503",
"LABEL_13504",
"LABEL_13505",
"LABEL_13506",
"LABEL_13507",
"LABEL_13508",
"LABEL_13509",
"LABEL_1351",
"LABEL_13510",
"LABEL_13511",
"LABEL_13512",
"LABEL_13513",
"LABEL_13514",
"LABEL_13515",
"LABEL_13516",
"LABEL_13517",
"LABEL_13518",
"LABEL_13519",
"LABEL_1352",
"LABEL_13520",
"LABEL_13521",
"LABEL_13522",
"LABEL_13523",
"LABEL_13524",
"LABEL_13525",
"LABEL_13526",
"LABEL_13527",
"LABEL_13528",
"LABEL_13529",
"LABEL_1353",
"LABEL_13530",
"LABEL_13531",
"LABEL_13532",
"LABEL_13533",
"LABEL_13534",
"LABEL_13535",
"LABEL_13536",
"LABEL_13537",
"LABEL_13538",
"LABEL_13539",
"LABEL_1354",
"LABEL_13540",
"LABEL_13541",
"LABEL_13542",
"LABEL_13543",
"LABEL_13544",
"LABEL_13545",
"LABEL_13546",
"LABEL_13547",
"LABEL_13548",
"LABEL_13549",
"LABEL_1355",
"LABEL_13550",
"LABEL_13551",
"LABEL_13552",
"LABEL_13553",
"LABEL_13554",
"LABEL_13555",
"LABEL_13556",
"LABEL_13557",
"LABEL_13558",
"LABEL_13559",
"LABEL_1356",
"LABEL_13560",
"LABEL_13561",
"LABEL_13562",
"LABEL_13563",
"LABEL_13564",
"LABEL_13565",
"LABEL_13566",
"LABEL_13567",
"LABEL_13568",
"LABEL_13569",
"LABEL_1357",
"LABEL_13570",
"LABEL_13571",
"LABEL_13572",
"LABEL_13573",
"LABEL_13574",
"LABEL_13575",
"LABEL_13576",
"LABEL_13577",
"LABEL_13578",
"LABEL_13579",
"LABEL_1358",
"LABEL_13580",
"LABEL_13581",
"LABEL_13582",
"LABEL_13583",
"LABEL_13584",
"LABEL_13585",
"LABEL_13586",
"LABEL_13587",
"LABEL_13588",
"LABEL_13589",
"LABEL_1359",
"LABEL_13590",
"LABEL_13591",
"LABEL_13592",
"LABEL_13593",
"LABEL_13594",
"LABEL_13595",
"LABEL_13596",
"LABEL_13597",
"LABEL_13598",
"LABEL_13599",
"LABEL_136",
"LABEL_1360",
"LABEL_13600",
"LABEL_13601",
"LABEL_13602",
"LABEL_13603",
"LABEL_13604",
"LABEL_13605",
"LABEL_13606",
"LABEL_13607",
"LABEL_13608",
"LABEL_13609",
"LABEL_1361",
"LABEL_13610",
"LABEL_13611",
"LABEL_13612",
"LABEL_13613",
"LABEL_13614",
"LABEL_13615",
"LABEL_13616",
"LABEL_13617",
"LABEL_13618",
"LABEL_13619",
"LABEL_1362",
"LABEL_13620",
"LABEL_13621",
"LABEL_13622",
"LABEL_13623",
"LABEL_13624",
"LABEL_13625",
"LABEL_13626",
"LABEL_13627",
"LABEL_13628",
"LABEL_13629",
"LABEL_1363",
"LABEL_13630",
"LABEL_13631",
"LABEL_13632",
"LABEL_13633",
"LABEL_13634",
"LABEL_13635",
"LABEL_13636",
"LABEL_13637",
"LABEL_13638",
"LABEL_13639",
"LABEL_1364",
"LABEL_13640",
"LABEL_13641",
"LABEL_13642",
"LABEL_13643",
"LABEL_13644",
"LABEL_13645",
"LABEL_13646",
"LABEL_13647",
"LABEL_13648",
"LABEL_13649",
"LABEL_1365",
"LABEL_13650",
"LABEL_13651",
"LABEL_13652",
"LABEL_13653",
"LABEL_13654",
"LABEL_13655",
"LABEL_13656",
"LABEL_13657",
"LABEL_13658",
"LABEL_13659",
"LABEL_1366",
"LABEL_13660",
"LABEL_13661",
"LABEL_13662",
"LABEL_13663",
"LABEL_13664",
"LABEL_13665",
"LABEL_13666",
"LABEL_13667",
"LABEL_13668",
"LABEL_13669",
"LABEL_1367",
"LABEL_13670",
"LABEL_13671",
"LABEL_13672",
"LABEL_13673",
"LABEL_13674",
"LABEL_13675",
"LABEL_13676",
"LABEL_13677",
"LABEL_13678",
"LABEL_13679",
"LABEL_1368",
"LABEL_13680",
"LABEL_13681",
"LABEL_13682",
"LABEL_13683",
"LABEL_13684",
"LABEL_13685",
"LABEL_13686",
"LABEL_13687",
"LABEL_13688",
"LABEL_13689",
"LABEL_1369",
"LABEL_13690",
"LABEL_13691",
"LABEL_13692",
"LABEL_13693",
"LABEL_13694",
"LABEL_13695",
"LABEL_13696",
"LABEL_13697",
"LABEL_13698",
"LABEL_13699",
"LABEL_137",
"LABEL_1370",
"LABEL_13700",
"LABEL_13701",
"LABEL_13702",
"LABEL_13703",
"LABEL_13704",
"LABEL_13705",
"LABEL_13706",
"LABEL_13707",
"LABEL_13708",
"LABEL_13709",
"LABEL_1371",
"LABEL_13710",
"LABEL_13711",
"LABEL_13712",
"LABEL_13713",
"LABEL_13714",
"LABEL_13715",
"LABEL_13716",
"LABEL_13717",
"LABEL_13718",
"LABEL_13719",
"LABEL_1372",
"LABEL_13720",
"LABEL_13721",
"LABEL_13722",
"LABEL_13723",
"LABEL_13724",
"LABEL_13725",
"LABEL_13726",
"LABEL_13727",
"LABEL_13728",
"LABEL_13729",
"LABEL_1373",
"LABEL_13730",
"LABEL_13731",
"LABEL_13732",
"LABEL_13733",
"LABEL_13734",
"LABEL_13735",
"LABEL_13736",
"LABEL_13737",
"LABEL_13738",
"LABEL_13739",
"LABEL_1374",
"LABEL_13740",
"LABEL_13741",
"LABEL_13742",
"LABEL_13743",
"LABEL_13744",
"LABEL_13745",
"LABEL_13746",
"LABEL_13747",
"LABEL_13748",
"LABEL_13749",
"LABEL_1375",
"LABEL_13750",
"LABEL_13751",
"LABEL_13752",
"LABEL_13753",
"LABEL_13754",
"LABEL_13755",
"LABEL_13756",
"LABEL_13757",
"LABEL_13758",
"LABEL_13759",
"LABEL_1376",
"LABEL_13760",
"LABEL_13761",
"LABEL_13762",
"LABEL_13763",
"LABEL_13764",
"LABEL_13765",
"LABEL_13766",
"LABEL_13767",
"LABEL_13768",
"LABEL_13769",
"LABEL_1377",
"LABEL_13770",
"LABEL_13771",
"LABEL_13772",
"LABEL_13773",
"LABEL_13774",
"LABEL_13775",
"LABEL_13776",
"LABEL_13777",
"LABEL_13778",
"LABEL_13779",
"LABEL_1378",
"LABEL_13780",
"LABEL_13781",
"LABEL_13782",
"LABEL_13783",
"LABEL_13784",
"LABEL_13785",
"LABEL_13786",
"LABEL_13787",
"LABEL_13788",
"LABEL_13789",
"LABEL_1379",
"LABEL_13790",
"LABEL_13791",
"LABEL_13792",
"LABEL_13793",
"LABEL_13794",
"LABEL_13795",
"LABEL_13796",
"LABEL_13797",
"LABEL_13798",
"LABEL_13799",
"LABEL_138",
"LABEL_1380",
"LABEL_13800",
"LABEL_13801",
"LABEL_13802",
"LABEL_13803",
"LABEL_13804",
"LABEL_13805",
"LABEL_13806",
"LABEL_13807",
"LABEL_13808",
"LABEL_13809",
"LABEL_1381",
"LABEL_13810",
"LABEL_13811",
"LABEL_13812",
"LABEL_13813",
"LABEL_13814",
"LABEL_13815",
"LABEL_13816",
"LABEL_13817",
"LABEL_13818",
"LABEL_13819",
"LABEL_1382",
"LABEL_13820",
"LABEL_13821",
"LABEL_13822",
"LABEL_13823",
"LABEL_13824",
"LABEL_13825",
"LABEL_13826",
"LABEL_13827",
"LABEL_13828",
"LABEL_13829",
"LABEL_1383",
"LABEL_13830",
"LABEL_13831",
"LABEL_13832",
"LABEL_13833",
"LABEL_13834",
"LABEL_13835",
"LABEL_13836",
"LABEL_13837",
"LABEL_13838",
"LABEL_13839",
"LABEL_1384",
"LABEL_13840",
"LABEL_13841",
"LABEL_13842",
"LABEL_13843",
"LABEL_13844",
"LABEL_13845",
"LABEL_13846",
"LABEL_13847",
"LABEL_13848",
"LABEL_13849",
"LABEL_1385",
"LABEL_13850",
"LABEL_13851",
"LABEL_13852",
"LABEL_13853",
"LABEL_13854",
"LABEL_13855",
"LABEL_13856",
"LABEL_13857",
"LABEL_13858",
"LABEL_13859",
"LABEL_1386",
"LABEL_13860",
"LABEL_13861",
"LABEL_13862",
"LABEL_13863",
"LABEL_13864",
"LABEL_13865",
"LABEL_13866",
"LABEL_13867",
"LABEL_13868",
"LABEL_13869",
"LABEL_1387",
"LABEL_13870",
"LABEL_13871",
"LABEL_13872",
"LABEL_13873",
"LABEL_13874",
"LABEL_13875",
"LABEL_13876",
"LABEL_13877",
"LABEL_13878",
"LABEL_13879",
"LABEL_1388",
"LABEL_13880",
"LABEL_13881",
"LABEL_13882",
"LABEL_13883",
"LABEL_13884",
"LABEL_13885",
"LABEL_13886",
"LABEL_13887",
"LABEL_13888",
"LABEL_13889",
"LABEL_1389",
"LABEL_13890",
"LABEL_13891",
"LABEL_13892",
"LABEL_13893",
"LABEL_13894",
"LABEL_13895",
"LABEL_13896",
"LABEL_13897",
"LABEL_13898",
"LABEL_13899",
"LABEL_139",
"LABEL_1390",
"LABEL_13900",
"LABEL_13901",
"LABEL_13902",
"LABEL_13903",
"LABEL_13904",
"LABEL_13905",
"LABEL_13906",
"LABEL_13907",
"LABEL_13908",
"LABEL_13909",
"LABEL_1391",
"LABEL_13910",
"LABEL_13911",
"LABEL_13912",
"LABEL_13913",
"LABEL_13914",
"LABEL_13915",
"LABEL_13916",
"LABEL_13917",
"LABEL_13918",
"LABEL_13919",
"LABEL_1392",
"LABEL_13920",
"LABEL_13921",
"LABEL_13922",
"LABEL_13923",
"LABEL_13924",
"LABEL_13925",
"LABEL_13926",
"LABEL_13927",
"LABEL_13928",
"LABEL_13929",
"LABEL_1393",
"LABEL_13930",
"LABEL_13931",
"LABEL_13932",
"LABEL_13933",
"LABEL_13934",
"LABEL_13935",
"LABEL_13936",
"LABEL_13937",
"LABEL_13938",
"LABEL_13939",
"LABEL_1394",
"LABEL_13940",
"LABEL_13941",
"LABEL_13942",
"LABEL_13943",
"LABEL_13944",
"LABEL_13945",
"LABEL_13946",
"LABEL_13947",
"LABEL_13948",
"LABEL_13949",
"LABEL_1395",
"LABEL_13950",
"LABEL_13951",
"LABEL_13952",
"LABEL_13953",
"LABEL_13954",
"LABEL_13955",
"LABEL_13956",
"LABEL_13957",
"LABEL_13958",
"LABEL_13959",
"LABEL_1396",
"LABEL_13960",
"LABEL_13961",
"LABEL_13962",
"LABEL_13963",
"LABEL_13964",
"LABEL_13965",
"LABEL_13966",
"LABEL_13967",
"LABEL_13968",
"LABEL_13969",
"LABEL_1397",
"LABEL_13970",
"LABEL_13971",
"LABEL_13972",
"LABEL_13973",
"LABEL_13974",
"LABEL_13975",
"LABEL_13976",
"LABEL_13977",
"LABEL_13978",
"LABEL_13979",
"LABEL_1398",
"LABEL_13980",
"LABEL_13981",
"LABEL_13982",
"LABEL_13983",
"LABEL_13984",
"LABEL_13985",
"LABEL_13986",
"LABEL_13987",
"LABEL_13988",
"LABEL_13989",
"LABEL_1399",
"LABEL_13990",
"LABEL_13991",
"LABEL_13992",
"LABEL_13993",
"LABEL_13994",
"LABEL_13995",
"LABEL_13996",
"LABEL_13997",
"LABEL_13998",
"LABEL_13999",
"LABEL_14",
"LABEL_140",
"LABEL_1400",
"LABEL_14000",
"LABEL_14001",
"LABEL_14002",
"LABEL_14003",
"LABEL_14004",
"LABEL_14005",
"LABEL_14006",
"LABEL_14007",
"LABEL_14008",
"LABEL_14009",
"LABEL_1401",
"LABEL_14010",
"LABEL_14011",
"LABEL_14012",
"LABEL_14013",
"LABEL_14014",
"LABEL_14015",
"LABEL_14016",
"LABEL_14017",
"LABEL_14018",
"LABEL_14019",
"LABEL_1402",
"LABEL_14020",
"LABEL_14021",
"LABEL_14022",
"LABEL_14023",
"LABEL_14024",
"LABEL_14025",
"LABEL_14026",
"LABEL_14027",
"LABEL_14028",
"LABEL_14029",
"LABEL_1403",
"LABEL_14030",
"LABEL_14031",
"LABEL_14032",
"LABEL_14033",
"LABEL_14034",
"LABEL_14035",
"LABEL_14036",
"LABEL_14037",
"LABEL_14038",
"LABEL_14039",
"LABEL_1404",
"LABEL_14040",
"LABEL_14041",
"LABEL_14042",
"LABEL_14043",
"LABEL_14044",
"LABEL_14045",
"LABEL_14046",
"LABEL_14047",
"LABEL_14048",
"LABEL_14049",
"LABEL_1405",
"LABEL_14050",
"LABEL_14051",
"LABEL_14052",
"LABEL_14053",
"LABEL_14054",
"LABEL_14055",
"LABEL_14056",
"LABEL_14057",
"LABEL_14058",
"LABEL_14059",
"LABEL_1406",
"LABEL_14060",
"LABEL_14061",
"LABEL_14062",
"LABEL_14063",
"LABEL_14064",
"LABEL_14065",
"LABEL_14066",
"LABEL_14067",
"LABEL_14068",
"LABEL_14069",
"LABEL_1407",
"LABEL_14070",
"LABEL_14071",
"LABEL_14072",
"LABEL_14073",
"LABEL_14074",
"LABEL_14075",
"LABEL_14076",
"LABEL_14077",
"LABEL_14078",
"LABEL_14079",
"LABEL_1408",
"LABEL_14080",
"LABEL_14081",
"LABEL_14082",
"LABEL_14083",
"LABEL_14084",
"LABEL_14085",
"LABEL_14086",
"LABEL_14087",
"LABEL_14088",
"LABEL_14089",
"LABEL_1409",
"LABEL_14090",
"LABEL_14091",
"LABEL_14092",
"LABEL_14093",
"LABEL_14094",
"LABEL_14095",
"LABEL_14096",
"LABEL_14097",
"LABEL_14098",
"LABEL_14099",
"LABEL_141",
"LABEL_1410",
"LABEL_14100",
"LABEL_14101",
"LABEL_14102",
"LABEL_14103",
"LABEL_14104",
"LABEL_14105",
"LABEL_14106",
"LABEL_14107",
"LABEL_14108",
"LABEL_14109",
"LABEL_1411",
"LABEL_14110",
"LABEL_14111",
"LABEL_14112",
"LABEL_14113",
"LABEL_14114",
"LABEL_14115",
"LABEL_14116",
"LABEL_14117",
"LABEL_14118",
"LABEL_14119",
"LABEL_1412",
"LABEL_14120",
"LABEL_14121",
"LABEL_14122",
"LABEL_14123",
"LABEL_14124",
"LABEL_14125",
"LABEL_14126",
"LABEL_14127",
"LABEL_14128",
"LABEL_14129",
"LABEL_1413",
"LABEL_14130",
"LABEL_14131",
"LABEL_14132",
"LABEL_14133",
"LABEL_14134",
"LABEL_14135",
"LABEL_14136",
"LABEL_14137",
"LABEL_14138",
"LABEL_14139",
"LABEL_1414",
"LABEL_14140",
"LABEL_14141",
"LABEL_14142",
"LABEL_14143",
"LABEL_14144",
"LABEL_14145",
"LABEL_14146",
"LABEL_14147",
"LABEL_14148",
"LABEL_14149",
"LABEL_1415",
"LABEL_14150",
"LABEL_14151",
"LABEL_14152",
"LABEL_14153",
"LABEL_14154",
"LABEL_14155",
"LABEL_14156",
"LABEL_14157",
"LABEL_14158",
"LABEL_14159",
"LABEL_1416",
"LABEL_14160",
"LABEL_14161",
"LABEL_14162",
"LABEL_14163",
"LABEL_14164",
"LABEL_14165",
"LABEL_14166",
"LABEL_14167",
"LABEL_14168",
"LABEL_14169",
"LABEL_1417",
"LABEL_14170",
"LABEL_14171",
"LABEL_14172",
"LABEL_14173",
"LABEL_14174",
"LABEL_14175",
"LABEL_14176",
"LABEL_14177",
"LABEL_14178",
"LABEL_14179",
"LABEL_1418",
"LABEL_14180",
"LABEL_14181",
"LABEL_14182",
"LABEL_14183",
"LABEL_14184",
"LABEL_14185",
"LABEL_14186",
"LABEL_14187",
"LABEL_14188",
"LABEL_14189",
"LABEL_1419",
"LABEL_14190",
"LABEL_14191",
"LABEL_14192",
"LABEL_14193",
"LABEL_14194",
"LABEL_14195",
"LABEL_14196",
"LABEL_14197",
"LABEL_14198",
"LABEL_14199",
"LABEL_142",
"LABEL_1420",
"LABEL_14200",
"LABEL_14201",
"LABEL_14202",
"LABEL_14203",
"LABEL_14204",
"LABEL_14205",
"LABEL_14206",
"LABEL_14207",
"LABEL_14208",
"LABEL_14209",
"LABEL_1421",
"LABEL_14210",
"LABEL_14211",
"LABEL_14212",
"LABEL_14213",
"LABEL_14214",
"LABEL_14215",
"LABEL_14216",
"LABEL_14217",
"LABEL_14218",
"LABEL_14219",
"LABEL_1422",
"LABEL_14220",
"LABEL_14221",
"LABEL_14222",
"LABEL_14223",
"LABEL_14224",
"LABEL_14225",
"LABEL_14226",
"LABEL_14227",
"LABEL_14228",
"LABEL_14229",
"LABEL_1423",
"LABEL_14230",
"LABEL_14231",
"LABEL_14232",
"LABEL_14233",
"LABEL_14234",
"LABEL_14235",
"LABEL_14236",
"LABEL_14237",
"LABEL_14238",
"LABEL_14239",
"LABEL_1424",
"LABEL_14240",
"LABEL_14241",
"LABEL_14242",
"LABEL_14243",
"LABEL_14244",
"LABEL_14245",
"LABEL_14246",
"LABEL_14247",
"LABEL_14248",
"LABEL_14249",
"LABEL_1425",
"LABEL_14250",
"LABEL_14251",
"LABEL_14252",
"LABEL_14253",
"LABEL_14254",
"LABEL_14255",
"LABEL_14256",
"LABEL_14257",
"LABEL_14258",
"LABEL_14259",
"LABEL_1426",
"LABEL_14260",
"LABEL_14261",
"LABEL_14262",
"LABEL_14263",
"LABEL_14264",
"LABEL_14265",
"LABEL_14266",
"LABEL_14267",
"LABEL_14268",
"LABEL_14269",
"LABEL_1427",
"LABEL_14270",
"LABEL_14271",
"LABEL_14272",
"LABEL_14273",
"LABEL_14274",
"LABEL_14275",
"LABEL_14276",
"LABEL_14277",
"LABEL_14278",
"LABEL_14279",
"LABEL_1428",
"LABEL_14280",
"LABEL_14281",
"LABEL_14282",
"LABEL_14283",
"LABEL_14284",
"LABEL_14285",
"LABEL_14286",
"LABEL_14287",
"LABEL_14288",
"LABEL_14289",
"LABEL_1429",
"LABEL_14290",
"LABEL_14291",
"LABEL_14292",
"LABEL_14293",
"LABEL_14294",
"LABEL_14295",
"LABEL_14296",
"LABEL_14297",
"LABEL_14298",
"LABEL_14299",
"LABEL_143",
"LABEL_1430",
"LABEL_14300",
"LABEL_14301",
"LABEL_14302",
"LABEL_14303",
"LABEL_14304",
"LABEL_14305",
"LABEL_14306",
"LABEL_14307",
"LABEL_14308",
"LABEL_14309",
"LABEL_1431",
"LABEL_14310",
"LABEL_14311",
"LABEL_14312",
"LABEL_14313",
"LABEL_14314",
"LABEL_14315",
"LABEL_14316",
"LABEL_14317",
"LABEL_14318",
"LABEL_14319",
"LABEL_1432",
"LABEL_14320",
"LABEL_14321",
"LABEL_14322",
"LABEL_14323",
"LABEL_14324",
"LABEL_14325",
"LABEL_14326",
"LABEL_14327",
"LABEL_14328",
"LABEL_14329",
"LABEL_1433",
"LABEL_14330",
"LABEL_14331",
"LABEL_14332",
"LABEL_14333",
"LABEL_14334",
"LABEL_14335",
"LABEL_14336",
"LABEL_14337",
"LABEL_14338",
"LABEL_14339",
"LABEL_1434",
"LABEL_14340",
"LABEL_14341",
"LABEL_14342",
"LABEL_14343",
"LABEL_14344",
"LABEL_14345",
"LABEL_14346",
"LABEL_14347",
"LABEL_14348",
"LABEL_14349",
"LABEL_1435",
"LABEL_14350",
"LABEL_14351",
"LABEL_14352",
"LABEL_14353",
"LABEL_14354",
"LABEL_14355",
"LABEL_14356",
"LABEL_14357",
"LABEL_14358",
"LABEL_14359",
"LABEL_1436",
"LABEL_14360",
"LABEL_14361",
"LABEL_14362",
"LABEL_14363",
"LABEL_14364",
"LABEL_14365",
"LABEL_14366",
"LABEL_14367",
"LABEL_14368",
"LABEL_14369",
"LABEL_1437",
"LABEL_14370",
"LABEL_14371",
"LABEL_14372",
"LABEL_14373",
"LABEL_14374",
"LABEL_14375",
"LABEL_14376",
"LABEL_14377",
"LABEL_14378",
"LABEL_14379",
"LABEL_1438",
"LABEL_14380",
"LABEL_14381",
"LABEL_14382",
"LABEL_14383",
"LABEL_14384",
"LABEL_14385",
"LABEL_14386",
"LABEL_14387",
"LABEL_14388",
"LABEL_14389",
"LABEL_1439",
"LABEL_14390",
"LABEL_14391",
"LABEL_14392",
"LABEL_14393",
"LABEL_14394",
"LABEL_14395",
"LABEL_14396",
"LABEL_14397",
"LABEL_14398",
"LABEL_14399",
"LABEL_144",
"LABEL_1440",
"LABEL_14400",
"LABEL_14401",
"LABEL_14402",
"LABEL_14403",
"LABEL_14404",
"LABEL_14405",
"LABEL_14406",
"LABEL_14407",
"LABEL_14408",
"LABEL_14409",
"LABEL_1441",
"LABEL_14410",
"LABEL_14411",
"LABEL_14412",
"LABEL_14413",
"LABEL_14414",
"LABEL_14415",
"LABEL_14416",
"LABEL_14417",
"LABEL_14418",
"LABEL_14419",
"LABEL_1442",
"LABEL_14420",
"LABEL_14421",
"LABEL_14422",
"LABEL_14423",
"LABEL_14424",
"LABEL_14425",
"LABEL_14426",
"LABEL_14427",
"LABEL_14428",
"LABEL_14429",
"LABEL_1443",
"LABEL_14430",
"LABEL_14431",
"LABEL_14432",
"LABEL_14433",
"LABEL_14434",
"LABEL_14435",
"LABEL_14436",
"LABEL_14437",
"LABEL_14438",
"LABEL_14439",
"LABEL_1444",
"LABEL_14440",
"LABEL_14441",
"LABEL_14442",
"LABEL_14443",
"LABEL_14444",
"LABEL_14445",
"LABEL_14446",
"LABEL_14447",
"LABEL_14448",
"LABEL_14449",
"LABEL_1445",
"LABEL_14450",
"LABEL_14451",
"LABEL_14452",
"LABEL_14453",
"LABEL_14454",
"LABEL_14455",
"LABEL_14456",
"LABEL_14457",
"LABEL_14458",
"LABEL_14459",
"LABEL_1446",
"LABEL_14460",
"LABEL_14461",
"LABEL_14462",
"LABEL_14463",
"LABEL_14464",
"LABEL_14465",
"LABEL_14466",
"LABEL_14467",
"LABEL_14468",
"LABEL_14469",
"LABEL_1447",
"LABEL_14470",
"LABEL_14471",
"LABEL_14472",
"LABEL_14473",
"LABEL_14474",
"LABEL_14475",
"LABEL_14476",
"LABEL_14477",
"LABEL_14478",
"LABEL_14479",
"LABEL_1448",
"LABEL_14480",
"LABEL_14481",
"LABEL_14482",
"LABEL_14483",
"LABEL_14484",
"LABEL_14485",
"LABEL_14486",
"LABEL_14487",
"LABEL_14488",
"LABEL_14489",
"LABEL_1449",
"LABEL_14490",
"LABEL_14491",
"LABEL_14492",
"LABEL_14493",
"LABEL_14494",
"LABEL_14495",
"LABEL_14496",
"LABEL_14497",
"LABEL_14498",
"LABEL_14499",
"LABEL_145",
"LABEL_1450",
"LABEL_14500",
"LABEL_14501",
"LABEL_14502",
"LABEL_14503",
"LABEL_14504",
"LABEL_14505",
"LABEL_14506",
"LABEL_14507",
"LABEL_14508",
"LABEL_14509",
"LABEL_1451",
"LABEL_14510",
"LABEL_14511",
"LABEL_14512",
"LABEL_14513",
"LABEL_14514",
"LABEL_14515",
"LABEL_14516",
"LABEL_14517",
"LABEL_14518",
"LABEL_14519",
"LABEL_1452",
"LABEL_14520",
"LABEL_14521",
"LABEL_14522",
"LABEL_14523",
"LABEL_14524",
"LABEL_14525",
"LABEL_14526",
"LABEL_14527",
"LABEL_14528",
"LABEL_14529",
"LABEL_1453",
"LABEL_14530",
"LABEL_14531",
"LABEL_14532",
"LABEL_14533",
"LABEL_14534",
"LABEL_14535",
"LABEL_14536",
"LABEL_14537",
"LABEL_14538",
"LABEL_14539",
"LABEL_1454",
"LABEL_14540",
"LABEL_14541",
"LABEL_14542",
"LABEL_14543",
"LABEL_14544",
"LABEL_14545",
"LABEL_14546",
"LABEL_14547",
"LABEL_14548",
"LABEL_14549",
"LABEL_1455",
"LABEL_14550",
"LABEL_14551",
"LABEL_14552",
"LABEL_14553",
"LABEL_14554",
"LABEL_14555",
"LABEL_14556",
"LABEL_14557",
"LABEL_14558",
"LABEL_14559",
"LABEL_1456",
"LABEL_14560",
"LABEL_14561",
"LABEL_14562",
"LABEL_14563",
"LABEL_14564",
"LABEL_14565",
"LABEL_14566",
"LABEL_14567",
"LABEL_14568",
"LABEL_14569",
"LABEL_1457",
"LABEL_14570",
"LABEL_14571",
"LABEL_14572",
"LABEL_14573",
"LABEL_14574",
"LABEL_14575",
"LABEL_14576",
"LABEL_14577",
"LABEL_14578",
"LABEL_14579",
"LABEL_1458",
"LABEL_14580",
"LABEL_14581",
"LABEL_14582",
"LABEL_14583",
"LABEL_14584",
"LABEL_14585",
"LABEL_14586",
"LABEL_14587",
"LABEL_14588",
"LABEL_14589",
"LABEL_1459",
"LABEL_14590",
"LABEL_14591",
"LABEL_14592",
"LABEL_14593",
"LABEL_14594",
"LABEL_14595",
"LABEL_14596",
"LABEL_14597",
"LABEL_14598",
"LABEL_14599",
"LABEL_146",
"LABEL_1460",
"LABEL_14600",
"LABEL_14601",
"LABEL_14602",
"LABEL_14603",
"LABEL_14604",
"LABEL_14605",
"LABEL_14606",
"LABEL_14607",
"LABEL_14608",
"LABEL_14609",
"LABEL_1461",
"LABEL_14610",
"LABEL_14611",
"LABEL_14612",
"LABEL_14613",
"LABEL_14614",
"LABEL_14615",
"LABEL_14616",
"LABEL_14617",
"LABEL_14618",
"LABEL_14619",
"LABEL_1462",
"LABEL_14620",
"LABEL_14621",
"LABEL_14622",
"LABEL_14623",
"LABEL_14624",
"LABEL_14625",
"LABEL_14626",
"LABEL_14627",
"LABEL_14628",
"LABEL_14629",
"LABEL_1463",
"LABEL_14630",
"LABEL_14631",
"LABEL_14632",
"LABEL_14633",
"LABEL_14634",
"LABEL_14635",
"LABEL_14636",
"LABEL_14637",
"LABEL_14638",
"LABEL_14639",
"LABEL_1464",
"LABEL_14640",
"LABEL_14641",
"LABEL_14642",
"LABEL_14643",
"LABEL_14644",
"LABEL_14645",
"LABEL_14646",
"LABEL_14647",
"LABEL_14648",
"LABEL_14649",
"LABEL_1465",
"LABEL_14650",
"LABEL_14651",
"LABEL_14652",
"LABEL_14653",
"LABEL_14654",
"LABEL_14655",
"LABEL_14656",
"LABEL_14657",
"LABEL_14658",
"LABEL_14659",
"LABEL_1466",
"LABEL_14660",
"LABEL_14661",
"LABEL_14662",
"LABEL_14663",
"LABEL_14664",
"LABEL_14665",
"LABEL_14666",
"LABEL_14667",
"LABEL_14668",
"LABEL_14669",
"LABEL_1467",
"LABEL_14670",
"LABEL_14671",
"LABEL_14672",
"LABEL_14673",
"LABEL_14674",
"LABEL_14675",
"LABEL_14676",
"LABEL_14677",
"LABEL_14678",
"LABEL_14679",
"LABEL_1468",
"LABEL_14680",
"LABEL_14681",
"LABEL_14682",
"LABEL_14683",
"LABEL_14684",
"LABEL_14685",
"LABEL_14686",
"LABEL_14687",
"LABEL_14688",
"LABEL_14689",
"LABEL_1469",
"LABEL_14690",
"LABEL_14691",
"LABEL_14692",
"LABEL_14693",
"LABEL_14694",
"LABEL_14695",
"LABEL_14696",
"LABEL_14697",
"LABEL_14698",
"LABEL_14699",
"LABEL_147",
"LABEL_1470",
"LABEL_14700",
"LABEL_14701",
"LABEL_14702",
"LABEL_14703",
"LABEL_14704",
"LABEL_14705",
"LABEL_14706",
"LABEL_14707",
"LABEL_14708",
"LABEL_14709",
"LABEL_1471",
"LABEL_14710",
"LABEL_14711",
"LABEL_14712",
"LABEL_14713",
"LABEL_14714",
"LABEL_14715",
"LABEL_14716",
"LABEL_14717",
"LABEL_14718",
"LABEL_14719",
"LABEL_1472",
"LABEL_14720",
"LABEL_14721",
"LABEL_14722",
"LABEL_14723",
"LABEL_14724",
"LABEL_14725",
"LABEL_14726",
"LABEL_14727",
"LABEL_14728",
"LABEL_14729",
"LABEL_1473",
"LABEL_14730",
"LABEL_14731",
"LABEL_14732",
"LABEL_14733",
"LABEL_14734",
"LABEL_14735",
"LABEL_14736",
"LABEL_14737",
"LABEL_14738",
"LABEL_14739",
"LABEL_1474",
"LABEL_14740",
"LABEL_14741",
"LABEL_14742",
"LABEL_14743",
"LABEL_14744",
"LABEL_14745",
"LABEL_14746",
"LABEL_14747",
"LABEL_14748",
"LABEL_14749",
"LABEL_1475",
"LABEL_14750",
"LABEL_14751",
"LABEL_14752",
"LABEL_14753",
"LABEL_14754",
"LABEL_14755",
"LABEL_14756",
"LABEL_14757",
"LABEL_14758",
"LABEL_14759",
"LABEL_1476",
"LABEL_14760",
"LABEL_14761",
"LABEL_14762",
"LABEL_14763",
"LABEL_14764",
"LABEL_14765",
"LABEL_14766",
"LABEL_14767",
"LABEL_14768",
"LABEL_14769",
"LABEL_1477",
"LABEL_14770",
"LABEL_14771",
"LABEL_14772",
"LABEL_14773",
"LABEL_14774",
"LABEL_14775",
"LABEL_14776",
"LABEL_14777",
"LABEL_14778",
"LABEL_14779",
"LABEL_1478",
"LABEL_14780",
"LABEL_14781",
"LABEL_14782",
"LABEL_14783",
"LABEL_14784",
"LABEL_14785",
"LABEL_14786",
"LABEL_14787",
"LABEL_14788",
"LABEL_14789",
"LABEL_1479",
"LABEL_14790",
"LABEL_14791",
"LABEL_14792",
"LABEL_14793",
"LABEL_14794",
"LABEL_14795",
"LABEL_14796",
"LABEL_14797",
"LABEL_14798",
"LABEL_14799",
"LABEL_148",
"LABEL_1480",
"LABEL_14800",
"LABEL_14801",
"LABEL_14802",
"LABEL_14803",
"LABEL_14804",
"LABEL_14805",
"LABEL_14806",
"LABEL_14807",
"LABEL_14808",
"LABEL_14809",
"LABEL_1481",
"LABEL_14810",
"LABEL_14811",
"LABEL_14812",
"LABEL_14813",
"LABEL_14814",
"LABEL_14815",
"LABEL_14816",
"LABEL_14817",
"LABEL_14818",
"LABEL_14819",
"LABEL_1482",
"LABEL_14820",
"LABEL_14821",
"LABEL_14822",
"LABEL_14823",
"LABEL_14824",
"LABEL_14825",
"LABEL_14826",
"LABEL_14827",
"LABEL_14828",
"LABEL_14829",
"LABEL_1483",
"LABEL_14830",
"LABEL_14831",
"LABEL_14832",
"LABEL_14833",
"LABEL_14834",
"LABEL_14835",
"LABEL_14836",
"LABEL_14837",
"LABEL_14838",
"LABEL_14839",
"LABEL_1484",
"LABEL_14840",
"LABEL_14841",
"LABEL_14842",
"LABEL_14843",
"LABEL_14844",
"LABEL_14845",
"LABEL_14846",
"LABEL_14847",
"LABEL_14848",
"LABEL_14849",
"LABEL_1485",
"LABEL_14850",
"LABEL_14851",
"LABEL_14852",
"LABEL_14853",
"LABEL_14854",
"LABEL_14855",
"LABEL_14856",
"LABEL_14857",
"LABEL_14858",
"LABEL_14859",
"LABEL_1486",
"LABEL_14860",
"LABEL_14861",
"LABEL_14862",
"LABEL_14863",
"LABEL_14864",
"LABEL_14865",
"LABEL_14866",
"LABEL_14867",
"LABEL_14868",
"LABEL_14869",
"LABEL_1487",
"LABEL_14870",
"LABEL_14871",
"LABEL_14872",
"LABEL_14873",
"LABEL_14874",
"LABEL_14875",
"LABEL_14876",
"LABEL_14877",
"LABEL_14878",
"LABEL_14879",
"LABEL_1488",
"LABEL_14880",
"LABEL_14881",
"LABEL_14882",
"LABEL_14883",
"LABEL_14884",
"LABEL_14885",
"LABEL_14886",
"LABEL_14887",
"LABEL_14888",
"LABEL_14889",
"LABEL_1489",
"LABEL_14890",
"LABEL_14891",
"LABEL_14892",
"LABEL_14893",
"LABEL_14894",
"LABEL_14895",
"LABEL_14896",
"LABEL_14897",
"LABEL_14898",
"LABEL_14899",
"LABEL_149",
"LABEL_1490",
"LABEL_14900",
"LABEL_14901",
"LABEL_14902",
"LABEL_14903",
"LABEL_14904",
"LABEL_14905",
"LABEL_14906",
"LABEL_14907",
"LABEL_14908",
"LABEL_14909",
"LABEL_1491",
"LABEL_14910",
"LABEL_14911",
"LABEL_14912",
"LABEL_14913",
"LABEL_14914",
"LABEL_14915",
"LABEL_14916",
"LABEL_14917",
"LABEL_14918",
"LABEL_14919",
"LABEL_1492",
"LABEL_14920",
"LABEL_14921",
"LABEL_14922",
"LABEL_14923",
"LABEL_14924",
"LABEL_14925",
"LABEL_14926",
"LABEL_14927",
"LABEL_14928",
"LABEL_14929",
"LABEL_1493",
"LABEL_14930",
"LABEL_14931",
"LABEL_14932",
"LABEL_14933",
"LABEL_14934",
"LABEL_14935",
"LABEL_14936",
"LABEL_14937",
"LABEL_14938",
"LABEL_14939",
"LABEL_1494",
"LABEL_14940",
"LABEL_14941",
"LABEL_14942",
"LABEL_14943",
"LABEL_14944",
"LABEL_14945",
"LABEL_14946",
"LABEL_14947",
"LABEL_14948",
"LABEL_14949",
"LABEL_1495",
"LABEL_14950",
"LABEL_14951",
"LABEL_14952",
"LABEL_14953",
"LABEL_14954",
"LABEL_14955",
"LABEL_14956",
"LABEL_14957",
"LABEL_14958",
"LABEL_14959",
"LABEL_1496",
"LABEL_14960",
"LABEL_14961",
"LABEL_14962",
"LABEL_14963",
"LABEL_14964",
"LABEL_14965",
"LABEL_14966",
"LABEL_14967",
"LABEL_14968",
"LABEL_14969",
"LABEL_1497",
"LABEL_14970",
"LABEL_14971",
"LABEL_14972",
"LABEL_14973",
"LABEL_14974",
"LABEL_14975",
"LABEL_14976",
"LABEL_14977",
"LABEL_14978",
"LABEL_14979",
"LABEL_1498",
"LABEL_14980",
"LABEL_14981",
"LABEL_14982",
"LABEL_14983",
"LABEL_14984",
"LABEL_14985",
"LABEL_14986",
"LABEL_14987",
"LABEL_14988",
"LABEL_14989",
"LABEL_1499",
"LABEL_14990",
"LABEL_14991",
"LABEL_14992",
"LABEL_14993",
"LABEL_14994",
"LABEL_14995",
"LABEL_14996",
"LABEL_14997",
"LABEL_14998",
"LABEL_14999",
"LABEL_15",
"LABEL_150",
"LABEL_1500",
"LABEL_15000",
"LABEL_15001",
"LABEL_15002",
"LABEL_15003",
"LABEL_15004",
"LABEL_15005",
"LABEL_15006",
"LABEL_15007",
"LABEL_15008",
"LABEL_15009",
"LABEL_1501",
"LABEL_15010",
"LABEL_15011",
"LABEL_15012",
"LABEL_15013",
"LABEL_15014",
"LABEL_15015",
"LABEL_15016",
"LABEL_15017",
"LABEL_15018",
"LABEL_15019",
"LABEL_1502",
"LABEL_15020",
"LABEL_15021",
"LABEL_15022",
"LABEL_15023",
"LABEL_15024",
"LABEL_15025",
"LABEL_15026",
"LABEL_15027",
"LABEL_15028",
"LABEL_15029",
"LABEL_1503",
"LABEL_15030",
"LABEL_15031",
"LABEL_15032",
"LABEL_15033",
"LABEL_15034",
"LABEL_15035",
"LABEL_15036",
"LABEL_15037",
"LABEL_15038",
"LABEL_15039",
"LABEL_1504",
"LABEL_15040",
"LABEL_15041",
"LABEL_15042",
"LABEL_15043",
"LABEL_15044",
"LABEL_15045",
"LABEL_15046",
"LABEL_15047",
"LABEL_15048",
"LABEL_15049",
"LABEL_1505",
"LABEL_15050",
"LABEL_15051",
"LABEL_15052",
"LABEL_15053",
"LABEL_15054",
"LABEL_15055",
"LABEL_15056",
"LABEL_15057",
"LABEL_15058",
"LABEL_15059",
"LABEL_1506",
"LABEL_15060",
"LABEL_15061",
"LABEL_15062",
"LABEL_15063",
"LABEL_15064",
"LABEL_15065",
"LABEL_15066",
"LABEL_15067",
"LABEL_15068",
"LABEL_15069",
"LABEL_1507",
"LABEL_15070",
"LABEL_15071",
"LABEL_15072",
"LABEL_15073",
"LABEL_15074",
"LABEL_15075",
"LABEL_15076",
"LABEL_15077",
"LABEL_15078",
"LABEL_15079",
"LABEL_1508",
"LABEL_15080",
"LABEL_15081",
"LABEL_15082",
"LABEL_15083",
"LABEL_15084",
"LABEL_15085",
"LABEL_15086",
"LABEL_15087",
"LABEL_15088",
"LABEL_15089",
"LABEL_1509",
"LABEL_15090",
"LABEL_15091",
"LABEL_15092",
"LABEL_15093",
"LABEL_15094",
"LABEL_15095",
"LABEL_15096",
"LABEL_15097",
"LABEL_15098",
"LABEL_15099",
"LABEL_151",
"LABEL_1510",
"LABEL_15100",
"LABEL_15101",
"LABEL_15102",
"LABEL_15103",
"LABEL_15104",
"LABEL_15105",
"LABEL_15106",
"LABEL_15107",
"LABEL_15108",
"LABEL_15109",
"LABEL_1511",
"LABEL_15110",
"LABEL_15111",
"LABEL_15112",
"LABEL_15113",
"LABEL_15114",
"LABEL_15115",
"LABEL_15116",
"LABEL_15117",
"LABEL_15118",
"LABEL_15119",
"LABEL_1512",
"LABEL_15120",
"LABEL_15121",
"LABEL_15122",
"LABEL_15123",
"LABEL_15124",
"LABEL_15125",
"LABEL_15126",
"LABEL_15127",
"LABEL_15128",
"LABEL_15129",
"LABEL_1513",
"LABEL_15130",
"LABEL_15131",
"LABEL_15132",
"LABEL_15133",
"LABEL_15134",
"LABEL_15135",
"LABEL_15136",
"LABEL_15137",
"LABEL_15138",
"LABEL_15139",
"LABEL_1514",
"LABEL_15140",
"LABEL_15141",
"LABEL_15142",
"LABEL_15143",
"LABEL_15144",
"LABEL_15145",
"LABEL_15146",
"LABEL_15147",
"LABEL_15148",
"LABEL_15149",
"LABEL_1515",
"LABEL_15150",
"LABEL_15151",
"LABEL_15152",
"LABEL_15153",
"LABEL_15154",
"LABEL_15155",
"LABEL_15156",
"LABEL_15157",
"LABEL_15158",
"LABEL_15159",
"LABEL_1516",
"LABEL_15160",
"LABEL_15161",
"LABEL_15162",
"LABEL_15163",
"LABEL_15164",
"LABEL_15165",
"LABEL_15166",
"LABEL_15167",
"LABEL_15168",
"LABEL_15169",
"LABEL_1517",
"LABEL_15170",
"LABEL_15171",
"LABEL_15172",
"LABEL_15173",
"LABEL_15174",
"LABEL_15175",
"LABEL_15176",
"LABEL_15177",
"LABEL_15178",
"LABEL_15179",
"LABEL_1518",
"LABEL_15180",
"LABEL_15181",
"LABEL_15182",
"LABEL_15183",
"LABEL_15184",
"LABEL_15185",
"LABEL_15186",
"LABEL_15187",
"LABEL_15188",
"LABEL_15189",
"LABEL_1519",
"LABEL_15190",
"LABEL_15191",
"LABEL_15192",
"LABEL_15193",
"LABEL_15194",
"LABEL_15195",
"LABEL_15196",
"LABEL_15197",
"LABEL_15198",
"LABEL_15199",
"LABEL_152",
"LABEL_1520",
"LABEL_15200",
"LABEL_15201",
"LABEL_15202",
"LABEL_15203",
"LABEL_15204",
"LABEL_15205",
"LABEL_15206",
"LABEL_15207",
"LABEL_15208",
"LABEL_15209",
"LABEL_1521",
"LABEL_15210",
"LABEL_15211",
"LABEL_15212",
"LABEL_15213",
"LABEL_15214",
"LABEL_15215",
"LABEL_15216",
"LABEL_15217",
"LABEL_15218",
"LABEL_15219",
"LABEL_1522",
"LABEL_15220",
"LABEL_15221",
"LABEL_15222",
"LABEL_15223",
"LABEL_15224",
"LABEL_15225",
"LABEL_15226",
"LABEL_15227",
"LABEL_15228",
"LABEL_15229",
"LABEL_1523",
"LABEL_15230",
"LABEL_15231",
"LABEL_15232",
"LABEL_15233",
"LABEL_15234",
"LABEL_15235",
"LABEL_15236",
"LABEL_15237",
"LABEL_15238",
"LABEL_15239",
"LABEL_1524",
"LABEL_15240",
"LABEL_15241",
"LABEL_15242",
"LABEL_15243",
"LABEL_15244",
"LABEL_15245",
"LABEL_15246",
"LABEL_15247",
"LABEL_15248",
"LABEL_15249",
"LABEL_1525",
"LABEL_15250",
"LABEL_15251",
"LABEL_15252",
"LABEL_15253",
"LABEL_15254",
"LABEL_15255",
"LABEL_15256",
"LABEL_15257",
"LABEL_15258",
"LABEL_15259",
"LABEL_1526",
"LABEL_15260",
"LABEL_15261",
"LABEL_15262",
"LABEL_15263",
"LABEL_15264",
"LABEL_15265",
"LABEL_15266",
"LABEL_15267",
"LABEL_15268",
"LABEL_15269",
"LABEL_1527",
"LABEL_15270",
"LABEL_15271",
"LABEL_15272",
"LABEL_15273",
"LABEL_15274",
"LABEL_15275",
"LABEL_15276",
"LABEL_15277",
"LABEL_15278",
"LABEL_15279",
"LABEL_1528",
"LABEL_15280",
"LABEL_15281",
"LABEL_15282",
"LABEL_15283",
"LABEL_15284",
"LABEL_15285",
"LABEL_15286",
"LABEL_15287",
"LABEL_15288",
"LABEL_15289",
"LABEL_1529",
"LABEL_15290",
"LABEL_15291",
"LABEL_15292",
"LABEL_15293",
"LABEL_15294",
"LABEL_15295",
"LABEL_15296",
"LABEL_15297",
"LABEL_15298",
"LABEL_15299",
"LABEL_153",
"LABEL_1530",
"LABEL_15300",
"LABEL_15301",
"LABEL_15302",
"LABEL_15303",
"LABEL_15304",
"LABEL_15305",
"LABEL_15306",
"LABEL_15307",
"LABEL_15308",
"LABEL_15309",
"LABEL_1531",
"LABEL_15310",
"LABEL_15311",
"LABEL_15312",
"LABEL_15313",
"LABEL_15314",
"LABEL_15315",
"LABEL_15316",
"LABEL_15317",
"LABEL_15318",
"LABEL_15319",
"LABEL_1532",
"LABEL_15320",
"LABEL_15321",
"LABEL_15322",
"LABEL_15323",
"LABEL_15324",
"LABEL_15325",
"LABEL_15326",
"LABEL_15327",
"LABEL_15328",
"LABEL_15329",
"LABEL_1533",
"LABEL_15330",
"LABEL_15331",
"LABEL_15332",
"LABEL_15333",
"LABEL_15334",
"LABEL_15335",
"LABEL_15336",
"LABEL_15337",
"LABEL_15338",
"LABEL_15339",
"LABEL_1534",
"LABEL_15340",
"LABEL_15341",
"LABEL_15342",
"LABEL_15343",
"LABEL_15344",
"LABEL_15345",
"LABEL_15346",
"LABEL_15347",
"LABEL_15348",
"LABEL_15349",
"LABEL_1535",
"LABEL_15350",
"LABEL_15351",
"LABEL_15352",
"LABEL_15353",
"LABEL_15354",
"LABEL_15355",
"LABEL_15356",
"LABEL_15357",
"LABEL_15358",
"LABEL_15359",
"LABEL_1536",
"LABEL_15360",
"LABEL_15361",
"LABEL_15362",
"LABEL_15363",
"LABEL_15364",
"LABEL_15365",
"LABEL_15366",
"LABEL_15367",
"LABEL_15368",
"LABEL_15369",
"LABEL_1537",
"LABEL_15370",
"LABEL_15371",
"LABEL_15372",
"LABEL_15373",
"LABEL_15374",
"LABEL_15375",
"LABEL_15376",
"LABEL_15377",
"LABEL_15378",
"LABEL_15379",
"LABEL_1538",
"LABEL_15380",
"LABEL_15381",
"LABEL_15382",
"LABEL_15383",
"LABEL_15384",
"LABEL_15385",
"LABEL_15386",
"LABEL_15387",
"LABEL_15388",
"LABEL_15389",
"LABEL_1539",
"LABEL_15390",
"LABEL_15391",
"LABEL_15392",
"LABEL_15393",
"LABEL_15394",
"LABEL_15395",
"LABEL_15396",
"LABEL_15397",
"LABEL_15398",
"LABEL_15399",
"LABEL_154",
"LABEL_1540",
"LABEL_15400",
"LABEL_15401",
"LABEL_15402",
"LABEL_15403",
"LABEL_15404",
"LABEL_15405",
"LABEL_15406",
"LABEL_15407",
"LABEL_15408",
"LABEL_15409",
"LABEL_1541",
"LABEL_15410",
"LABEL_15411",
"LABEL_15412",
"LABEL_15413",
"LABEL_15414",
"LABEL_15415",
"LABEL_15416",
"LABEL_15417",
"LABEL_15418",
"LABEL_15419",
"LABEL_1542",
"LABEL_15420",
"LABEL_15421",
"LABEL_15422",
"LABEL_15423",
"LABEL_15424",
"LABEL_15425",
"LABEL_15426",
"LABEL_15427",
"LABEL_15428",
"LABEL_15429",
"LABEL_1543",
"LABEL_15430",
"LABEL_15431",
"LABEL_15432",
"LABEL_15433",
"LABEL_15434",
"LABEL_15435",
"LABEL_15436",
"LABEL_15437",
"LABEL_15438",
"LABEL_15439",
"LABEL_1544",
"LABEL_15440",
"LABEL_15441",
"LABEL_15442",
"LABEL_15443",
"LABEL_15444",
"LABEL_15445",
"LABEL_15446",
"LABEL_15447",
"LABEL_15448",
"LABEL_15449",
"LABEL_1545",
"LABEL_15450",
"LABEL_15451",
"LABEL_15452",
"LABEL_15453",
"LABEL_15454",
"LABEL_15455",
"LABEL_15456",
"LABEL_15457",
"LABEL_15458",
"LABEL_15459",
"LABEL_1546",
"LABEL_15460",
"LABEL_15461",
"LABEL_15462",
"LABEL_15463",
"LABEL_15464",
"LABEL_15465",
"LABEL_15466",
"LABEL_15467",
"LABEL_15468",
"LABEL_15469",
"LABEL_1547",
"LABEL_15470",
"LABEL_15471",
"LABEL_15472",
"LABEL_15473",
"LABEL_15474",
"LABEL_15475",
"LABEL_15476",
"LABEL_15477",
"LABEL_15478",
"LABEL_15479",
"LABEL_1548",
"LABEL_15480",
"LABEL_15481",
"LABEL_15482",
"LABEL_15483",
"LABEL_15484",
"LABEL_15485",
"LABEL_15486",
"LABEL_15487",
"LABEL_15488",
"LABEL_15489",
"LABEL_1549",
"LABEL_15490",
"LABEL_15491",
"LABEL_15492",
"LABEL_15493",
"LABEL_15494",
"LABEL_15495",
"LABEL_15496",
"LABEL_15497",
"LABEL_15498",
"LABEL_15499",
"LABEL_155",
"LABEL_1550",
"LABEL_15500",
"LABEL_15501",
"LABEL_15502",
"LABEL_15503",
"LABEL_15504",
"LABEL_15505",
"LABEL_15506",
"LABEL_15507",
"LABEL_15508",
"LABEL_15509",
"LABEL_1551",
"LABEL_15510",
"LABEL_15511",
"LABEL_15512",
"LABEL_15513",
"LABEL_15514",
"LABEL_15515",
"LABEL_15516",
"LABEL_15517",
"LABEL_15518",
"LABEL_15519",
"LABEL_1552",
"LABEL_15520",
"LABEL_15521",
"LABEL_15522",
"LABEL_15523",
"LABEL_15524",
"LABEL_15525",
"LABEL_15526",
"LABEL_15527",
"LABEL_15528",
"LABEL_15529",
"LABEL_1553",
"LABEL_15530",
"LABEL_15531",
"LABEL_15532",
"LABEL_15533",
"LABEL_15534",
"LABEL_15535",
"LABEL_15536",
"LABEL_15537",
"LABEL_15538",
"LABEL_15539",
"LABEL_1554",
"LABEL_15540",
"LABEL_15541",
"LABEL_15542",
"LABEL_15543",
"LABEL_15544",
"LABEL_15545",
"LABEL_15546",
"LABEL_15547",
"LABEL_15548",
"LABEL_15549",
"LABEL_1555",
"LABEL_15550",
"LABEL_15551",
"LABEL_15552",
"LABEL_15553",
"LABEL_15554",
"LABEL_15555",
"LABEL_15556",
"LABEL_15557",
"LABEL_15558",
"LABEL_15559",
"LABEL_1556",
"LABEL_15560",
"LABEL_15561",
"LABEL_15562",
"LABEL_15563",
"LABEL_15564",
"LABEL_15565",
"LABEL_15566",
"LABEL_15567",
"LABEL_15568",
"LABEL_15569",
"LABEL_1557",
"LABEL_15570",
"LABEL_15571",
"LABEL_15572",
"LABEL_15573",
"LABEL_15574",
"LABEL_15575",
"LABEL_15576",
"LABEL_15577",
"LABEL_15578",
"LABEL_15579",
"LABEL_1558",
"LABEL_15580",
"LABEL_15581",
"LABEL_15582",
"LABEL_15583",
"LABEL_15584",
"LABEL_15585",
"LABEL_15586",
"LABEL_15587",
"LABEL_15588",
"LABEL_15589",
"LABEL_1559",
"LABEL_15590",
"LABEL_15591",
"LABEL_15592",
"LABEL_15593",
"LABEL_15594",
"LABEL_15595",
"LABEL_15596",
"LABEL_15597",
"LABEL_15598",
"LABEL_15599",
"LABEL_156",
"LABEL_1560",
"LABEL_15600",
"LABEL_15601",
"LABEL_15602",
"LABEL_15603",
"LABEL_15604",
"LABEL_15605",
"LABEL_15606",
"LABEL_15607",
"LABEL_15608",
"LABEL_15609",
"LABEL_1561",
"LABEL_15610",
"LABEL_15611",
"LABEL_15612",
"LABEL_15613",
"LABEL_15614",
"LABEL_15615",
"LABEL_15616",
"LABEL_15617",
"LABEL_15618",
"LABEL_15619",
"LABEL_1562",
"LABEL_15620",
"LABEL_15621",
"LABEL_15622",
"LABEL_15623",
"LABEL_15624",
"LABEL_15625",
"LABEL_15626",
"LABEL_15627",
"LABEL_15628",
"LABEL_15629",
"LABEL_1563",
"LABEL_15630",
"LABEL_15631",
"LABEL_15632",
"LABEL_15633",
"LABEL_15634",
"LABEL_15635",
"LABEL_15636",
"LABEL_15637",
"LABEL_15638",
"LABEL_15639",
"LABEL_1564",
"LABEL_15640",
"LABEL_15641",
"LABEL_15642",
"LABEL_15643",
"LABEL_15644",
"LABEL_15645",
"LABEL_15646",
"LABEL_15647",
"LABEL_15648",
"LABEL_15649",
"LABEL_1565",
"LABEL_15650",
"LABEL_15651",
"LABEL_15652",
"LABEL_15653",
"LABEL_15654",
"LABEL_15655",
"LABEL_15656",
"LABEL_15657",
"LABEL_15658",
"LABEL_15659",
"LABEL_1566",
"LABEL_15660",
"LABEL_15661",
"LABEL_15662",
"LABEL_15663",
"LABEL_15664",
"LABEL_15665",
"LABEL_15666",
"LABEL_15667",
"LABEL_15668",
"LABEL_15669",
"LABEL_1567",
"LABEL_15670",
"LABEL_15671",
"LABEL_15672",
"LABEL_15673",
"LABEL_15674",
"LABEL_15675",
"LABEL_15676",
"LABEL_15677",
"LABEL_15678",
"LABEL_15679",
"LABEL_1568",
"LABEL_15680",
"LABEL_15681",
"LABEL_15682",
"LABEL_15683",
"LABEL_15684",
"LABEL_15685",
"LABEL_15686",
"LABEL_15687",
"LABEL_15688",
"LABEL_15689",
"LABEL_1569",
"LABEL_15690",
"LABEL_15691",
"LABEL_15692",
"LABEL_15693",
"LABEL_15694",
"LABEL_15695",
"LABEL_15696",
"LABEL_15697",
"LABEL_15698",
"LABEL_15699",
"LABEL_157",
"LABEL_1570",
"LABEL_15700",
"LABEL_15701",
"LABEL_15702",
"LABEL_15703",
"LABEL_15704",
"LABEL_15705",
"LABEL_15706",
"LABEL_15707",
"LABEL_15708",
"LABEL_15709",
"LABEL_1571",
"LABEL_15710",
"LABEL_15711",
"LABEL_15712",
"LABEL_15713",
"LABEL_15714",
"LABEL_15715",
"LABEL_15716",
"LABEL_15717",
"LABEL_15718",
"LABEL_15719",
"LABEL_1572",
"LABEL_15720",
"LABEL_15721",
"LABEL_15722",
"LABEL_15723",
"LABEL_15724",
"LABEL_15725",
"LABEL_15726",
"LABEL_15727",
"LABEL_15728",
"LABEL_15729",
"LABEL_1573",
"LABEL_15730",
"LABEL_15731",
"LABEL_15732",
"LABEL_15733",
"LABEL_15734",
"LABEL_15735",
"LABEL_15736",
"LABEL_15737",
"LABEL_15738",
"LABEL_15739",
"LABEL_1574",
"LABEL_15740",
"LABEL_15741",
"LABEL_15742",
"LABEL_15743",
"LABEL_15744",
"LABEL_15745",
"LABEL_15746",
"LABEL_15747",
"LABEL_15748",
"LABEL_15749",
"LABEL_1575",
"LABEL_15750",
"LABEL_15751",
"LABEL_15752",
"LABEL_15753",
"LABEL_15754",
"LABEL_15755",
"LABEL_15756",
"LABEL_15757",
"LABEL_15758",
"LABEL_15759",
"LABEL_1576",
"LABEL_15760",
"LABEL_15761",
"LABEL_15762",
"LABEL_15763",
"LABEL_15764",
"LABEL_15765",
"LABEL_15766",
"LABEL_15767",
"LABEL_15768",
"LABEL_15769",
"LABEL_1577",
"LABEL_15770",
"LABEL_15771",
"LABEL_15772",
"LABEL_15773",
"LABEL_15774",
"LABEL_15775",
"LABEL_15776",
"LABEL_15777",
"LABEL_15778",
"LABEL_15779",
"LABEL_1578",
"LABEL_15780",
"LABEL_15781",
"LABEL_15782",
"LABEL_15783",
"LABEL_15784",
"LABEL_15785",
"LABEL_15786",
"LABEL_15787",
"LABEL_15788",
"LABEL_15789",
"LABEL_1579",
"LABEL_15790",
"LABEL_15791",
"LABEL_15792",
"LABEL_15793",
"LABEL_15794",
"LABEL_15795",
"LABEL_15796",
"LABEL_15797",
"LABEL_15798",
"LABEL_15799",
"LABEL_158",
"LABEL_1580",
"LABEL_15800",
"LABEL_15801",
"LABEL_15802",
"LABEL_15803",
"LABEL_15804",
"LABEL_15805",
"LABEL_15806",
"LABEL_15807",
"LABEL_15808",
"LABEL_15809",
"LABEL_1581",
"LABEL_15810",
"LABEL_15811",
"LABEL_15812",
"LABEL_15813",
"LABEL_15814",
"LABEL_15815",
"LABEL_15816",
"LABEL_15817",
"LABEL_15818",
"LABEL_15819",
"LABEL_1582",
"LABEL_15820",
"LABEL_15821",
"LABEL_15822",
"LABEL_15823",
"LABEL_15824",
"LABEL_15825",
"LABEL_15826",
"LABEL_15827",
"LABEL_15828",
"LABEL_15829",
"LABEL_1583",
"LABEL_15830",
"LABEL_15831",
"LABEL_15832",
"LABEL_15833",
"LABEL_15834",
"LABEL_15835",
"LABEL_15836",
"LABEL_15837",
"LABEL_15838",
"LABEL_15839",
"LABEL_1584",
"LABEL_15840",
"LABEL_15841",
"LABEL_15842",
"LABEL_15843",
"LABEL_15844",
"LABEL_15845",
"LABEL_15846",
"LABEL_15847",
"LABEL_15848",
"LABEL_15849",
"LABEL_1585",
"LABEL_15850",
"LABEL_15851",
"LABEL_15852",
"LABEL_15853",
"LABEL_15854",
"LABEL_15855",
"LABEL_15856",
"LABEL_15857",
"LABEL_15858",
"LABEL_15859",
"LABEL_1586",
"LABEL_15860",
"LABEL_15861",
"LABEL_15862",
"LABEL_15863",
"LABEL_15864",
"LABEL_15865",
"LABEL_15866",
"LABEL_15867",
"LABEL_15868",
"LABEL_15869",
"LABEL_1587",
"LABEL_15870",
"LABEL_15871",
"LABEL_15872",
"LABEL_15873",
"LABEL_15874",
"LABEL_15875",
"LABEL_15876",
"LABEL_15877",
"LABEL_15878",
"LABEL_15879",
"LABEL_1588",
"LABEL_15880",
"LABEL_15881",
"LABEL_15882",
"LABEL_15883",
"LABEL_15884",
"LABEL_15885",
"LABEL_15886",
"LABEL_15887",
"LABEL_15888",
"LABEL_15889",
"LABEL_1589",
"LABEL_15890",
"LABEL_15891",
"LABEL_15892",
"LABEL_15893",
"LABEL_15894",
"LABEL_15895",
"LABEL_15896",
"LABEL_15897",
"LABEL_15898",
"LABEL_15899",
"LABEL_159",
"LABEL_1590",
"LABEL_15900",
"LABEL_15901",
"LABEL_15902",
"LABEL_15903",
"LABEL_15904",
"LABEL_15905",
"LABEL_15906",
"LABEL_15907",
"LABEL_15908",
"LABEL_15909",
"LABEL_1591",
"LABEL_15910",
"LABEL_15911",
"LABEL_15912",
"LABEL_15913",
"LABEL_15914",
"LABEL_15915",
"LABEL_15916",
"LABEL_15917",
"LABEL_15918",
"LABEL_15919",
"LABEL_1592",
"LABEL_15920",
"LABEL_15921",
"LABEL_15922",
"LABEL_15923",
"LABEL_15924",
"LABEL_15925",
"LABEL_15926",
"LABEL_15927",
"LABEL_15928",
"LABEL_15929",
"LABEL_1593",
"LABEL_15930",
"LABEL_1594",
"LABEL_1595",
"LABEL_1596",
"LABEL_1597",
"LABEL_1598",
"LABEL_1599",
"LABEL_16",
"LABEL_160",
"LABEL_1600",
"LABEL_1601",
"LABEL_1602",
"LABEL_1603",
"LABEL_1604",
"LABEL_1605",
"LABEL_1606",
"LABEL_1607",
"LABEL_1608",
"LABEL_1609",
"LABEL_161",
"LABEL_1610",
"LABEL_1611",
"LABEL_1612",
"LABEL_1613",
"LABEL_1614",
"LABEL_1615",
"LABEL_1616",
"LABEL_1617",
"LABEL_1618",
"LABEL_1619",
"LABEL_162",
"LABEL_1620",
"LABEL_1621",
"LABEL_1622",
"LABEL_1623",
"LABEL_1624",
"LABEL_1625",
"LABEL_1626",
"LABEL_1627",
"LABEL_1628",
"LABEL_1629",
"LABEL_163",
"LABEL_1630",
"LABEL_1631",
"LABEL_1632",
"LABEL_1633",
"LABEL_1634",
"LABEL_1635",
"LABEL_1636",
"LABEL_1637",
"LABEL_1638",
"LABEL_1639",
"LABEL_164",
"LABEL_1640",
"LABEL_1641",
"LABEL_1642",
"LABEL_1643",
"LABEL_1644",
"LABEL_1645",
"LABEL_1646",
"LABEL_1647",
"LABEL_1648",
"LABEL_1649",
"LABEL_165",
"LABEL_1650",
"LABEL_1651",
"LABEL_1652",
"LABEL_1653",
"LABEL_1654",
"LABEL_1655",
"LABEL_1656",
"LABEL_1657",
"LABEL_1658",
"LABEL_1659",
"LABEL_166",
"LABEL_1660",
"LABEL_1661",
"LABEL_1662",
"LABEL_1663",
"LABEL_1664",
"LABEL_1665",
"LABEL_1666",
"LABEL_1667",
"LABEL_1668",
"LABEL_1669",
"LABEL_167",
"LABEL_1670",
"LABEL_1671",
"LABEL_1672",
"LABEL_1673",
"LABEL_1674",
"LABEL_1675",
"LABEL_1676",
"LABEL_1677",
"LABEL_1678",
"LABEL_1679",
"LABEL_168",
"LABEL_1680",
"LABEL_1681",
"LABEL_1682",
"LABEL_1683",
"LABEL_1684",
"LABEL_1685",
"LABEL_1686",
"LABEL_1687",
"LABEL_1688",
"LABEL_1689",
"LABEL_169",
"LABEL_1690",
"LABEL_1691",
"LABEL_1692",
"LABEL_1693",
"LABEL_1694",
"LABEL_1695",
"LABEL_1696",
"LABEL_1697",
"LABEL_1698",
"LABEL_1699",
"LABEL_17",
"LABEL_170",
"LABEL_1700",
"LABEL_1701",
"LABEL_1702",
"LABEL_1703",
"LABEL_1704",
"LABEL_1705",
"LABEL_1706",
"LABEL_1707",
"LABEL_1708",
"LABEL_1709",
"LABEL_171",
"LABEL_1710",
"LABEL_1711",
"LABEL_1712",
"LABEL_1713",
"LABEL_1714",
"LABEL_1715",
"LABEL_1716",
"LABEL_1717",
"LABEL_1718",
"LABEL_1719",
"LABEL_172",
"LABEL_1720",
"LABEL_1721",
"LABEL_1722",
"LABEL_1723",
"LABEL_1724",
"LABEL_1725",
"LABEL_1726",
"LABEL_1727",
"LABEL_1728",
"LABEL_1729",
"LABEL_173",
"LABEL_1730",
"LABEL_1731",
"LABEL_1732",
"LABEL_1733",
"LABEL_1734",
"LABEL_1735",
"LABEL_1736",
"LABEL_1737",
"LABEL_1738",
"LABEL_1739",
"LABEL_174",
"LABEL_1740",
"LABEL_1741",
"LABEL_1742",
"LABEL_1743",
"LABEL_1744",
"LABEL_1745",
"LABEL_1746",
"LABEL_1747",
"LABEL_1748",
"LABEL_1749",
"LABEL_175",
"LABEL_1750",
"LABEL_1751",
"LABEL_1752",
"LABEL_1753",
"LABEL_1754",
"LABEL_1755",
"LABEL_1756",
"LABEL_1757",
"LABEL_1758",
"LABEL_1759",
"LABEL_176",
"LABEL_1760",
"LABEL_1761",
"LABEL_1762",
"LABEL_1763",
"LABEL_1764",
"LABEL_1765",
"LABEL_1766",
"LABEL_1767",
"LABEL_1768",
"LABEL_1769",
"LABEL_177",
"LABEL_1770",
"LABEL_1771",
"LABEL_1772",
"LABEL_1773",
"LABEL_1774",
"LABEL_1775",
"LABEL_1776",
"LABEL_1777",
"LABEL_1778",
"LABEL_1779",
"LABEL_178",
"LABEL_1780",
"LABEL_1781",
"LABEL_1782",
"LABEL_1783",
"LABEL_1784",
"LABEL_1785",
"LABEL_1786",
"LABEL_1787",
"LABEL_1788",
"LABEL_1789",
"LABEL_179",
"LABEL_1790",
"LABEL_1791",
"LABEL_1792",
"LABEL_1793",
"LABEL_1794",
"LABEL_1795",
"LABEL_1796",
"LABEL_1797",
"LABEL_1798",
"LABEL_1799",
"LABEL_18",
"LABEL_180",
"LABEL_1800",
"LABEL_1801",
"LABEL_1802",
"LABEL_1803",
"LABEL_1804",
"LABEL_1805",
"LABEL_1806",
"LABEL_1807",
"LABEL_1808",
"LABEL_1809",
"LABEL_181",
"LABEL_1810",
"LABEL_1811",
"LABEL_1812",
"LABEL_1813",
"LABEL_1814",
"LABEL_1815",
"LABEL_1816",
"LABEL_1817",
"LABEL_1818",
"LABEL_1819",
"LABEL_182",
"LABEL_1820",
"LABEL_1821",
"LABEL_1822",
"LABEL_1823",
"LABEL_1824",
"LABEL_1825",
"LABEL_1826",
"LABEL_1827",
"LABEL_1828",
"LABEL_1829",
"LABEL_183",
"LABEL_1830",
"LABEL_1831",
"LABEL_1832",
"LABEL_1833",
"LABEL_1834",
"LABEL_1835",
"LABEL_1836",
"LABEL_1837",
"LABEL_1838",
"LABEL_1839",
"LABEL_184",
"LABEL_1840",
"LABEL_1841",
"LABEL_1842",
"LABEL_1843",
"LABEL_1844",
"LABEL_1845",
"LABEL_1846",
"LABEL_1847",
"LABEL_1848",
"LABEL_1849",
"LABEL_185",
"LABEL_1850",
"LABEL_1851",
"LABEL_1852",
"LABEL_1853",
"LABEL_1854",
"LABEL_1855",
"LABEL_1856",
"LABEL_1857",
"LABEL_1858",
"LABEL_1859",
"LABEL_186",
"LABEL_1860",
"LABEL_1861",
"LABEL_1862",
"LABEL_1863",
"LABEL_1864",
"LABEL_1865",
"LABEL_1866",
"LABEL_1867",
"LABEL_1868",
"LABEL_1869",
"LABEL_187",
"LABEL_1870",
"LABEL_1871",
"LABEL_1872",
"LABEL_1873",
"LABEL_1874",
"LABEL_1875",
"LABEL_1876",
"LABEL_1877",
"LABEL_1878",
"LABEL_1879",
"LABEL_188",
"LABEL_1880",
"LABEL_1881",
"LABEL_1882",
"LABEL_1883",
"LABEL_1884",
"LABEL_1885",
"LABEL_1886",
"LABEL_1887",
"LABEL_1888",
"LABEL_1889",
"LABEL_189",
"LABEL_1890",
"LABEL_1891",
"LABEL_1892",
"LABEL_1893",
"LABEL_1894",
"LABEL_1895",
"LABEL_1896",
"LABEL_1897",
"LABEL_1898",
"LABEL_1899",
"LABEL_19",
"LABEL_190",
"LABEL_1900",
"LABEL_1901",
"LABEL_1902",
"LABEL_1903",
"LABEL_1904",
"LABEL_1905",
"LABEL_1906",
"LABEL_1907",
"LABEL_1908",
"LABEL_1909",
"LABEL_191",
"LABEL_1910",
"LABEL_1911",
"LABEL_1912",
"LABEL_1913",
"LABEL_1914",
"LABEL_1915",
"LABEL_1916",
"LABEL_1917",
"LABEL_1918",
"LABEL_1919",
"LABEL_192",
"LABEL_1920",
"LABEL_1921",
"LABEL_1922",
"LABEL_1923",
"LABEL_1924",
"LABEL_1925",
"LABEL_1926",
"LABEL_1927",
"LABEL_1928",
"LABEL_1929",
"LABEL_193",
"LABEL_1930",
"LABEL_1931",
"LABEL_1932",
"LABEL_1933",
"LABEL_1934",
"LABEL_1935",
"LABEL_1936",
"LABEL_1937",
"LABEL_1938",
"LABEL_1939",
"LABEL_194",
"LABEL_1940",
"LABEL_1941",
"LABEL_1942",
"LABEL_1943",
"LABEL_1944",
"LABEL_1945",
"LABEL_1946",
"LABEL_1947",
"LABEL_1948",
"LABEL_1949",
"LABEL_195",
"LABEL_1950",
"LABEL_1951",
"LABEL_1952",
"LABEL_1953",
"LABEL_1954",
"LABEL_1955",
"LABEL_1956",
"LABEL_1957",
"LABEL_1958",
"LABEL_1959",
"LABEL_196",
"LABEL_1960",
"LABEL_1961",
"LABEL_1962",
"LABEL_1963",
"LABEL_1964",
"LABEL_1965",
"LABEL_1966",
"LABEL_1967",
"LABEL_1968",
"LABEL_1969",
"LABEL_197",
"LABEL_1970",
"LABEL_1971",
"LABEL_1972",
"LABEL_1973",
"LABEL_1974",
"LABEL_1975",
"LABEL_1976",
"LABEL_1977",
"LABEL_1978",
"LABEL_1979",
"LABEL_198",
"LABEL_1980",
"LABEL_1981",
"LABEL_1982",
"LABEL_1983",
"LABEL_1984",
"LABEL_1985",
"LABEL_1986",
"LABEL_1987",
"LABEL_1988",
"LABEL_1989",
"LABEL_199",
"LABEL_1990",
"LABEL_1991",
"LABEL_1992",
"LABEL_1993",
"LABEL_1994",
"LABEL_1995",
"LABEL_1996",
"LABEL_1997",
"LABEL_1998",
"LABEL_1999",
"LABEL_2",
"LABEL_20",
"LABEL_200",
"LABEL_2000",
"LABEL_2001",
"LABEL_2002",
"LABEL_2003",
"LABEL_2004",
"LABEL_2005",
"LABEL_2006",
"LABEL_2007",
"LABEL_2008",
"LABEL_2009",
"LABEL_201",
"LABEL_2010",
"LABEL_2011",
"LABEL_2012",
"LABEL_2013",
"LABEL_2014",
"LABEL_2015",
"LABEL_2016",
"LABEL_2017",
"LABEL_2018",
"LABEL_2019",
"LABEL_202",
"LABEL_2020",
"LABEL_2021",
"LABEL_2022",
"LABEL_2023",
"LABEL_2024",
"LABEL_2025",
"LABEL_2026",
"LABEL_2027",
"LABEL_2028",
"LABEL_2029",
"LABEL_203",
"LABEL_2030",
"LABEL_2031",
"LABEL_2032",
"LABEL_2033",
"LABEL_2034",
"LABEL_2035",
"LABEL_2036",
"LABEL_2037",
"LABEL_2038",
"LABEL_2039",
"LABEL_204",
"LABEL_2040",
"LABEL_2041",
"LABEL_2042",
"LABEL_2043",
"LABEL_2044",
"LABEL_2045",
"LABEL_2046",
"LABEL_2047",
"LABEL_2048",
"LABEL_2049",
"LABEL_205",
"LABEL_2050",
"LABEL_2051",
"LABEL_2052",
"LABEL_2053",
"LABEL_2054",
"LABEL_2055",
"LABEL_2056",
"LABEL_2057",
"LABEL_2058",
"LABEL_2059",
"LABEL_206",
"LABEL_2060",
"LABEL_2061",
"LABEL_2062",
"LABEL_2063",
"LABEL_2064",
"LABEL_2065",
"LABEL_2066",
"LABEL_2067",
"LABEL_2068",
"LABEL_2069",
"LABEL_207",
"LABEL_2070",
"LABEL_2071",
"LABEL_2072",
"LABEL_2073",
"LABEL_2074",
"LABEL_2075",
"LABEL_2076",
"LABEL_2077",
"LABEL_2078",
"LABEL_2079",
"LABEL_208",
"LABEL_2080",
"LABEL_2081",
"LABEL_2082",
"LABEL_2083",
"LABEL_2084",
"LABEL_2085",
"LABEL_2086",
"LABEL_2087",
"LABEL_2088",
"LABEL_2089",
"LABEL_209",
"LABEL_2090",
"LABEL_2091",
"LABEL_2092",
"LABEL_2093",
"LABEL_2094",
"LABEL_2095",
"LABEL_2096",
"LABEL_2097",
"LABEL_2098",
"LABEL_2099",
"LABEL_21",
"LABEL_210",
"LABEL_2100",
"LABEL_2101",
"LABEL_2102",
"LABEL_2103",
"LABEL_2104",
"LABEL_2105",
"LABEL_2106",
"LABEL_2107",
"LABEL_2108",
"LABEL_2109",
"LABEL_211",
"LABEL_2110",
"LABEL_2111",
"LABEL_2112",
"LABEL_2113",
"LABEL_2114",
"LABEL_2115",
"LABEL_2116",
"LABEL_2117",
"LABEL_2118",
"LABEL_2119",
"LABEL_212",
"LABEL_2120",
"LABEL_2121",
"LABEL_2122",
"LABEL_2123",
"LABEL_2124",
"LABEL_2125",
"LABEL_2126",
"LABEL_2127",
"LABEL_2128",
"LABEL_2129",
"LABEL_213",
"LABEL_2130",
"LABEL_2131",
"LABEL_2132",
"LABEL_2133",
"LABEL_2134",
"LABEL_2135",
"LABEL_2136",
"LABEL_2137",
"LABEL_2138",
"LABEL_2139",
"LABEL_214",
"LABEL_2140",
"LABEL_2141",
"LABEL_2142",
"LABEL_2143",
"LABEL_2144",
"LABEL_2145",
"LABEL_2146",
"LABEL_2147",
"LABEL_2148",
"LABEL_2149",
"LABEL_215",
"LABEL_2150",
"LABEL_2151",
"LABEL_2152",
"LABEL_2153",
"LABEL_2154",
"LABEL_2155",
"LABEL_2156",
"LABEL_2157",
"LABEL_2158",
"LABEL_2159",
"LABEL_216",
"LABEL_2160",
"LABEL_2161",
"LABEL_2162",
"LABEL_2163",
"LABEL_2164",
"LABEL_2165",
"LABEL_2166",
"LABEL_2167",
"LABEL_2168",
"LABEL_2169",
"LABEL_217",
"LABEL_2170",
"LABEL_2171",
"LABEL_2172",
"LABEL_2173",
"LABEL_2174",
"LABEL_2175",
"LABEL_2176",
"LABEL_2177",
"LABEL_2178",
"LABEL_2179",
"LABEL_218",
"LABEL_2180",
"LABEL_2181",
"LABEL_2182",
"LABEL_2183",
"LABEL_2184",
"LABEL_2185",
"LABEL_2186",
"LABEL_2187",
"LABEL_2188",
"LABEL_2189",
"LABEL_219",
"LABEL_2190",
"LABEL_2191",
"LABEL_2192",
"LABEL_2193",
"LABEL_2194",
"LABEL_2195",
"LABEL_2196",
"LABEL_2197",
"LABEL_2198",
"LABEL_2199",
"LABEL_22",
"LABEL_220",
"LABEL_2200",
"LABEL_2201",
"LABEL_2202",
"LABEL_2203",
"LABEL_2204",
"LABEL_2205",
"LABEL_2206",
"LABEL_2207",
"LABEL_2208",
"LABEL_2209",
"LABEL_221",
"LABEL_2210",
"LABEL_2211",
"LABEL_2212",
"LABEL_2213",
"LABEL_2214",
"LABEL_2215",
"LABEL_2216",
"LABEL_2217",
"LABEL_2218",
"LABEL_2219",
"LABEL_222",
"LABEL_2220",
"LABEL_2221",
"LABEL_2222",
"LABEL_2223",
"LABEL_2224",
"LABEL_2225",
"LABEL_2226",
"LABEL_2227",
"LABEL_2228",
"LABEL_2229",
"LABEL_223",
"LABEL_2230",
"LABEL_2231",
"LABEL_2232",
"LABEL_2233",
"LABEL_2234",
"LABEL_2235",
"LABEL_2236",
"LABEL_2237",
"LABEL_2238",
"LABEL_2239",
"LABEL_224",
"LABEL_2240",
"LABEL_2241",
"LABEL_2242",
"LABEL_2243",
"LABEL_2244",
"LABEL_2245",
"LABEL_2246",
"LABEL_2247",
"LABEL_2248",
"LABEL_2249",
"LABEL_225",
"LABEL_2250",
"LABEL_2251",
"LABEL_2252",
"LABEL_2253",
"LABEL_2254",
"LABEL_2255",
"LABEL_2256",
"LABEL_2257",
"LABEL_2258",
"LABEL_2259",
"LABEL_226",
"LABEL_2260",
"LABEL_2261",
"LABEL_2262",
"LABEL_2263",
"LABEL_2264",
"LABEL_2265",
"LABEL_2266",
"LABEL_2267",
"LABEL_2268",
"LABEL_2269",
"LABEL_227",
"LABEL_2270",
"LABEL_2271",
"LABEL_2272",
"LABEL_2273",
"LABEL_2274",
"LABEL_2275",
"LABEL_2276",
"LABEL_2277",
"LABEL_2278",
"LABEL_2279",
"LABEL_228",
"LABEL_2280",
"LABEL_2281",
"LABEL_2282",
"LABEL_2283",
"LABEL_2284",
"LABEL_2285",
"LABEL_2286",
"LABEL_2287",
"LABEL_2288",
"LABEL_2289",
"LABEL_229",
"LABEL_2290",
"LABEL_2291",
"LABEL_2292",
"LABEL_2293",
"LABEL_2294",
"LABEL_2295",
"LABEL_2296",
"LABEL_2297",
"LABEL_2298",
"LABEL_2299",
"LABEL_23",
"LABEL_230",
"LABEL_2300",
"LABEL_2301",
"LABEL_2302",
"LABEL_2303",
"LABEL_2304",
"LABEL_2305",
"LABEL_2306",
"LABEL_2307",
"LABEL_2308",
"LABEL_2309",
"LABEL_231",
"LABEL_2310",
"LABEL_2311",
"LABEL_2312",
"LABEL_2313",
"LABEL_2314",
"LABEL_2315",
"LABEL_2316",
"LABEL_2317",
"LABEL_2318",
"LABEL_2319",
"LABEL_232",
"LABEL_2320",
"LABEL_2321",
"LABEL_2322",
"LABEL_2323",
"LABEL_2324",
"LABEL_2325",
"LABEL_2326",
"LABEL_2327",
"LABEL_2328",
"LABEL_2329",
"LABEL_233",
"LABEL_2330",
"LABEL_2331",
"LABEL_2332",
"LABEL_2333",
"LABEL_2334",
"LABEL_2335",
"LABEL_2336",
"LABEL_2337",
"LABEL_2338",
"LABEL_2339",
"LABEL_234",
"LABEL_2340",
"LABEL_2341",
"LABEL_2342",
"LABEL_2343",
"LABEL_2344",
"LABEL_2345",
"LABEL_2346",
"LABEL_2347",
"LABEL_2348",
"LABEL_2349",
"LABEL_235",
"LABEL_2350",
"LABEL_2351",
"LABEL_2352",
"LABEL_2353",
"LABEL_2354",
"LABEL_2355",
"LABEL_2356",
"LABEL_2357",
"LABEL_2358",
"LABEL_2359",
"LABEL_236",
"LABEL_2360",
"LABEL_2361",
"LABEL_2362",
"LABEL_2363",
"LABEL_2364",
"LABEL_2365",
"LABEL_2366",
"LABEL_2367",
"LABEL_2368",
"LABEL_2369",
"LABEL_237",
"LABEL_2370",
"LABEL_2371",
"LABEL_2372",
"LABEL_2373",
"LABEL_2374",
"LABEL_2375",
"LABEL_2376",
"LABEL_2377",
"LABEL_2378",
"LABEL_2379",
"LABEL_238",
"LABEL_2380",
"LABEL_2381",
"LABEL_2382",
"LABEL_2383",
"LABEL_2384",
"LABEL_2385",
"LABEL_2386",
"LABEL_2387",
"LABEL_2388",
"LABEL_2389",
"LABEL_239",
"LABEL_2390",
"LABEL_2391",
"LABEL_2392",
"LABEL_2393",
"LABEL_2394",
"LABEL_2395",
"LABEL_2396",
"LABEL_2397",
"LABEL_2398",
"LABEL_2399",
"LABEL_24",
"LABEL_240",
"LABEL_2400",
"LABEL_2401",
"LABEL_2402",
"LABEL_2403",
"LABEL_2404",
"LABEL_2405",
"LABEL_2406",
"LABEL_2407",
"LABEL_2408",
"LABEL_2409",
"LABEL_241",
"LABEL_2410",
"LABEL_2411",
"LABEL_2412",
"LABEL_2413",
"LABEL_2414",
"LABEL_2415",
"LABEL_2416",
"LABEL_2417",
"LABEL_2418",
"LABEL_2419",
"LABEL_242",
"LABEL_2420",
"LABEL_2421",
"LABEL_2422",
"LABEL_2423",
"LABEL_2424",
"LABEL_2425",
"LABEL_2426",
"LABEL_2427",
"LABEL_2428",
"LABEL_2429",
"LABEL_243",
"LABEL_2430",
"LABEL_2431",
"LABEL_2432",
"LABEL_2433",
"LABEL_2434",
"LABEL_2435",
"LABEL_2436",
"LABEL_2437",
"LABEL_2438",
"LABEL_2439",
"LABEL_244",
"LABEL_2440",
"LABEL_2441",
"LABEL_2442",
"LABEL_2443",
"LABEL_2444",
"LABEL_2445",
"LABEL_2446",
"LABEL_2447",
"LABEL_2448",
"LABEL_2449",
"LABEL_245",
"LABEL_2450",
"LABEL_2451",
"LABEL_2452",
"LABEL_2453",
"LABEL_2454",
"LABEL_2455",
"LABEL_2456",
"LABEL_2457",
"LABEL_2458",
"LABEL_2459",
"LABEL_246",
"LABEL_2460",
"LABEL_2461",
"LABEL_2462",
"LABEL_2463",
"LABEL_2464",
"LABEL_2465",
"LABEL_2466",
"LABEL_2467",
"LABEL_2468",
"LABEL_2469",
"LABEL_247",
"LABEL_2470",
"LABEL_2471",
"LABEL_2472",
"LABEL_2473",
"LABEL_2474",
"LABEL_2475",
"LABEL_2476",
"LABEL_2477",
"LABEL_2478",
"LABEL_2479",
"LABEL_248",
"LABEL_2480",
"LABEL_2481",
"LABEL_2482",
"LABEL_2483",
"LABEL_2484",
"LABEL_2485",
"LABEL_2486",
"LABEL_2487",
"LABEL_2488",
"LABEL_2489",
"LABEL_249",
"LABEL_2490",
"LABEL_2491",
"LABEL_2492",
"LABEL_2493",
"LABEL_2494",
"LABEL_2495",
"LABEL_2496",
"LABEL_2497",
"LABEL_2498",
"LABEL_2499",
"LABEL_25",
"LABEL_250",
"LABEL_2500",
"LABEL_2501",
"LABEL_2502",
"LABEL_2503",
"LABEL_2504",
"LABEL_2505",
"LABEL_2506",
"LABEL_2507",
"LABEL_2508",
"LABEL_2509",
"LABEL_251",
"LABEL_2510",
"LABEL_2511",
"LABEL_2512",
"LABEL_2513",
"LABEL_2514",
"LABEL_2515",
"LABEL_2516",
"LABEL_2517",
"LABEL_2518",
"LABEL_2519",
"LABEL_252",
"LABEL_2520",
"LABEL_2521",
"LABEL_2522",
"LABEL_2523",
"LABEL_2524",
"LABEL_2525",
"LABEL_2526",
"LABEL_2527",
"LABEL_2528",
"LABEL_2529",
"LABEL_253",
"LABEL_2530",
"LABEL_2531",
"LABEL_2532",
"LABEL_2533",
"LABEL_2534",
"LABEL_2535",
"LABEL_2536",
"LABEL_2537",
"LABEL_2538",
"LABEL_2539",
"LABEL_254",
"LABEL_2540",
"LABEL_2541",
"LABEL_2542",
"LABEL_2543",
"LABEL_2544",
"LABEL_2545",
"LABEL_2546",
"LABEL_2547",
"LABEL_2548",
"LABEL_2549",
"LABEL_255",
"LABEL_2550",
"LABEL_2551",
"LABEL_2552",
"LABEL_2553",
"LABEL_2554",
"LABEL_2555",
"LABEL_2556",
"LABEL_2557",
"LABEL_2558",
"LABEL_2559",
"LABEL_256",
"LABEL_2560",
"LABEL_2561",
"LABEL_2562",
"LABEL_2563",
"LABEL_2564",
"LABEL_2565",
"LABEL_2566",
"LABEL_2567",
"LABEL_2568",
"LABEL_2569",
"LABEL_257",
"LABEL_2570",
"LABEL_2571",
"LABEL_2572",
"LABEL_2573",
"LABEL_2574",
"LABEL_2575",
"LABEL_2576",
"LABEL_2577",
"LABEL_2578",
"LABEL_2579",
"LABEL_258",
"LABEL_2580",
"LABEL_2581",
"LABEL_2582",
"LABEL_2583",
"LABEL_2584",
"LABEL_2585",
"LABEL_2586",
"LABEL_2587",
"LABEL_2588",
"LABEL_2589",
"LABEL_259",
"LABEL_2590",
"LABEL_2591",
"LABEL_2592",
"LABEL_2593",
"LABEL_2594",
"LABEL_2595",
"LABEL_2596",
"LABEL_2597",
"LABEL_2598",
"LABEL_2599",
"LABEL_26",
"LABEL_260",
"LABEL_2600",
"LABEL_2601",
"LABEL_2602",
"LABEL_2603",
"LABEL_2604",
"LABEL_2605",
"LABEL_2606",
"LABEL_2607",
"LABEL_2608",
"LABEL_2609",
"LABEL_261",
"LABEL_2610",
"LABEL_2611",
"LABEL_2612",
"LABEL_2613",
"LABEL_2614",
"LABEL_2615",
"LABEL_2616",
"LABEL_2617",
"LABEL_2618",
"LABEL_2619",
"LABEL_262",
"LABEL_2620",
"LABEL_2621",
"LABEL_2622",
"LABEL_2623",
"LABEL_2624",
"LABEL_2625",
"LABEL_2626",
"LABEL_2627",
"LABEL_2628",
"LABEL_2629",
"LABEL_263",
"LABEL_2630",
"LABEL_2631",
"LABEL_2632",
"LABEL_2633",
"LABEL_2634",
"LABEL_2635",
"LABEL_2636",
"LABEL_2637",
"LABEL_2638",
"LABEL_2639",
"LABEL_264",
"LABEL_2640",
"LABEL_2641",
"LABEL_2642",
"LABEL_2643",
"LABEL_2644",
"LABEL_2645",
"LABEL_2646",
"LABEL_2647",
"LABEL_2648",
"LABEL_2649",
"LABEL_265",
"LABEL_2650",
"LABEL_2651",
"LABEL_2652",
"LABEL_2653",
"LABEL_2654",
"LABEL_2655",
"LABEL_2656",
"LABEL_2657",
"LABEL_2658",
"LABEL_2659",
"LABEL_266",
"LABEL_2660",
"LABEL_2661",
"LABEL_2662",
"LABEL_2663",
"LABEL_2664",
"LABEL_2665",
"LABEL_2666",
"LABEL_2667",
"LABEL_2668",
"LABEL_2669",
"LABEL_267",
"LABEL_2670",
"LABEL_2671",
"LABEL_2672",
"LABEL_2673",
"LABEL_2674",
"LABEL_2675",
"LABEL_2676",
"LABEL_2677",
"LABEL_2678",
"LABEL_2679",
"LABEL_268",
"LABEL_2680",
"LABEL_2681",
"LABEL_2682",
"LABEL_2683",
"LABEL_2684",
"LABEL_2685",
"LABEL_2686",
"LABEL_2687",
"LABEL_2688",
"LABEL_2689",
"LABEL_269",
"LABEL_2690",
"LABEL_2691",
"LABEL_2692",
"LABEL_2693",
"LABEL_2694",
"LABEL_2695",
"LABEL_2696",
"LABEL_2697",
"LABEL_2698",
"LABEL_2699",
"LABEL_27",
"LABEL_270",
"LABEL_2700",
"LABEL_2701",
"LABEL_2702",
"LABEL_2703",
"LABEL_2704",
"LABEL_2705",
"LABEL_2706",
"LABEL_2707",
"LABEL_2708",
"LABEL_2709",
"LABEL_271",
"LABEL_2710",
"LABEL_2711",
"LABEL_2712",
"LABEL_2713",
"LABEL_2714",
"LABEL_2715",
"LABEL_2716",
"LABEL_2717",
"LABEL_2718",
"LABEL_2719",
"LABEL_272",
"LABEL_2720",
"LABEL_2721",
"LABEL_2722",
"LABEL_2723",
"LABEL_2724",
"LABEL_2725",
"LABEL_2726",
"LABEL_2727",
"LABEL_2728",
"LABEL_2729",
"LABEL_273",
"LABEL_2730",
"LABEL_2731",
"LABEL_2732",
"LABEL_2733",
"LABEL_2734",
"LABEL_2735",
"LABEL_2736",
"LABEL_2737",
"LABEL_2738",
"LABEL_2739",
"LABEL_274",
"LABEL_2740",
"LABEL_2741",
"LABEL_2742",
"LABEL_2743",
"LABEL_2744",
"LABEL_2745",
"LABEL_2746",
"LABEL_2747",
"LABEL_2748",
"LABEL_2749",
"LABEL_275",
"LABEL_2750",
"LABEL_2751",
"LABEL_2752",
"LABEL_2753",
"LABEL_2754",
"LABEL_2755",
"LABEL_2756",
"LABEL_2757",
"LABEL_2758",
"LABEL_2759",
"LABEL_276",
"LABEL_2760",
"LABEL_2761",
"LABEL_2762",
"LABEL_2763",
"LABEL_2764",
"LABEL_2765",
"LABEL_2766",
"LABEL_2767",
"LABEL_2768",
"LABEL_2769",
"LABEL_277",
"LABEL_2770",
"LABEL_2771",
"LABEL_2772",
"LABEL_2773",
"LABEL_2774",
"LABEL_2775",
"LABEL_2776",
"LABEL_2777",
"LABEL_2778",
"LABEL_2779",
"LABEL_278",
"LABEL_2780",
"LABEL_2781",
"LABEL_2782",
"LABEL_2783",
"LABEL_2784",
"LABEL_2785",
"LABEL_2786",
"LABEL_2787",
"LABEL_2788",
"LABEL_2789",
"LABEL_279",
"LABEL_2790",
"LABEL_2791",
"LABEL_2792",
"LABEL_2793",
"LABEL_2794",
"LABEL_2795",
"LABEL_2796",
"LABEL_2797",
"LABEL_2798",
"LABEL_2799",
"LABEL_28",
"LABEL_280",
"LABEL_2800",
"LABEL_2801",
"LABEL_2802",
"LABEL_2803",
"LABEL_2804",
"LABEL_2805",
"LABEL_2806",
"LABEL_2807",
"LABEL_2808",
"LABEL_2809",
"LABEL_281",
"LABEL_2810",
"LABEL_2811",
"LABEL_2812",
"LABEL_2813",
"LABEL_2814",
"LABEL_2815",
"LABEL_2816",
"LABEL_2817",
"LABEL_2818",
"LABEL_2819",
"LABEL_282",
"LABEL_2820",
"LABEL_2821",
"LABEL_2822",
"LABEL_2823",
"LABEL_2824",
"LABEL_2825",
"LABEL_2826",
"LABEL_2827",
"LABEL_2828",
"LABEL_2829",
"LABEL_283",
"LABEL_2830",
"LABEL_2831",
"LABEL_2832",
"LABEL_2833",
"LABEL_2834",
"LABEL_2835",
"LABEL_2836",
"LABEL_2837",
"LABEL_2838",
"LABEL_2839",
"LABEL_284",
"LABEL_2840",
"LABEL_2841",
"LABEL_2842",
"LABEL_2843",
"LABEL_2844",
"LABEL_2845",
"LABEL_2846",
"LABEL_2847",
"LABEL_2848",
"LABEL_2849",
"LABEL_285",
"LABEL_2850",
"LABEL_2851",
"LABEL_2852",
"LABEL_2853",
"LABEL_2854",
"LABEL_2855",
"LABEL_2856",
"LABEL_2857",
"LABEL_2858",
"LABEL_2859",
"LABEL_286",
"LABEL_2860",
"LABEL_2861",
"LABEL_2862",
"LABEL_2863",
"LABEL_2864",
"LABEL_2865",
"LABEL_2866",
"LABEL_2867",
"LABEL_2868",
"LABEL_2869",
"LABEL_287",
"LABEL_2870",
"LABEL_2871",
"LABEL_2872",
"LABEL_2873",
"LABEL_2874",
"LABEL_2875",
"LABEL_2876",
"LABEL_2877",
"LABEL_2878",
"LABEL_2879",
"LABEL_288",
"LABEL_2880",
"LABEL_2881",
"LABEL_2882",
"LABEL_2883",
"LABEL_2884",
"LABEL_2885",
"LABEL_2886",
"LABEL_2887",
"LABEL_2888",
"LABEL_2889",
"LABEL_289",
"LABEL_2890",
"LABEL_2891",
"LABEL_2892",
"LABEL_2893",
"LABEL_2894",
"LABEL_2895",
"LABEL_2896",
"LABEL_2897",
"LABEL_2898",
"LABEL_2899",
"LABEL_29",
"LABEL_290",
"LABEL_2900",
"LABEL_2901",
"LABEL_2902",
"LABEL_2903",
"LABEL_2904",
"LABEL_2905",
"LABEL_2906",
"LABEL_2907",
"LABEL_2908",
"LABEL_2909",
"LABEL_291",
"LABEL_2910",
"LABEL_2911",
"LABEL_2912",
"LABEL_2913",
"LABEL_2914",
"LABEL_2915",
"LABEL_2916",
"LABEL_2917",
"LABEL_2918",
"LABEL_2919",
"LABEL_292",
"LABEL_2920",
"LABEL_2921",
"LABEL_2922",
"LABEL_2923",
"LABEL_2924",
"LABEL_2925",
"LABEL_2926",
"LABEL_2927",
"LABEL_2928",
"LABEL_2929",
"LABEL_293",
"LABEL_2930",
"LABEL_2931",
"LABEL_2932",
"LABEL_2933",
"LABEL_2934",
"LABEL_2935",
"LABEL_2936",
"LABEL_2937",
"LABEL_2938",
"LABEL_2939",
"LABEL_294",
"LABEL_2940",
"LABEL_2941",
"LABEL_2942",
"LABEL_2943",
"LABEL_2944",
"LABEL_2945",
"LABEL_2946",
"LABEL_2947",
"LABEL_2948",
"LABEL_2949",
"LABEL_295",
"LABEL_2950",
"LABEL_2951",
"LABEL_2952",
"LABEL_2953",
"LABEL_2954",
"LABEL_2955",
"LABEL_2956",
"LABEL_2957",
"LABEL_2958",
"LABEL_2959",
"LABEL_296",
"LABEL_2960",
"LABEL_2961",
"LABEL_2962",
"LABEL_2963",
"LABEL_2964",
"LABEL_2965",
"LABEL_2966",
"LABEL_2967",
"LABEL_2968",
"LABEL_2969",
"LABEL_297",
"LABEL_2970",
"LABEL_2971",
"LABEL_2972",
"LABEL_2973",
"LABEL_2974",
"LABEL_2975",
"LABEL_2976",
"LABEL_2977",
"LABEL_2978",
"LABEL_2979",
"LABEL_298",
"LABEL_2980",
"LABEL_2981",
"LABEL_2982",
"LABEL_2983",
"LABEL_2984",
"LABEL_2985",
"LABEL_2986",
"LABEL_2987",
"LABEL_2988",
"LABEL_2989",
"LABEL_299",
"LABEL_2990",
"LABEL_2991",
"LABEL_2992",
"LABEL_2993",
"LABEL_2994",
"LABEL_2995",
"LABEL_2996",
"LABEL_2997",
"LABEL_2998",
"LABEL_2999",
"LABEL_3",
"LABEL_30",
"LABEL_300",
"LABEL_3000",
"LABEL_3001",
"LABEL_3002",
"LABEL_3003",
"LABEL_3004",
"LABEL_3005",
"LABEL_3006",
"LABEL_3007",
"LABEL_3008",
"LABEL_3009",
"LABEL_301",
"LABEL_3010",
"LABEL_3011",
"LABEL_3012",
"LABEL_3013",
"LABEL_3014",
"LABEL_3015",
"LABEL_3016",
"LABEL_3017",
"LABEL_3018",
"LABEL_3019",
"LABEL_302",
"LABEL_3020",
"LABEL_3021",
"LABEL_3022",
"LABEL_3023",
"LABEL_3024",
"LABEL_3025",
"LABEL_3026",
"LABEL_3027",
"LABEL_3028",
"LABEL_3029",
"LABEL_303",
"LABEL_3030",
"LABEL_3031",
"LABEL_3032",
"LABEL_3033",
"LABEL_3034",
"LABEL_3035",
"LABEL_3036",
"LABEL_3037",
"LABEL_3038",
"LABEL_3039",
"LABEL_304",
"LABEL_3040",
"LABEL_3041",
"LABEL_3042",
"LABEL_3043",
"LABEL_3044",
"LABEL_3045",
"LABEL_3046",
"LABEL_3047",
"LABEL_3048",
"LABEL_3049",
"LABEL_305",
"LABEL_3050",
"LABEL_3051",
"LABEL_3052",
"LABEL_3053",
"LABEL_3054",
"LABEL_3055",
"LABEL_3056",
"LABEL_3057",
"LABEL_3058",
"LABEL_3059",
"LABEL_306",
"LABEL_3060",
"LABEL_3061",
"LABEL_3062",
"LABEL_3063",
"LABEL_3064",
"LABEL_3065",
"LABEL_3066",
"LABEL_3067",
"LABEL_3068",
"LABEL_3069",
"LABEL_307",
"LABEL_3070",
"LABEL_3071",
"LABEL_3072",
"LABEL_3073",
"LABEL_3074",
"LABEL_3075",
"LABEL_3076",
"LABEL_3077",
"LABEL_3078",
"LABEL_3079",
"LABEL_308",
"LABEL_3080",
"LABEL_3081",
"LABEL_3082",
"LABEL_3083",
"LABEL_3084",
"LABEL_3085",
"LABEL_3086",
"LABEL_3087",
"LABEL_3088",
"LABEL_3089",
"LABEL_309",
"LABEL_3090",
"LABEL_3091",
"LABEL_3092",
"LABEL_3093",
"LABEL_3094",
"LABEL_3095",
"LABEL_3096",
"LABEL_3097",
"LABEL_3098",
"LABEL_3099",
"LABEL_31",
"LABEL_310",
"LABEL_3100",
"LABEL_3101",
"LABEL_3102",
"LABEL_3103",
"LABEL_3104",
"LABEL_3105",
"LABEL_3106",
"LABEL_3107",
"LABEL_3108",
"LABEL_3109",
"LABEL_311",
"LABEL_3110",
"LABEL_3111",
"LABEL_3112",
"LABEL_3113",
"LABEL_3114",
"LABEL_3115",
"LABEL_3116",
"LABEL_3117",
"LABEL_3118",
"LABEL_3119",
"LABEL_312",
"LABEL_3120",
"LABEL_3121",
"LABEL_3122",
"LABEL_3123",
"LABEL_3124",
"LABEL_3125",
"LABEL_3126",
"LABEL_3127",
"LABEL_3128",
"LABEL_3129",
"LABEL_313",
"LABEL_3130",
"LABEL_3131",
"LABEL_3132",
"LABEL_3133",
"LABEL_3134",
"LABEL_3135",
"LABEL_3136",
"LABEL_3137",
"LABEL_3138",
"LABEL_3139",
"LABEL_314",
"LABEL_3140",
"LABEL_3141",
"LABEL_3142",
"LABEL_3143",
"LABEL_3144",
"LABEL_3145",
"LABEL_3146",
"LABEL_3147",
"LABEL_3148",
"LABEL_3149",
"LABEL_315",
"LABEL_3150",
"LABEL_3151",
"LABEL_3152",
"LABEL_3153",
"LABEL_3154",
"LABEL_3155",
"LABEL_3156",
"LABEL_3157",
"LABEL_3158",
"LABEL_3159",
"LABEL_316",
"LABEL_3160",
"LABEL_3161",
"LABEL_3162",
"LABEL_3163",
"LABEL_3164",
"LABEL_3165",
"LABEL_3166",
"LABEL_3167",
"LABEL_3168",
"LABEL_3169",
"LABEL_317",
"LABEL_3170",
"LABEL_3171",
"LABEL_3172",
"LABEL_3173",
"LABEL_3174",
"LABEL_3175",
"LABEL_3176",
"LABEL_3177",
"LABEL_3178",
"LABEL_3179",
"LABEL_318",
"LABEL_3180",
"LABEL_3181",
"LABEL_3182",
"LABEL_3183",
"LABEL_3184",
"LABEL_3185",
"LABEL_3186",
"LABEL_3187",
"LABEL_3188",
"LABEL_3189",
"LABEL_319",
"LABEL_3190",
"LABEL_3191",
"LABEL_3192",
"LABEL_3193",
"LABEL_3194",
"LABEL_3195",
"LABEL_3196",
"LABEL_3197",
"LABEL_3198",
"LABEL_3199",
"LABEL_32",
"LABEL_320",
"LABEL_3200",
"LABEL_3201",
"LABEL_3202",
"LABEL_3203",
"LABEL_3204",
"LABEL_3205",
"LABEL_3206",
"LABEL_3207",
"LABEL_3208",
"LABEL_3209",
"LABEL_321",
"LABEL_3210",
"LABEL_3211",
"LABEL_3212",
"LABEL_3213",
"LABEL_3214",
"LABEL_3215",
"LABEL_3216",
"LABEL_3217",
"LABEL_3218",
"LABEL_3219",
"LABEL_322",
"LABEL_3220",
"LABEL_3221",
"LABEL_3222",
"LABEL_3223",
"LABEL_3224",
"LABEL_3225",
"LABEL_3226",
"LABEL_3227",
"LABEL_3228",
"LABEL_3229",
"LABEL_323",
"LABEL_3230",
"LABEL_3231",
"LABEL_3232",
"LABEL_3233",
"LABEL_3234",
"LABEL_3235",
"LABEL_3236",
"LABEL_3237",
"LABEL_3238",
"LABEL_3239",
"LABEL_324",
"LABEL_3240",
"LABEL_3241",
"LABEL_3242",
"LABEL_3243",
"LABEL_3244",
"LABEL_3245",
"LABEL_3246",
"LABEL_3247",
"LABEL_3248",
"LABEL_3249",
"LABEL_325",
"LABEL_3250",
"LABEL_3251",
"LABEL_3252",
"LABEL_3253",
"LABEL_3254",
"LABEL_3255",
"LABEL_3256",
"LABEL_3257",
"LABEL_3258",
"LABEL_3259",
"LABEL_326",
"LABEL_3260",
"LABEL_3261",
"LABEL_3262",
"LABEL_3263",
"LABEL_3264",
"LABEL_3265",
"LABEL_3266",
"LABEL_3267",
"LABEL_3268",
"LABEL_3269",
"LABEL_327",
"LABEL_3270",
"LABEL_3271",
"LABEL_3272",
"LABEL_3273",
"LABEL_3274",
"LABEL_3275",
"LABEL_3276",
"LABEL_3277",
"LABEL_3278",
"LABEL_3279",
"LABEL_328",
"LABEL_3280",
"LABEL_3281",
"LABEL_3282",
"LABEL_3283",
"LABEL_3284",
"LABEL_3285",
"LABEL_3286",
"LABEL_3287",
"LABEL_3288",
"LABEL_3289",
"LABEL_329",
"LABEL_3290",
"LABEL_3291",
"LABEL_3292",
"LABEL_3293",
"LABEL_3294",
"LABEL_3295",
"LABEL_3296",
"LABEL_3297",
"LABEL_3298",
"LABEL_3299",
"LABEL_33",
"LABEL_330",
"LABEL_3300",
"LABEL_3301",
"LABEL_3302",
"LABEL_3303",
"LABEL_3304",
"LABEL_3305",
"LABEL_3306",
"LABEL_3307",
"LABEL_3308",
"LABEL_3309",
"LABEL_331",
"LABEL_3310",
"LABEL_3311",
"LABEL_3312",
"LABEL_3313",
"LABEL_3314",
"LABEL_3315",
"LABEL_3316",
"LABEL_3317",
"LABEL_3318",
"LABEL_3319",
"LABEL_332",
"LABEL_3320",
"LABEL_3321",
"LABEL_3322",
"LABEL_3323",
"LABEL_3324",
"LABEL_3325",
"LABEL_3326",
"LABEL_3327",
"LABEL_3328",
"LABEL_3329",
"LABEL_333",
"LABEL_3330",
"LABEL_3331",
"LABEL_3332",
"LABEL_3333",
"LABEL_3334",
"LABEL_3335",
"LABEL_3336",
"LABEL_3337",
"LABEL_3338",
"LABEL_3339",
"LABEL_334",
"LABEL_3340",
"LABEL_3341",
"LABEL_3342",
"LABEL_3343",
"LABEL_3344",
"LABEL_3345",
"LABEL_3346",
"LABEL_3347",
"LABEL_3348",
"LABEL_3349",
"LABEL_335",
"LABEL_3350",
"LABEL_3351",
"LABEL_3352",
"LABEL_3353",
"LABEL_3354",
"LABEL_3355",
"LABEL_3356",
"LABEL_3357",
"LABEL_3358",
"LABEL_3359",
"LABEL_336",
"LABEL_3360",
"LABEL_3361",
"LABEL_3362",
"LABEL_3363",
"LABEL_3364",
"LABEL_3365",
"LABEL_3366",
"LABEL_3367",
"LABEL_3368",
"LABEL_3369",
"LABEL_337",
"LABEL_3370",
"LABEL_3371",
"LABEL_3372",
"LABEL_3373",
"LABEL_3374",
"LABEL_3375",
"LABEL_3376",
"LABEL_3377",
"LABEL_3378",
"LABEL_3379",
"LABEL_338",
"LABEL_3380",
"LABEL_3381",
"LABEL_3382",
"LABEL_3383",
"LABEL_3384",
"LABEL_3385",
"LABEL_3386",
"LABEL_3387",
"LABEL_3388",
"LABEL_3389",
"LABEL_339",
"LABEL_3390",
"LABEL_3391",
"LABEL_3392",
"LABEL_3393",
"LABEL_3394",
"LABEL_3395",
"LABEL_3396",
"LABEL_3397",
"LABEL_3398",
"LABEL_3399",
"LABEL_34",
"LABEL_340",
"LABEL_3400",
"LABEL_3401",
"LABEL_3402",
"LABEL_3403",
"LABEL_3404",
"LABEL_3405",
"LABEL_3406",
"LABEL_3407",
"LABEL_3408",
"LABEL_3409",
"LABEL_341",
"LABEL_3410",
"LABEL_3411",
"LABEL_3412",
"LABEL_3413",
"LABEL_3414",
"LABEL_3415",
"LABEL_3416",
"LABEL_3417",
"LABEL_3418",
"LABEL_3419",
"LABEL_342",
"LABEL_3420",
"LABEL_3421",
"LABEL_3422",
"LABEL_3423",
"LABEL_3424",
"LABEL_3425",
"LABEL_3426",
"LABEL_3427",
"LABEL_3428",
"LABEL_3429",
"LABEL_343",
"LABEL_3430",
"LABEL_3431",
"LABEL_3432",
"LABEL_3433",
"LABEL_3434",
"LABEL_3435",
"LABEL_3436",
"LABEL_3437",
"LABEL_3438",
"LABEL_3439",
"LABEL_344",
"LABEL_3440",
"LABEL_3441",
"LABEL_3442",
"LABEL_3443",
"LABEL_3444",
"LABEL_3445",
"LABEL_3446",
"LABEL_3447",
"LABEL_3448",
"LABEL_3449",
"LABEL_345",
"LABEL_3450",
"LABEL_3451",
"LABEL_3452",
"LABEL_3453",
"LABEL_3454",
"LABEL_3455",
"LABEL_3456",
"LABEL_3457",
"LABEL_3458",
"LABEL_3459",
"LABEL_346",
"LABEL_3460",
"LABEL_3461",
"LABEL_3462",
"LABEL_3463",
"LABEL_3464",
"LABEL_3465",
"LABEL_3466",
"LABEL_3467",
"LABEL_3468",
"LABEL_3469",
"LABEL_347",
"LABEL_3470",
"LABEL_3471",
"LABEL_3472",
"LABEL_3473",
"LABEL_3474",
"LABEL_3475",
"LABEL_3476",
"LABEL_3477",
"LABEL_3478",
"LABEL_3479",
"LABEL_348",
"LABEL_3480",
"LABEL_3481",
"LABEL_3482",
"LABEL_3483",
"LABEL_3484",
"LABEL_3485",
"LABEL_3486",
"LABEL_3487",
"LABEL_3488",
"LABEL_3489",
"LABEL_349",
"LABEL_3490",
"LABEL_3491",
"LABEL_3492",
"LABEL_3493",
"LABEL_3494",
"LABEL_3495",
"LABEL_3496",
"LABEL_3497",
"LABEL_3498",
"LABEL_3499",
"LABEL_35",
"LABEL_350",
"LABEL_3500",
"LABEL_3501",
"LABEL_3502",
"LABEL_3503",
"LABEL_3504",
"LABEL_3505",
"LABEL_3506",
"LABEL_3507",
"LABEL_3508",
"LABEL_3509",
"LABEL_351",
"LABEL_3510",
"LABEL_3511",
"LABEL_3512",
"LABEL_3513",
"LABEL_3514",
"LABEL_3515",
"LABEL_3516",
"LABEL_3517",
"LABEL_3518",
"LABEL_3519",
"LABEL_352",
"LABEL_3520",
"LABEL_3521",
"LABEL_3522",
"LABEL_3523",
"LABEL_3524",
"LABEL_3525",
"LABEL_3526",
"LABEL_3527",
"LABEL_3528",
"LABEL_3529",
"LABEL_353",
"LABEL_3530",
"LABEL_3531",
"LABEL_3532",
"LABEL_3533",
"LABEL_3534",
"LABEL_3535",
"LABEL_3536",
"LABEL_3537",
"LABEL_3538",
"LABEL_3539",
"LABEL_354",
"LABEL_3540",
"LABEL_3541",
"LABEL_3542",
"LABEL_3543",
"LABEL_3544",
"LABEL_3545",
"LABEL_3546",
"LABEL_3547",
"LABEL_3548",
"LABEL_3549",
"LABEL_355",
"LABEL_3550",
"LABEL_3551",
"LABEL_3552",
"LABEL_3553",
"LABEL_3554",
"LABEL_3555",
"LABEL_3556",
"LABEL_3557",
"LABEL_3558",
"LABEL_3559",
"LABEL_356",
"LABEL_3560",
"LABEL_3561",
"LABEL_3562",
"LABEL_3563",
"LABEL_3564",
"LABEL_3565",
"LABEL_3566",
"LABEL_3567",
"LABEL_3568",
"LABEL_3569",
"LABEL_357",
"LABEL_3570",
"LABEL_3571",
"LABEL_3572",
"LABEL_3573",
"LABEL_3574",
"LABEL_3575",
"LABEL_3576",
"LABEL_3577",
"LABEL_3578",
"LABEL_3579",
"LABEL_358",
"LABEL_3580",
"LABEL_3581",
"LABEL_3582",
"LABEL_3583",
"LABEL_3584",
"LABEL_3585",
"LABEL_3586",
"LABEL_3587",
"LABEL_3588",
"LABEL_3589",
"LABEL_359",
"LABEL_3590",
"LABEL_3591",
"LABEL_3592",
"LABEL_3593",
"LABEL_3594",
"LABEL_3595",
"LABEL_3596",
"LABEL_3597",
"LABEL_3598",
"LABEL_3599",
"LABEL_36",
"LABEL_360",
"LABEL_3600",
"LABEL_3601",
"LABEL_3602",
"LABEL_3603",
"LABEL_3604",
"LABEL_3605",
"LABEL_3606",
"LABEL_3607",
"LABEL_3608",
"LABEL_3609",
"LABEL_361",
"LABEL_3610",
"LABEL_3611",
"LABEL_3612",
"LABEL_3613",
"LABEL_3614",
"LABEL_3615",
"LABEL_3616",
"LABEL_3617",
"LABEL_3618",
"LABEL_3619",
"LABEL_362",
"LABEL_3620",
"LABEL_3621",
"LABEL_3622",
"LABEL_3623",
"LABEL_3624",
"LABEL_3625",
"LABEL_3626",
"LABEL_3627",
"LABEL_3628",
"LABEL_3629",
"LABEL_363",
"LABEL_3630",
"LABEL_3631",
"LABEL_3632",
"LABEL_3633",
"LABEL_3634",
"LABEL_3635",
"LABEL_3636",
"LABEL_3637",
"LABEL_3638",
"LABEL_3639",
"LABEL_364",
"LABEL_3640",
"LABEL_3641",
"LABEL_3642",
"LABEL_3643",
"LABEL_3644",
"LABEL_3645",
"LABEL_3646",
"LABEL_3647",
"LABEL_3648",
"LABEL_3649",
"LABEL_365",
"LABEL_3650",
"LABEL_3651",
"LABEL_3652",
"LABEL_3653",
"LABEL_3654",
"LABEL_3655",
"LABEL_3656",
"LABEL_3657",
"LABEL_3658",
"LABEL_3659",
"LABEL_366",
"LABEL_3660",
"LABEL_3661",
"LABEL_3662",
"LABEL_3663",
"LABEL_3664",
"LABEL_3665",
"LABEL_3666",
"LABEL_3667",
"LABEL_3668",
"LABEL_3669",
"LABEL_367",
"LABEL_3670",
"LABEL_3671",
"LABEL_3672",
"LABEL_3673",
"LABEL_3674",
"LABEL_3675",
"LABEL_3676",
"LABEL_3677",
"LABEL_3678",
"LABEL_3679",
"LABEL_368",
"LABEL_3680",
"LABEL_3681",
"LABEL_3682",
"LABEL_3683",
"LABEL_3684",
"LABEL_3685",
"LABEL_3686",
"LABEL_3687",
"LABEL_3688",
"LABEL_3689",
"LABEL_369",
"LABEL_3690",
"LABEL_3691",
"LABEL_3692",
"LABEL_3693",
"LABEL_3694",
"LABEL_3695",
"LABEL_3696",
"LABEL_3697",
"LABEL_3698",
"LABEL_3699",
"LABEL_37",
"LABEL_370",
"LABEL_3700",
"LABEL_3701",
"LABEL_3702",
"LABEL_3703",
"LABEL_3704",
"LABEL_3705",
"LABEL_3706",
"LABEL_3707",
"LABEL_3708",
"LABEL_3709",
"LABEL_371",
"LABEL_3710",
"LABEL_3711",
"LABEL_3712",
"LABEL_3713",
"LABEL_3714",
"LABEL_3715",
"LABEL_3716",
"LABEL_3717",
"LABEL_3718",
"LABEL_3719",
"LABEL_372",
"LABEL_3720",
"LABEL_3721",
"LABEL_3722",
"LABEL_3723",
"LABEL_3724",
"LABEL_3725",
"LABEL_3726",
"LABEL_3727",
"LABEL_3728",
"LABEL_3729",
"LABEL_373",
"LABEL_3730",
"LABEL_3731",
"LABEL_3732",
"LABEL_3733",
"LABEL_3734",
"LABEL_3735",
"LABEL_3736",
"LABEL_3737",
"LABEL_3738",
"LABEL_3739",
"LABEL_374",
"LABEL_3740",
"LABEL_3741",
"LABEL_3742",
"LABEL_3743",
"LABEL_3744",
"LABEL_3745",
"LABEL_3746",
"LABEL_3747",
"LABEL_3748",
"LABEL_3749",
"LABEL_375",
"LABEL_3750",
"LABEL_3751",
"LABEL_3752",
"LABEL_3753",
"LABEL_3754",
"LABEL_3755",
"LABEL_3756",
"LABEL_3757",
"LABEL_3758",
"LABEL_3759",
"LABEL_376",
"LABEL_3760",
"LABEL_3761",
"LABEL_3762",
"LABEL_3763",
"LABEL_3764",
"LABEL_3765",
"LABEL_3766",
"LABEL_3767",
"LABEL_3768",
"LABEL_3769",
"LABEL_377",
"LABEL_3770",
"LABEL_3771",
"LABEL_3772",
"LABEL_3773",
"LABEL_3774",
"LABEL_3775",
"LABEL_3776",
"LABEL_3777",
"LABEL_3778",
"LABEL_3779",
"LABEL_378",
"LABEL_3780",
"LABEL_3781",
"LABEL_3782",
"LABEL_3783",
"LABEL_3784",
"LABEL_3785",
"LABEL_3786",
"LABEL_3787",
"LABEL_3788",
"LABEL_3789",
"LABEL_379",
"LABEL_3790",
"LABEL_3791",
"LABEL_3792",
"LABEL_3793",
"LABEL_3794",
"LABEL_3795",
"LABEL_3796",
"LABEL_3797",
"LABEL_3798",
"LABEL_3799",
"LABEL_38",
"LABEL_380",
"LABEL_3800",
"LABEL_3801",
"LABEL_3802",
"LABEL_3803",
"LABEL_3804",
"LABEL_3805",
"LABEL_3806",
"LABEL_3807",
"LABEL_3808",
"LABEL_3809",
"LABEL_381",
"LABEL_3810",
"LABEL_3811",
"LABEL_3812",
"LABEL_3813",
"LABEL_3814",
"LABEL_3815",
"LABEL_3816",
"LABEL_3817",
"LABEL_3818",
"LABEL_3819",
"LABEL_382",
"LABEL_3820",
"LABEL_3821",
"LABEL_3822",
"LABEL_3823",
"LABEL_3824",
"LABEL_3825",
"LABEL_3826",
"LABEL_3827",
"LABEL_3828",
"LABEL_3829",
"LABEL_383",
"LABEL_3830",
"LABEL_3831",
"LABEL_3832",
"LABEL_3833",
"LABEL_3834",
"LABEL_3835",
"LABEL_3836",
"LABEL_3837",
"LABEL_3838",
"LABEL_3839",
"LABEL_384",
"LABEL_3840",
"LABEL_3841",
"LABEL_3842",
"LABEL_3843",
"LABEL_3844",
"LABEL_3845",
"LABEL_3846",
"LABEL_3847",
"LABEL_3848",
"LABEL_3849",
"LABEL_385",
"LABEL_3850",
"LABEL_3851",
"LABEL_3852",
"LABEL_3853",
"LABEL_3854",
"LABEL_3855",
"LABEL_3856",
"LABEL_3857",
"LABEL_3858",
"LABEL_3859",
"LABEL_386",
"LABEL_3860",
"LABEL_3861",
"LABEL_3862",
"LABEL_3863",
"LABEL_3864",
"LABEL_3865",
"LABEL_3866",
"LABEL_3867",
"LABEL_3868",
"LABEL_3869",
"LABEL_387",
"LABEL_3870",
"LABEL_3871",
"LABEL_3872",
"LABEL_3873",
"LABEL_3874",
"LABEL_3875",
"LABEL_3876",
"LABEL_3877",
"LABEL_3878",
"LABEL_3879",
"LABEL_388",
"LABEL_3880",
"LABEL_3881",
"LABEL_3882",
"LABEL_3883",
"LABEL_3884",
"LABEL_3885",
"LABEL_3886",
"LABEL_3887",
"LABEL_3888",
"LABEL_3889",
"LABEL_389",
"LABEL_3890",
"LABEL_3891",
"LABEL_3892",
"LABEL_3893",
"LABEL_3894",
"LABEL_3895",
"LABEL_3896",
"LABEL_3897",
"LABEL_3898",
"LABEL_3899",
"LABEL_39",
"LABEL_390",
"LABEL_3900",
"LABEL_3901",
"LABEL_3902",
"LABEL_3903",
"LABEL_3904",
"LABEL_3905",
"LABEL_3906",
"LABEL_3907",
"LABEL_3908",
"LABEL_3909",
"LABEL_391",
"LABEL_3910",
"LABEL_3911",
"LABEL_3912",
"LABEL_3913",
"LABEL_3914",
"LABEL_3915",
"LABEL_3916",
"LABEL_3917",
"LABEL_3918",
"LABEL_3919",
"LABEL_392",
"LABEL_3920",
"LABEL_3921",
"LABEL_3922",
"LABEL_3923",
"LABEL_3924",
"LABEL_3925",
"LABEL_3926",
"LABEL_3927",
"LABEL_3928",
"LABEL_3929",
"LABEL_393",
"LABEL_3930",
"LABEL_3931",
"LABEL_3932",
"LABEL_3933",
"LABEL_3934",
"LABEL_3935",
"LABEL_3936",
"LABEL_3937",
"LABEL_3938",
"LABEL_3939",
"LABEL_394",
"LABEL_3940",
"LABEL_3941",
"LABEL_3942",
"LABEL_3943",
"LABEL_3944",
"LABEL_3945",
"LABEL_3946",
"LABEL_3947",
"LABEL_3948",
"LABEL_3949",
"LABEL_395",
"LABEL_3950",
"LABEL_3951",
"LABEL_3952",
"LABEL_3953",
"LABEL_3954",
"LABEL_3955",
"LABEL_3956",
"LABEL_3957",
"LABEL_3958",
"LABEL_3959",
"LABEL_396",
"LABEL_3960",
"LABEL_3961",
"LABEL_3962",
"LABEL_3963",
"LABEL_3964",
"LABEL_3965",
"LABEL_3966",
"LABEL_3967",
"LABEL_3968",
"LABEL_3969",
"LABEL_397",
"LABEL_3970",
"LABEL_3971",
"LABEL_3972",
"LABEL_3973",
"LABEL_3974",
"LABEL_3975",
"LABEL_3976",
"LABEL_3977",
"LABEL_3978",
"LABEL_3979",
"LABEL_398",
"LABEL_3980",
"LABEL_3981",
"LABEL_3982",
"LABEL_3983",
"LABEL_3984",
"LABEL_3985",
"LABEL_3986",
"LABEL_3987",
"LABEL_3988",
"LABEL_3989",
"LABEL_399",
"LABEL_3990",
"LABEL_3991",
"LABEL_3992",
"LABEL_3993",
"LABEL_3994",
"LABEL_3995",
"LABEL_3996",
"LABEL_3997",
"LABEL_3998",
"LABEL_3999",
"LABEL_4",
"LABEL_40",
"LABEL_400",
"LABEL_4000",
"LABEL_4001",
"LABEL_4002",
"LABEL_4003",
"LABEL_4004",
"LABEL_4005",
"LABEL_4006",
"LABEL_4007",
"LABEL_4008",
"LABEL_4009",
"LABEL_401",
"LABEL_4010",
"LABEL_4011",
"LABEL_4012",
"LABEL_4013",
"LABEL_4014",
"LABEL_4015",
"LABEL_4016",
"LABEL_4017",
"LABEL_4018",
"LABEL_4019",
"LABEL_402",
"LABEL_4020",
"LABEL_4021",
"LABEL_4022",
"LABEL_4023",
"LABEL_4024",
"LABEL_4025",
"LABEL_4026",
"LABEL_4027",
"LABEL_4028",
"LABEL_4029",
"LABEL_403",
"LABEL_4030",
"LABEL_4031",
"LABEL_4032",
"LABEL_4033",
"LABEL_4034",
"LABEL_4035",
"LABEL_4036",
"LABEL_4037",
"LABEL_4038",
"LABEL_4039",
"LABEL_404",
"LABEL_4040",
"LABEL_4041",
"LABEL_4042",
"LABEL_4043",
"LABEL_4044",
"LABEL_4045",
"LABEL_4046",
"LABEL_4047",
"LABEL_4048",
"LABEL_4049",
"LABEL_405",
"LABEL_4050",
"LABEL_4051",
"LABEL_4052",
"LABEL_4053",
"LABEL_4054",
"LABEL_4055",
"LABEL_4056",
"LABEL_4057",
"LABEL_4058",
"LABEL_4059",
"LABEL_406",
"LABEL_4060",
"LABEL_4061",
"LABEL_4062",
"LABEL_4063",
"LABEL_4064",
"LABEL_4065",
"LABEL_4066",
"LABEL_4067",
"LABEL_4068",
"LABEL_4069",
"LABEL_407",
"LABEL_4070",
"LABEL_4071",
"LABEL_4072",
"LABEL_4073",
"LABEL_4074",
"LABEL_4075",
"LABEL_4076",
"LABEL_4077",
"LABEL_4078",
"LABEL_4079",
"LABEL_408",
"LABEL_4080",
"LABEL_4081",
"LABEL_4082",
"LABEL_4083",
"LABEL_4084",
"LABEL_4085",
"LABEL_4086",
"LABEL_4087",
"LABEL_4088",
"LABEL_4089",
"LABEL_409",
"LABEL_4090",
"LABEL_4091",
"LABEL_4092",
"LABEL_4093",
"LABEL_4094",
"LABEL_4095",
"LABEL_4096",
"LABEL_4097",
"LABEL_4098",
"LABEL_4099",
"LABEL_41",
"LABEL_410",
"LABEL_4100",
"LABEL_4101",
"LABEL_4102",
"LABEL_4103",
"LABEL_4104",
"LABEL_4105",
"LABEL_4106",
"LABEL_4107",
"LABEL_4108",
"LABEL_4109",
"LABEL_411",
"LABEL_4110",
"LABEL_4111",
"LABEL_4112",
"LABEL_4113",
"LABEL_4114",
"LABEL_4115",
"LABEL_4116",
"LABEL_4117",
"LABEL_4118",
"LABEL_4119",
"LABEL_412",
"LABEL_4120",
"LABEL_4121",
"LABEL_4122",
"LABEL_4123",
"LABEL_4124",
"LABEL_4125",
"LABEL_4126",
"LABEL_4127",
"LABEL_4128",
"LABEL_4129",
"LABEL_413",
"LABEL_4130",
"LABEL_4131",
"LABEL_4132",
"LABEL_4133",
"LABEL_4134",
"LABEL_4135",
"LABEL_4136",
"LABEL_4137",
"LABEL_4138",
"LABEL_4139",
"LABEL_414",
"LABEL_4140",
"LABEL_4141",
"LABEL_4142",
"LABEL_4143",
"LABEL_4144",
"LABEL_4145",
"LABEL_4146",
"LABEL_4147",
"LABEL_4148",
"LABEL_4149",
"LABEL_415",
"LABEL_4150",
"LABEL_4151",
"LABEL_4152",
"LABEL_4153",
"LABEL_4154",
"LABEL_4155",
"LABEL_4156",
"LABEL_4157",
"LABEL_4158",
"LABEL_4159",
"LABEL_416",
"LABEL_4160",
"LABEL_4161",
"LABEL_4162",
"LABEL_4163",
"LABEL_4164",
"LABEL_4165",
"LABEL_4166",
"LABEL_4167",
"LABEL_4168",
"LABEL_4169",
"LABEL_417",
"LABEL_4170",
"LABEL_4171",
"LABEL_4172",
"LABEL_4173",
"LABEL_4174",
"LABEL_4175",
"LABEL_4176",
"LABEL_4177",
"LABEL_4178",
"LABEL_4179",
"LABEL_418",
"LABEL_4180",
"LABEL_4181",
"LABEL_4182",
"LABEL_4183",
"LABEL_4184",
"LABEL_4185",
"LABEL_4186",
"LABEL_4187",
"LABEL_4188",
"LABEL_4189",
"LABEL_419",
"LABEL_4190",
"LABEL_4191",
"LABEL_4192",
"LABEL_4193",
"LABEL_4194",
"LABEL_4195",
"LABEL_4196",
"LABEL_4197",
"LABEL_4198",
"LABEL_4199",
"LABEL_42",
"LABEL_420",
"LABEL_4200",
"LABEL_4201",
"LABEL_4202",
"LABEL_4203",
"LABEL_4204",
"LABEL_4205",
"LABEL_4206",
"LABEL_4207",
"LABEL_4208",
"LABEL_4209",
"LABEL_421",
"LABEL_4210",
"LABEL_4211",
"LABEL_4212",
"LABEL_4213",
"LABEL_4214",
"LABEL_4215",
"LABEL_4216",
"LABEL_4217",
"LABEL_4218",
"LABEL_4219",
"LABEL_422",
"LABEL_4220",
"LABEL_4221",
"LABEL_4222",
"LABEL_4223",
"LABEL_4224",
"LABEL_4225",
"LABEL_4226",
"LABEL_4227",
"LABEL_4228",
"LABEL_4229",
"LABEL_423",
"LABEL_4230",
"LABEL_4231",
"LABEL_4232",
"LABEL_4233",
"LABEL_4234",
"LABEL_4235",
"LABEL_4236",
"LABEL_4237",
"LABEL_4238",
"LABEL_4239",
"LABEL_424",
"LABEL_4240",
"LABEL_4241",
"LABEL_4242",
"LABEL_4243",
"LABEL_4244",
"LABEL_4245",
"LABEL_4246",
"LABEL_4247",
"LABEL_4248",
"LABEL_4249",
"LABEL_425",
"LABEL_4250",
"LABEL_4251",
"LABEL_4252",
"LABEL_4253",
"LABEL_4254",
"LABEL_4255",
"LABEL_4256",
"LABEL_4257",
"LABEL_4258",
"LABEL_4259",
"LABEL_426",
"LABEL_4260",
"LABEL_4261",
"LABEL_4262",
"LABEL_4263",
"LABEL_4264",
"LABEL_4265",
"LABEL_4266",
"LABEL_4267",
"LABEL_4268",
"LABEL_4269",
"LABEL_427",
"LABEL_4270",
"LABEL_4271",
"LABEL_4272",
"LABEL_4273",
"LABEL_4274",
"LABEL_4275",
"LABEL_4276",
"LABEL_4277",
"LABEL_4278",
"LABEL_4279",
"LABEL_428",
"LABEL_4280",
"LABEL_4281",
"LABEL_4282",
"LABEL_4283",
"LABEL_4284",
"LABEL_4285",
"LABEL_4286",
"LABEL_4287",
"LABEL_4288",
"LABEL_4289",
"LABEL_429",
"LABEL_4290",
"LABEL_4291",
"LABEL_4292",
"LABEL_4293",
"LABEL_4294",
"LABEL_4295",
"LABEL_4296",
"LABEL_4297",
"LABEL_4298",
"LABEL_4299",
"LABEL_43",
"LABEL_430",
"LABEL_4300",
"LABEL_4301",
"LABEL_4302",
"LABEL_4303",
"LABEL_4304",
"LABEL_4305",
"LABEL_4306",
"LABEL_4307",
"LABEL_4308",
"LABEL_4309",
"LABEL_431",
"LABEL_4310",
"LABEL_4311",
"LABEL_4312",
"LABEL_4313",
"LABEL_4314",
"LABEL_4315",
"LABEL_4316",
"LABEL_4317",
"LABEL_4318",
"LABEL_4319",
"LABEL_432",
"LABEL_4320",
"LABEL_4321",
"LABEL_4322",
"LABEL_4323",
"LABEL_4324",
"LABEL_4325",
"LABEL_4326",
"LABEL_4327",
"LABEL_4328",
"LABEL_4329",
"LABEL_433",
"LABEL_4330",
"LABEL_4331",
"LABEL_4332",
"LABEL_4333",
"LABEL_4334",
"LABEL_4335",
"LABEL_4336",
"LABEL_4337",
"LABEL_4338",
"LABEL_4339",
"LABEL_434",
"LABEL_4340",
"LABEL_4341",
"LABEL_4342",
"LABEL_4343",
"LABEL_4344",
"LABEL_4345",
"LABEL_4346",
"LABEL_4347",
"LABEL_4348",
"LABEL_4349",
"LABEL_435",
"LABEL_4350",
"LABEL_4351",
"LABEL_4352",
"LABEL_4353",
"LABEL_4354",
"LABEL_4355",
"LABEL_4356",
"LABEL_4357",
"LABEL_4358",
"LABEL_4359",
"LABEL_436",
"LABEL_4360",
"LABEL_4361",
"LABEL_4362",
"LABEL_4363",
"LABEL_4364",
"LABEL_4365",
"LABEL_4366",
"LABEL_4367",
"LABEL_4368",
"LABEL_4369",
"LABEL_437",
"LABEL_4370",
"LABEL_4371",
"LABEL_4372",
"LABEL_4373",
"LABEL_4374",
"LABEL_4375",
"LABEL_4376",
"LABEL_4377",
"LABEL_4378",
"LABEL_4379",
"LABEL_438",
"LABEL_4380",
"LABEL_4381",
"LABEL_4382",
"LABEL_4383",
"LABEL_4384",
"LABEL_4385",
"LABEL_4386",
"LABEL_4387",
"LABEL_4388",
"LABEL_4389",
"LABEL_439",
"LABEL_4390",
"LABEL_4391",
"LABEL_4392",
"LABEL_4393",
"LABEL_4394",
"LABEL_4395",
"LABEL_4396",
"LABEL_4397",
"LABEL_4398",
"LABEL_4399",
"LABEL_44",
"LABEL_440",
"LABEL_4400",
"LABEL_4401",
"LABEL_4402",
"LABEL_4403",
"LABEL_4404",
"LABEL_4405",
"LABEL_4406",
"LABEL_4407",
"LABEL_4408",
"LABEL_4409",
"LABEL_441",
"LABEL_4410",
"LABEL_4411",
"LABEL_4412",
"LABEL_4413",
"LABEL_4414",
"LABEL_4415",
"LABEL_4416",
"LABEL_4417",
"LABEL_4418",
"LABEL_4419",
"LABEL_442",
"LABEL_4420",
"LABEL_4421",
"LABEL_4422",
"LABEL_4423",
"LABEL_4424",
"LABEL_4425",
"LABEL_4426",
"LABEL_4427",
"LABEL_4428",
"LABEL_4429",
"LABEL_443",
"LABEL_4430",
"LABEL_4431",
"LABEL_4432",
"LABEL_4433",
"LABEL_4434",
"LABEL_4435",
"LABEL_4436",
"LABEL_4437",
"LABEL_4438",
"LABEL_4439",
"LABEL_444",
"LABEL_4440",
"LABEL_4441",
"LABEL_4442",
"LABEL_4443",
"LABEL_4444",
"LABEL_4445",
"LABEL_4446",
"LABEL_4447",
"LABEL_4448",
"LABEL_4449",
"LABEL_445",
"LABEL_4450",
"LABEL_4451",
"LABEL_4452",
"LABEL_4453",
"LABEL_4454",
"LABEL_4455",
"LABEL_4456",
"LABEL_4457",
"LABEL_4458",
"LABEL_4459",
"LABEL_446",
"LABEL_4460",
"LABEL_4461",
"LABEL_4462",
"LABEL_4463",
"LABEL_4464",
"LABEL_4465",
"LABEL_4466",
"LABEL_4467",
"LABEL_4468",
"LABEL_4469",
"LABEL_447",
"LABEL_4470",
"LABEL_4471",
"LABEL_4472",
"LABEL_4473",
"LABEL_4474",
"LABEL_4475",
"LABEL_4476",
"LABEL_4477",
"LABEL_4478",
"LABEL_4479",
"LABEL_448",
"LABEL_4480",
"LABEL_4481",
"LABEL_4482",
"LABEL_4483",
"LABEL_4484",
"LABEL_4485",
"LABEL_4486",
"LABEL_4487",
"LABEL_4488",
"LABEL_4489",
"LABEL_449",
"LABEL_4490",
"LABEL_4491",
"LABEL_4492",
"LABEL_4493",
"LABEL_4494",
"LABEL_4495",
"LABEL_4496",
"LABEL_4497",
"LABEL_4498",
"LABEL_4499",
"LABEL_45",
"LABEL_450",
"LABEL_4500",
"LABEL_4501",
"LABEL_4502",
"LABEL_4503",
"LABEL_4504",
"LABEL_4505",
"LABEL_4506",
"LABEL_4507",
"LABEL_4508",
"LABEL_4509",
"LABEL_451",
"LABEL_4510",
"LABEL_4511",
"LABEL_4512",
"LABEL_4513",
"LABEL_4514",
"LABEL_4515",
"LABEL_4516",
"LABEL_4517",
"LABEL_4518",
"LABEL_4519",
"LABEL_452",
"LABEL_4520",
"LABEL_4521",
"LABEL_4522",
"LABEL_4523",
"LABEL_4524",
"LABEL_4525",
"LABEL_4526",
"LABEL_4527",
"LABEL_4528",
"LABEL_4529",
"LABEL_453",
"LABEL_4530",
"LABEL_4531",
"LABEL_4532",
"LABEL_4533",
"LABEL_4534",
"LABEL_4535",
"LABEL_4536",
"LABEL_4537",
"LABEL_4538",
"LABEL_4539",
"LABEL_454",
"LABEL_4540",
"LABEL_4541",
"LABEL_4542",
"LABEL_4543",
"LABEL_4544",
"LABEL_4545",
"LABEL_4546",
"LABEL_4547",
"LABEL_4548",
"LABEL_4549",
"LABEL_455",
"LABEL_4550",
"LABEL_4551",
"LABEL_4552",
"LABEL_4553",
"LABEL_4554",
"LABEL_4555",
"LABEL_4556",
"LABEL_4557",
"LABEL_4558",
"LABEL_4559",
"LABEL_456",
"LABEL_4560",
"LABEL_4561",
"LABEL_4562",
"LABEL_4563",
"LABEL_4564",
"LABEL_4565",
"LABEL_4566",
"LABEL_4567",
"LABEL_4568",
"LABEL_4569",
"LABEL_457",
"LABEL_4570",
"LABEL_4571",
"LABEL_4572",
"LABEL_4573",
"LABEL_4574",
"LABEL_4575",
"LABEL_4576",
"LABEL_4577",
"LABEL_4578",
"LABEL_4579",
"LABEL_458",
"LABEL_4580",
"LABEL_4581",
"LABEL_4582",
"LABEL_4583",
"LABEL_4584",
"LABEL_4585",
"LABEL_4586",
"LABEL_4587",
"LABEL_4588",
"LABEL_4589",
"LABEL_459",
"LABEL_4590",
"LABEL_4591",
"LABEL_4592",
"LABEL_4593",
"LABEL_4594",
"LABEL_4595",
"LABEL_4596",
"LABEL_4597",
"LABEL_4598",
"LABEL_4599",
"LABEL_46",
"LABEL_460",
"LABEL_4600",
"LABEL_4601",
"LABEL_4602",
"LABEL_4603",
"LABEL_4604",
"LABEL_4605",
"LABEL_4606",
"LABEL_4607",
"LABEL_4608",
"LABEL_4609",
"LABEL_461",
"LABEL_4610",
"LABEL_4611",
"LABEL_4612",
"LABEL_4613",
"LABEL_4614",
"LABEL_4615",
"LABEL_4616",
"LABEL_4617",
"LABEL_4618",
"LABEL_4619",
"LABEL_462",
"LABEL_4620",
"LABEL_4621",
"LABEL_4622",
"LABEL_4623",
"LABEL_4624",
"LABEL_4625",
"LABEL_4626",
"LABEL_4627",
"LABEL_4628",
"LABEL_4629",
"LABEL_463",
"LABEL_4630",
"LABEL_4631",
"LABEL_4632",
"LABEL_4633",
"LABEL_4634",
"LABEL_4635",
"LABEL_4636",
"LABEL_4637",
"LABEL_4638",
"LABEL_4639",
"LABEL_464",
"LABEL_4640",
"LABEL_4641",
"LABEL_4642",
"LABEL_4643",
"LABEL_4644",
"LABEL_4645",
"LABEL_4646",
"LABEL_4647",
"LABEL_4648",
"LABEL_4649",
"LABEL_465",
"LABEL_4650",
"LABEL_4651",
"LABEL_4652",
"LABEL_4653",
"LABEL_4654",
"LABEL_4655",
"LABEL_4656",
"LABEL_4657",
"LABEL_4658",
"LABEL_4659",
"LABEL_466",
"LABEL_4660",
"LABEL_4661",
"LABEL_4662",
"LABEL_4663",
"LABEL_4664",
"LABEL_4665",
"LABEL_4666",
"LABEL_4667",
"LABEL_4668",
"LABEL_4669",
"LABEL_467",
"LABEL_4670",
"LABEL_4671",
"LABEL_4672",
"LABEL_4673",
"LABEL_4674",
"LABEL_4675",
"LABEL_4676",
"LABEL_4677",
"LABEL_4678",
"LABEL_4679",
"LABEL_468",
"LABEL_4680",
"LABEL_4681",
"LABEL_4682",
"LABEL_4683",
"LABEL_4684",
"LABEL_4685",
"LABEL_4686",
"LABEL_4687",
"LABEL_4688",
"LABEL_4689",
"LABEL_469",
"LABEL_4690",
"LABEL_4691",
"LABEL_4692",
"LABEL_4693",
"LABEL_4694",
"LABEL_4695",
"LABEL_4696",
"LABEL_4697",
"LABEL_4698",
"LABEL_4699",
"LABEL_47",
"LABEL_470",
"LABEL_4700",
"LABEL_4701",
"LABEL_4702",
"LABEL_4703",
"LABEL_4704",
"LABEL_4705",
"LABEL_4706",
"LABEL_4707",
"LABEL_4708",
"LABEL_4709",
"LABEL_471",
"LABEL_4710",
"LABEL_4711",
"LABEL_4712",
"LABEL_4713",
"LABEL_4714",
"LABEL_4715",
"LABEL_4716",
"LABEL_4717",
"LABEL_4718",
"LABEL_4719",
"LABEL_472",
"LABEL_4720",
"LABEL_4721",
"LABEL_4722",
"LABEL_4723",
"LABEL_4724",
"LABEL_4725",
"LABEL_4726",
"LABEL_4727",
"LABEL_4728",
"LABEL_4729",
"LABEL_473",
"LABEL_4730",
"LABEL_4731",
"LABEL_4732",
"LABEL_4733",
"LABEL_4734",
"LABEL_4735",
"LABEL_4736",
"LABEL_4737",
"LABEL_4738",
"LABEL_4739",
"LABEL_474",
"LABEL_4740",
"LABEL_4741",
"LABEL_4742",
"LABEL_4743",
"LABEL_4744",
"LABEL_4745",
"LABEL_4746",
"LABEL_4747",
"LABEL_4748",
"LABEL_4749",
"LABEL_475",
"LABEL_4750",
"LABEL_4751",
"LABEL_4752",
"LABEL_4753",
"LABEL_4754",
"LABEL_4755",
"LABEL_4756",
"LABEL_4757",
"LABEL_4758",
"LABEL_4759",
"LABEL_476",
"LABEL_4760",
"LABEL_4761",
"LABEL_4762",
"LABEL_4763",
"LABEL_4764",
"LABEL_4765",
"LABEL_4766",
"LABEL_4767",
"LABEL_4768",
"LABEL_4769",
"LABEL_477",
"LABEL_4770",
"LABEL_4771",
"LABEL_4772",
"LABEL_4773",
"LABEL_4774",
"LABEL_4775",
"LABEL_4776",
"LABEL_4777",
"LABEL_4778",
"LABEL_4779",
"LABEL_478",
"LABEL_4780",
"LABEL_4781",
"LABEL_4782",
"LABEL_4783",
"LABEL_4784",
"LABEL_4785",
"LABEL_4786",
"LABEL_4787",
"LABEL_4788",
"LABEL_4789",
"LABEL_479",
"LABEL_4790",
"LABEL_4791",
"LABEL_4792",
"LABEL_4793",
"LABEL_4794",
"LABEL_4795",
"LABEL_4796",
"LABEL_4797",
"LABEL_4798",
"LABEL_4799",
"LABEL_48",
"LABEL_480",
"LABEL_4800",
"LABEL_4801",
"LABEL_4802",
"LABEL_4803",
"LABEL_4804",
"LABEL_4805",
"LABEL_4806",
"LABEL_4807",
"LABEL_4808",
"LABEL_4809",
"LABEL_481",
"LABEL_4810",
"LABEL_4811",
"LABEL_4812",
"LABEL_4813",
"LABEL_4814",
"LABEL_4815",
"LABEL_4816",
"LABEL_4817",
"LABEL_4818",
"LABEL_4819",
"LABEL_482",
"LABEL_4820",
"LABEL_4821",
"LABEL_4822",
"LABEL_4823",
"LABEL_4824",
"LABEL_4825",
"LABEL_4826",
"LABEL_4827",
"LABEL_4828",
"LABEL_4829",
"LABEL_483",
"LABEL_4830",
"LABEL_4831",
"LABEL_4832",
"LABEL_4833",
"LABEL_4834",
"LABEL_4835",
"LABEL_4836",
"LABEL_4837",
"LABEL_4838",
"LABEL_4839",
"LABEL_484",
"LABEL_4840",
"LABEL_4841",
"LABEL_4842",
"LABEL_4843",
"LABEL_4844",
"LABEL_4845",
"LABEL_4846",
"LABEL_4847",
"LABEL_4848",
"LABEL_4849",
"LABEL_485",
"LABEL_4850",
"LABEL_4851",
"LABEL_4852",
"LABEL_4853",
"LABEL_4854",
"LABEL_4855",
"LABEL_4856",
"LABEL_4857",
"LABEL_4858",
"LABEL_4859",
"LABEL_486",
"LABEL_4860",
"LABEL_4861",
"LABEL_4862",
"LABEL_4863",
"LABEL_4864",
"LABEL_4865",
"LABEL_4866",
"LABEL_4867",
"LABEL_4868",
"LABEL_4869",
"LABEL_487",
"LABEL_4870",
"LABEL_4871",
"LABEL_4872",
"LABEL_4873",
"LABEL_4874",
"LABEL_4875",
"LABEL_4876",
"LABEL_4877",
"LABEL_4878",
"LABEL_4879",
"LABEL_488",
"LABEL_4880",
"LABEL_4881",
"LABEL_4882",
"LABEL_4883",
"LABEL_4884",
"LABEL_4885",
"LABEL_4886",
"LABEL_4887",
"LABEL_4888",
"LABEL_4889",
"LABEL_489",
"LABEL_4890",
"LABEL_4891",
"LABEL_4892",
"LABEL_4893",
"LABEL_4894",
"LABEL_4895",
"LABEL_4896",
"LABEL_4897",
"LABEL_4898",
"LABEL_4899",
"LABEL_49",
"LABEL_490",
"LABEL_4900",
"LABEL_4901",
"LABEL_4902",
"LABEL_4903",
"LABEL_4904",
"LABEL_4905",
"LABEL_4906",
"LABEL_4907",
"LABEL_4908",
"LABEL_4909",
"LABEL_491",
"LABEL_4910",
"LABEL_4911",
"LABEL_4912",
"LABEL_4913",
"LABEL_4914",
"LABEL_4915",
"LABEL_4916",
"LABEL_4917",
"LABEL_4918",
"LABEL_4919",
"LABEL_492",
"LABEL_4920",
"LABEL_4921",
"LABEL_4922",
"LABEL_4923",
"LABEL_4924",
"LABEL_4925",
"LABEL_4926",
"LABEL_4927",
"LABEL_4928",
"LABEL_4929",
"LABEL_493",
"LABEL_4930",
"LABEL_4931",
"LABEL_4932",
"LABEL_4933",
"LABEL_4934",
"LABEL_4935",
"LABEL_4936",
"LABEL_4937",
"LABEL_4938",
"LABEL_4939",
"LABEL_494",
"LABEL_4940",
"LABEL_4941",
"LABEL_4942",
"LABEL_4943",
"LABEL_4944",
"LABEL_4945",
"LABEL_4946",
"LABEL_4947",
"LABEL_4948",
"LABEL_4949",
"LABEL_495",
"LABEL_4950",
"LABEL_4951",
"LABEL_4952",
"LABEL_4953",
"LABEL_4954",
"LABEL_4955",
"LABEL_4956",
"LABEL_4957",
"LABEL_4958",
"LABEL_4959",
"LABEL_496",
"LABEL_4960",
"LABEL_4961",
"LABEL_4962",
"LABEL_4963",
"LABEL_4964",
"LABEL_4965",
"LABEL_4966",
"LABEL_4967",
"LABEL_4968",
"LABEL_4969",
"LABEL_497",
"LABEL_4970",
"LABEL_4971",
"LABEL_4972",
"LABEL_4973",
"LABEL_4974",
"LABEL_4975",
"LABEL_4976",
"LABEL_4977",
"LABEL_4978",
"LABEL_4979",
"LABEL_498",
"LABEL_4980",
"LABEL_4981",
"LABEL_4982",
"LABEL_4983",
"LABEL_4984",
"LABEL_4985",
"LABEL_4986",
"LABEL_4987",
"LABEL_4988",
"LABEL_4989",
"LABEL_499",
"LABEL_4990",
"LABEL_4991",
"LABEL_4992",
"LABEL_4993",
"LABEL_4994",
"LABEL_4995",
"LABEL_4996",
"LABEL_4997",
"LABEL_4998",
"LABEL_4999",
"LABEL_5",
"LABEL_50",
"LABEL_500",
"LABEL_5000",
"LABEL_5001",
"LABEL_5002",
"LABEL_5003",
"LABEL_5004",
"LABEL_5005",
"LABEL_5006",
"LABEL_5007",
"LABEL_5008",
"LABEL_5009",
"LABEL_501",
"LABEL_5010",
"LABEL_5011",
"LABEL_5012",
"LABEL_5013",
"LABEL_5014",
"LABEL_5015",
"LABEL_5016",
"LABEL_5017",
"LABEL_5018",
"LABEL_5019",
"LABEL_502",
"LABEL_5020",
"LABEL_5021",
"LABEL_5022",
"LABEL_5023",
"LABEL_5024",
"LABEL_5025",
"LABEL_5026",
"LABEL_5027",
"LABEL_5028",
"LABEL_5029",
"LABEL_503",
"LABEL_5030",
"LABEL_5031",
"LABEL_5032",
"LABEL_5033",
"LABEL_5034",
"LABEL_5035",
"LABEL_5036",
"LABEL_5037",
"LABEL_5038",
"LABEL_5039",
"LABEL_504",
"LABEL_5040",
"LABEL_5041",
"LABEL_5042",
"LABEL_5043",
"LABEL_5044",
"LABEL_5045",
"LABEL_5046",
"LABEL_5047",
"LABEL_5048",
"LABEL_5049",
"LABEL_505",
"LABEL_5050",
"LABEL_5051",
"LABEL_5052",
"LABEL_5053",
"LABEL_5054",
"LABEL_5055",
"LABEL_5056",
"LABEL_5057",
"LABEL_5058",
"LABEL_5059",
"LABEL_506",
"LABEL_5060",
"LABEL_5061",
"LABEL_5062",
"LABEL_5063",
"LABEL_5064",
"LABEL_5065",
"LABEL_5066",
"LABEL_5067",
"LABEL_5068",
"LABEL_5069",
"LABEL_507",
"LABEL_5070",
"LABEL_5071",
"LABEL_5072",
"LABEL_5073",
"LABEL_5074",
"LABEL_5075",
"LABEL_5076",
"LABEL_5077",
"LABEL_5078",
"LABEL_5079",
"LABEL_508",
"LABEL_5080",
"LABEL_5081",
"LABEL_5082",
"LABEL_5083",
"LABEL_5084",
"LABEL_5085",
"LABEL_5086",
"LABEL_5087",
"LABEL_5088",
"LABEL_5089",
"LABEL_509",
"LABEL_5090",
"LABEL_5091",
"LABEL_5092",
"LABEL_5093",
"LABEL_5094",
"LABEL_5095",
"LABEL_5096",
"LABEL_5097",
"LABEL_5098",
"LABEL_5099",
"LABEL_51",
"LABEL_510",
"LABEL_5100",
"LABEL_5101",
"LABEL_5102",
"LABEL_5103",
"LABEL_5104",
"LABEL_5105",
"LABEL_5106",
"LABEL_5107",
"LABEL_5108",
"LABEL_5109",
"LABEL_511",
"LABEL_5110",
"LABEL_5111",
"LABEL_5112",
"LABEL_5113",
"LABEL_5114",
"LABEL_5115",
"LABEL_5116",
"LABEL_5117",
"LABEL_5118",
"LABEL_5119",
"LABEL_512",
"LABEL_5120",
"LABEL_5121",
"LABEL_5122",
"LABEL_5123",
"LABEL_5124",
"LABEL_5125",
"LABEL_5126",
"LABEL_5127",
"LABEL_5128",
"LABEL_5129",
"LABEL_513",
"LABEL_5130",
"LABEL_5131",
"LABEL_5132",
"LABEL_5133",
"LABEL_5134",
"LABEL_5135",
"LABEL_5136",
"LABEL_5137",
"LABEL_5138",
"LABEL_5139",
"LABEL_514",
"LABEL_5140",
"LABEL_5141",
"LABEL_5142",
"LABEL_5143",
"LABEL_5144",
"LABEL_5145",
"LABEL_5146",
"LABEL_5147",
"LABEL_5148",
"LABEL_5149",
"LABEL_515",
"LABEL_5150",
"LABEL_5151",
"LABEL_5152",
"LABEL_5153",
"LABEL_5154",
"LABEL_5155",
"LABEL_5156",
"LABEL_5157",
"LABEL_5158",
"LABEL_5159",
"LABEL_516",
"LABEL_5160",
"LABEL_5161",
"LABEL_5162",
"LABEL_5163",
"LABEL_5164",
"LABEL_5165",
"LABEL_5166",
"LABEL_5167",
"LABEL_5168",
"LABEL_5169",
"LABEL_517",
"LABEL_5170",
"LABEL_5171",
"LABEL_5172",
"LABEL_5173",
"LABEL_5174",
"LABEL_5175",
"LABEL_5176",
"LABEL_5177",
"LABEL_5178",
"LABEL_5179",
"LABEL_518",
"LABEL_5180",
"LABEL_5181",
"LABEL_5182",
"LABEL_5183",
"LABEL_5184",
"LABEL_5185",
"LABEL_5186",
"LABEL_5187",
"LABEL_5188",
"LABEL_5189",
"LABEL_519",
"LABEL_5190",
"LABEL_5191",
"LABEL_5192",
"LABEL_5193",
"LABEL_5194",
"LABEL_5195",
"LABEL_5196",
"LABEL_5197",
"LABEL_5198",
"LABEL_5199",
"LABEL_52",
"LABEL_520",
"LABEL_5200",
"LABEL_5201",
"LABEL_5202",
"LABEL_5203",
"LABEL_5204",
"LABEL_5205",
"LABEL_5206",
"LABEL_5207",
"LABEL_5208",
"LABEL_5209",
"LABEL_521",
"LABEL_5210",
"LABEL_5211",
"LABEL_5212",
"LABEL_5213",
"LABEL_5214",
"LABEL_5215",
"LABEL_5216",
"LABEL_5217",
"LABEL_5218",
"LABEL_5219",
"LABEL_522",
"LABEL_5220",
"LABEL_5221",
"LABEL_5222",
"LABEL_5223",
"LABEL_5224",
"LABEL_5225",
"LABEL_5226",
"LABEL_5227",
"LABEL_5228",
"LABEL_5229",
"LABEL_523",
"LABEL_5230",
"LABEL_5231",
"LABEL_5232",
"LABEL_5233",
"LABEL_5234",
"LABEL_5235",
"LABEL_5236",
"LABEL_5237",
"LABEL_5238",
"LABEL_5239",
"LABEL_524",
"LABEL_5240",
"LABEL_5241",
"LABEL_5242",
"LABEL_5243",
"LABEL_5244",
"LABEL_5245",
"LABEL_5246",
"LABEL_5247",
"LABEL_5248",
"LABEL_5249",
"LABEL_525",
"LABEL_5250",
"LABEL_5251",
"LABEL_5252",
"LABEL_5253",
"LABEL_5254",
"LABEL_5255",
"LABEL_5256",
"LABEL_5257",
"LABEL_5258",
"LABEL_5259",
"LABEL_526",
"LABEL_5260",
"LABEL_5261",
"LABEL_5262",
"LABEL_5263",
"LABEL_5264",
"LABEL_5265",
"LABEL_5266",
"LABEL_5267",
"LABEL_5268",
"LABEL_5269",
"LABEL_527",
"LABEL_5270",
"LABEL_5271",
"LABEL_5272",
"LABEL_5273",
"LABEL_5274",
"LABEL_5275",
"LABEL_5276",
"LABEL_5277",
"LABEL_5278",
"LABEL_5279",
"LABEL_528",
"LABEL_5280",
"LABEL_5281",
"LABEL_5282",
"LABEL_5283",
"LABEL_5284",
"LABEL_5285",
"LABEL_5286",
"LABEL_5287",
"LABEL_5288",
"LABEL_5289",
"LABEL_529",
"LABEL_5290",
"LABEL_5291",
"LABEL_5292",
"LABEL_5293",
"LABEL_5294",
"LABEL_5295",
"LABEL_5296",
"LABEL_5297",
"LABEL_5298",
"LABEL_5299",
"LABEL_53",
"LABEL_530",
"LABEL_5300",
"LABEL_5301",
"LABEL_5302",
"LABEL_5303",
"LABEL_5304",
"LABEL_5305",
"LABEL_5306",
"LABEL_5307",
"LABEL_5308",
"LABEL_5309",
"LABEL_531",
"LABEL_5310",
"LABEL_5311",
"LABEL_5312",
"LABEL_5313",
"LABEL_5314",
"LABEL_5315",
"LABEL_5316",
"LABEL_5317",
"LABEL_5318",
"LABEL_5319",
"LABEL_532",
"LABEL_5320",
"LABEL_5321",
"LABEL_5322",
"LABEL_5323",
"LABEL_5324",
"LABEL_5325",
"LABEL_5326",
"LABEL_5327",
"LABEL_5328",
"LABEL_5329",
"LABEL_533",
"LABEL_5330",
"LABEL_5331",
"LABEL_5332",
"LABEL_5333",
"LABEL_5334",
"LABEL_5335",
"LABEL_5336",
"LABEL_5337",
"LABEL_5338",
"LABEL_5339",
"LABEL_534",
"LABEL_5340",
"LABEL_5341",
"LABEL_5342",
"LABEL_5343",
"LABEL_5344",
"LABEL_5345",
"LABEL_5346",
"LABEL_5347",
"LABEL_5348",
"LABEL_5349",
"LABEL_535",
"LABEL_5350",
"LABEL_5351",
"LABEL_5352",
"LABEL_5353",
"LABEL_5354",
"LABEL_5355",
"LABEL_5356",
"LABEL_5357",
"LABEL_5358",
"LABEL_5359",
"LABEL_536",
"LABEL_5360",
"LABEL_5361",
"LABEL_5362",
"LABEL_5363",
"LABEL_5364",
"LABEL_5365",
"LABEL_5366",
"LABEL_5367",
"LABEL_5368",
"LABEL_5369",
"LABEL_537",
"LABEL_5370",
"LABEL_5371",
"LABEL_5372",
"LABEL_5373",
"LABEL_5374",
"LABEL_5375",
"LABEL_5376",
"LABEL_5377",
"LABEL_5378",
"LABEL_5379",
"LABEL_538",
"LABEL_5380",
"LABEL_5381",
"LABEL_5382",
"LABEL_5383",
"LABEL_5384",
"LABEL_5385",
"LABEL_5386",
"LABEL_5387",
"LABEL_5388",
"LABEL_5389",
"LABEL_539",
"LABEL_5390",
"LABEL_5391",
"LABEL_5392",
"LABEL_5393",
"LABEL_5394",
"LABEL_5395",
"LABEL_5396",
"LABEL_5397",
"LABEL_5398",
"LABEL_5399",
"LABEL_54",
"LABEL_540",
"LABEL_5400",
"LABEL_5401",
"LABEL_5402",
"LABEL_5403",
"LABEL_5404",
"LABEL_5405",
"LABEL_5406",
"LABEL_5407",
"LABEL_5408",
"LABEL_5409",
"LABEL_541",
"LABEL_5410",
"LABEL_5411",
"LABEL_5412",
"LABEL_5413",
"LABEL_5414",
"LABEL_5415",
"LABEL_5416",
"LABEL_5417",
"LABEL_5418",
"LABEL_5419",
"LABEL_542",
"LABEL_5420",
"LABEL_5421",
"LABEL_5422",
"LABEL_5423",
"LABEL_5424",
"LABEL_5425",
"LABEL_5426",
"LABEL_5427",
"LABEL_5428",
"LABEL_5429",
"LABEL_543",
"LABEL_5430",
"LABEL_5431",
"LABEL_5432",
"LABEL_5433",
"LABEL_5434",
"LABEL_5435",
"LABEL_5436",
"LABEL_5437",
"LABEL_5438",
"LABEL_5439",
"LABEL_544",
"LABEL_5440",
"LABEL_5441",
"LABEL_5442",
"LABEL_5443",
"LABEL_5444",
"LABEL_5445",
"LABEL_5446",
"LABEL_5447",
"LABEL_5448",
"LABEL_5449",
"LABEL_545",
"LABEL_5450",
"LABEL_5451",
"LABEL_5452",
"LABEL_5453",
"LABEL_5454",
"LABEL_5455",
"LABEL_5456",
"LABEL_5457",
"LABEL_5458",
"LABEL_5459",
"LABEL_546",
"LABEL_5460",
"LABEL_5461",
"LABEL_5462",
"LABEL_5463",
"LABEL_5464",
"LABEL_5465",
"LABEL_5466",
"LABEL_5467",
"LABEL_5468",
"LABEL_5469",
"LABEL_547",
"LABEL_5470",
"LABEL_5471",
"LABEL_5472",
"LABEL_5473",
"LABEL_5474",
"LABEL_5475",
"LABEL_5476",
"LABEL_5477",
"LABEL_5478",
"LABEL_5479",
"LABEL_548",
"LABEL_5480",
"LABEL_5481",
"LABEL_5482",
"LABEL_5483",
"LABEL_5484",
"LABEL_5485",
"LABEL_5486",
"LABEL_5487",
"LABEL_5488",
"LABEL_5489",
"LABEL_549",
"LABEL_5490",
"LABEL_5491",
"LABEL_5492",
"LABEL_5493",
"LABEL_5494",
"LABEL_5495",
"LABEL_5496",
"LABEL_5497",
"LABEL_5498",
"LABEL_5499",
"LABEL_55",
"LABEL_550",
"LABEL_5500",
"LABEL_5501",
"LABEL_5502",
"LABEL_5503",
"LABEL_5504",
"LABEL_5505",
"LABEL_5506",
"LABEL_5507",
"LABEL_5508",
"LABEL_5509",
"LABEL_551",
"LABEL_5510",
"LABEL_5511",
"LABEL_5512",
"LABEL_5513",
"LABEL_5514",
"LABEL_5515",
"LABEL_5516",
"LABEL_5517",
"LABEL_5518",
"LABEL_5519",
"LABEL_552",
"LABEL_5520",
"LABEL_5521",
"LABEL_5522",
"LABEL_5523",
"LABEL_5524",
"LABEL_5525",
"LABEL_5526",
"LABEL_5527",
"LABEL_5528",
"LABEL_5529",
"LABEL_553",
"LABEL_5530",
"LABEL_5531",
"LABEL_5532",
"LABEL_5533",
"LABEL_5534",
"LABEL_5535",
"LABEL_5536",
"LABEL_5537",
"LABEL_5538",
"LABEL_5539",
"LABEL_554",
"LABEL_5540",
"LABEL_5541",
"LABEL_5542",
"LABEL_5543",
"LABEL_5544",
"LABEL_5545",
"LABEL_5546",
"LABEL_5547",
"LABEL_5548",
"LABEL_5549",
"LABEL_555",
"LABEL_5550",
"LABEL_5551",
"LABEL_5552",
"LABEL_5553",
"LABEL_5554",
"LABEL_5555",
"LABEL_5556",
"LABEL_5557",
"LABEL_5558",
"LABEL_5559",
"LABEL_556",
"LABEL_5560",
"LABEL_5561",
"LABEL_5562",
"LABEL_5563",
"LABEL_5564",
"LABEL_5565",
"LABEL_5566",
"LABEL_5567",
"LABEL_5568",
"LABEL_5569",
"LABEL_557",
"LABEL_5570",
"LABEL_5571",
"LABEL_5572",
"LABEL_5573",
"LABEL_5574",
"LABEL_5575",
"LABEL_5576",
"LABEL_5577",
"LABEL_5578",
"LABEL_5579",
"LABEL_558",
"LABEL_5580",
"LABEL_5581",
"LABEL_5582",
"LABEL_5583",
"LABEL_5584",
"LABEL_5585",
"LABEL_5586",
"LABEL_5587",
"LABEL_5588",
"LABEL_5589",
"LABEL_559",
"LABEL_5590",
"LABEL_5591",
"LABEL_5592",
"LABEL_5593",
"LABEL_5594",
"LABEL_5595",
"LABEL_5596",
"LABEL_5597",
"LABEL_5598",
"LABEL_5599",
"LABEL_56",
"LABEL_560",
"LABEL_5600",
"LABEL_5601",
"LABEL_5602",
"LABEL_5603",
"LABEL_5604",
"LABEL_5605",
"LABEL_5606",
"LABEL_5607",
"LABEL_5608",
"LABEL_5609",
"LABEL_561",
"LABEL_5610",
"LABEL_5611",
"LABEL_5612",
"LABEL_5613",
"LABEL_5614",
"LABEL_5615",
"LABEL_5616",
"LABEL_5617",
"LABEL_5618",
"LABEL_5619",
"LABEL_562",
"LABEL_5620",
"LABEL_5621",
"LABEL_5622",
"LABEL_5623",
"LABEL_5624",
"LABEL_5625",
"LABEL_5626",
"LABEL_5627",
"LABEL_5628",
"LABEL_5629",
"LABEL_563",
"LABEL_5630",
"LABEL_5631",
"LABEL_5632",
"LABEL_5633",
"LABEL_5634",
"LABEL_5635",
"LABEL_5636",
"LABEL_5637",
"LABEL_5638",
"LABEL_5639",
"LABEL_564",
"LABEL_5640",
"LABEL_5641",
"LABEL_5642",
"LABEL_5643",
"LABEL_5644",
"LABEL_5645",
"LABEL_5646",
"LABEL_5647",
"LABEL_5648",
"LABEL_5649",
"LABEL_565",
"LABEL_5650",
"LABEL_5651",
"LABEL_5652",
"LABEL_5653",
"LABEL_5654",
"LABEL_5655",
"LABEL_5656",
"LABEL_5657",
"LABEL_5658",
"LABEL_5659",
"LABEL_566",
"LABEL_5660",
"LABEL_5661",
"LABEL_5662",
"LABEL_5663",
"LABEL_5664",
"LABEL_5665",
"LABEL_5666",
"LABEL_5667",
"LABEL_5668",
"LABEL_5669",
"LABEL_567",
"LABEL_5670",
"LABEL_5671",
"LABEL_5672",
"LABEL_5673",
"LABEL_5674",
"LABEL_5675",
"LABEL_5676",
"LABEL_5677",
"LABEL_5678",
"LABEL_5679",
"LABEL_568",
"LABEL_5680",
"LABEL_5681",
"LABEL_5682",
"LABEL_5683",
"LABEL_5684",
"LABEL_5685",
"LABEL_5686",
"LABEL_5687",
"LABEL_5688",
"LABEL_5689",
"LABEL_569",
"LABEL_5690",
"LABEL_5691",
"LABEL_5692",
"LABEL_5693",
"LABEL_5694",
"LABEL_5695",
"LABEL_5696",
"LABEL_5697",
"LABEL_5698",
"LABEL_5699",
"LABEL_57",
"LABEL_570",
"LABEL_5700",
"LABEL_5701",
"LABEL_5702",
"LABEL_5703",
"LABEL_5704",
"LABEL_5705",
"LABEL_5706",
"LABEL_5707",
"LABEL_5708",
"LABEL_5709",
"LABEL_571",
"LABEL_5710",
"LABEL_5711",
"LABEL_5712",
"LABEL_5713",
"LABEL_5714",
"LABEL_5715",
"LABEL_5716",
"LABEL_5717",
"LABEL_5718",
"LABEL_5719",
"LABEL_572",
"LABEL_5720",
"LABEL_5721",
"LABEL_5722",
"LABEL_5723",
"LABEL_5724",
"LABEL_5725",
"LABEL_5726",
"LABEL_5727",
"LABEL_5728",
"LABEL_5729",
"LABEL_573",
"LABEL_5730",
"LABEL_5731",
"LABEL_5732",
"LABEL_5733",
"LABEL_5734",
"LABEL_5735",
"LABEL_5736",
"LABEL_5737",
"LABEL_5738",
"LABEL_5739",
"LABEL_574",
"LABEL_5740",
"LABEL_5741",
"LABEL_5742",
"LABEL_5743",
"LABEL_5744",
"LABEL_5745",
"LABEL_5746",
"LABEL_5747",
"LABEL_5748",
"LABEL_5749",
"LABEL_575",
"LABEL_5750",
"LABEL_5751",
"LABEL_5752",
"LABEL_5753",
"LABEL_5754",
"LABEL_5755",
"LABEL_5756",
"LABEL_5757",
"LABEL_5758",
"LABEL_5759",
"LABEL_576",
"LABEL_5760",
"LABEL_5761",
"LABEL_5762",
"LABEL_5763",
"LABEL_5764",
"LABEL_5765",
"LABEL_5766",
"LABEL_5767",
"LABEL_5768",
"LABEL_5769",
"LABEL_577",
"LABEL_5770",
"LABEL_5771",
"LABEL_5772",
"LABEL_5773",
"LABEL_5774",
"LABEL_5775",
"LABEL_5776",
"LABEL_5777",
"LABEL_5778",
"LABEL_5779",
"LABEL_578",
"LABEL_5780",
"LABEL_5781",
"LABEL_5782",
"LABEL_5783",
"LABEL_5784",
"LABEL_5785",
"LABEL_5786",
"LABEL_5787",
"LABEL_5788",
"LABEL_5789",
"LABEL_579",
"LABEL_5790",
"LABEL_5791",
"LABEL_5792",
"LABEL_5793",
"LABEL_5794",
"LABEL_5795",
"LABEL_5796",
"LABEL_5797",
"LABEL_5798",
"LABEL_5799",
"LABEL_58",
"LABEL_580",
"LABEL_5800",
"LABEL_5801",
"LABEL_5802",
"LABEL_5803",
"LABEL_5804",
"LABEL_5805",
"LABEL_5806",
"LABEL_5807",
"LABEL_5808",
"LABEL_5809",
"LABEL_581",
"LABEL_5810",
"LABEL_5811",
"LABEL_5812",
"LABEL_5813",
"LABEL_5814",
"LABEL_5815",
"LABEL_5816",
"LABEL_5817",
"LABEL_5818",
"LABEL_5819",
"LABEL_582",
"LABEL_5820",
"LABEL_5821",
"LABEL_5822",
"LABEL_5823",
"LABEL_5824",
"LABEL_5825",
"LABEL_5826",
"LABEL_5827",
"LABEL_5828",
"LABEL_5829",
"LABEL_583",
"LABEL_5830",
"LABEL_5831",
"LABEL_5832",
"LABEL_5833",
"LABEL_5834",
"LABEL_5835",
"LABEL_5836",
"LABEL_5837",
"LABEL_5838",
"LABEL_5839",
"LABEL_584",
"LABEL_5840",
"LABEL_5841",
"LABEL_5842",
"LABEL_5843",
"LABEL_5844",
"LABEL_5845",
"LABEL_5846",
"LABEL_5847",
"LABEL_5848",
"LABEL_5849",
"LABEL_585",
"LABEL_5850",
"LABEL_5851",
"LABEL_5852",
"LABEL_5853",
"LABEL_5854",
"LABEL_5855",
"LABEL_5856",
"LABEL_5857",
"LABEL_5858",
"LABEL_5859",
"LABEL_586",
"LABEL_5860",
"LABEL_5861",
"LABEL_5862",
"LABEL_5863",
"LABEL_5864",
"LABEL_5865",
"LABEL_5866",
"LABEL_5867",
"LABEL_5868",
"LABEL_5869",
"LABEL_587",
"LABEL_5870",
"LABEL_5871",
"LABEL_5872",
"LABEL_5873",
"LABEL_5874",
"LABEL_5875",
"LABEL_5876",
"LABEL_5877",
"LABEL_5878",
"LABEL_5879",
"LABEL_588",
"LABEL_5880",
"LABEL_5881",
"LABEL_5882",
"LABEL_5883",
"LABEL_5884",
"LABEL_5885",
"LABEL_5886",
"LABEL_5887",
"LABEL_5888",
"LABEL_5889",
"LABEL_589",
"LABEL_5890",
"LABEL_5891",
"LABEL_5892",
"LABEL_5893",
"LABEL_5894",
"LABEL_5895",
"LABEL_5896",
"LABEL_5897",
"LABEL_5898",
"LABEL_5899",
"LABEL_59",
"LABEL_590",
"LABEL_5900",
"LABEL_5901",
"LABEL_5902",
"LABEL_5903",
"LABEL_5904",
"LABEL_5905",
"LABEL_5906",
"LABEL_5907",
"LABEL_5908",
"LABEL_5909",
"LABEL_591",
"LABEL_5910",
"LABEL_5911",
"LABEL_5912",
"LABEL_5913",
"LABEL_5914",
"LABEL_5915",
"LABEL_5916",
"LABEL_5917",
"LABEL_5918",
"LABEL_5919",
"LABEL_592",
"LABEL_5920",
"LABEL_5921",
"LABEL_5922",
"LABEL_5923",
"LABEL_5924",
"LABEL_5925",
"LABEL_5926",
"LABEL_5927",
"LABEL_5928",
"LABEL_5929",
"LABEL_593",
"LABEL_5930",
"LABEL_5931",
"LABEL_5932",
"LABEL_5933",
"LABEL_5934",
"LABEL_5935",
"LABEL_5936",
"LABEL_5937",
"LABEL_5938",
"LABEL_5939",
"LABEL_594",
"LABEL_5940",
"LABEL_5941",
"LABEL_5942",
"LABEL_5943",
"LABEL_5944",
"LABEL_5945",
"LABEL_5946",
"LABEL_5947",
"LABEL_5948",
"LABEL_5949",
"LABEL_595",
"LABEL_5950",
"LABEL_5951",
"LABEL_5952",
"LABEL_5953",
"LABEL_5954",
"LABEL_5955",
"LABEL_5956",
"LABEL_5957",
"LABEL_5958",
"LABEL_5959",
"LABEL_596",
"LABEL_5960",
"LABEL_5961",
"LABEL_5962",
"LABEL_5963",
"LABEL_5964",
"LABEL_5965",
"LABEL_5966",
"LABEL_5967",
"LABEL_5968",
"LABEL_5969",
"LABEL_597",
"LABEL_5970",
"LABEL_5971",
"LABEL_5972",
"LABEL_5973",
"LABEL_5974",
"LABEL_5975",
"LABEL_5976",
"LABEL_5977",
"LABEL_5978",
"LABEL_5979",
"LABEL_598",
"LABEL_5980",
"LABEL_5981",
"LABEL_5982",
"LABEL_5983",
"LABEL_5984",
"LABEL_5985",
"LABEL_5986",
"LABEL_5987",
"LABEL_5988",
"LABEL_5989",
"LABEL_599",
"LABEL_5990",
"LABEL_5991",
"LABEL_5992",
"LABEL_5993",
"LABEL_5994",
"LABEL_5995",
"LABEL_5996",
"LABEL_5997",
"LABEL_5998",
"LABEL_5999",
"LABEL_6",
"LABEL_60",
"LABEL_600",
"LABEL_6000",
"LABEL_6001",
"LABEL_6002",
"LABEL_6003",
"LABEL_6004",
"LABEL_6005",
"LABEL_6006",
"LABEL_6007",
"LABEL_6008",
"LABEL_6009",
"LABEL_601",
"LABEL_6010",
"LABEL_6011",
"LABEL_6012",
"LABEL_6013",
"LABEL_6014",
"LABEL_6015",
"LABEL_6016",
"LABEL_6017",
"LABEL_6018",
"LABEL_6019",
"LABEL_602",
"LABEL_6020",
"LABEL_6021",
"LABEL_6022",
"LABEL_6023",
"LABEL_6024",
"LABEL_6025",
"LABEL_6026",
"LABEL_6027",
"LABEL_6028",
"LABEL_6029",
"LABEL_603",
"LABEL_6030",
"LABEL_6031",
"LABEL_6032",
"LABEL_6033",
"LABEL_6034",
"LABEL_6035",
"LABEL_6036",
"LABEL_6037",
"LABEL_6038",
"LABEL_6039",
"LABEL_604",
"LABEL_6040",
"LABEL_6041",
"LABEL_6042",
"LABEL_6043",
"LABEL_6044",
"LABEL_6045",
"LABEL_6046",
"LABEL_6047",
"LABEL_6048",
"LABEL_6049",
"LABEL_605",
"LABEL_6050",
"LABEL_6051",
"LABEL_6052",
"LABEL_6053",
"LABEL_6054",
"LABEL_6055",
"LABEL_6056",
"LABEL_6057",
"LABEL_6058",
"LABEL_6059",
"LABEL_606",
"LABEL_6060",
"LABEL_6061",
"LABEL_6062",
"LABEL_6063",
"LABEL_6064",
"LABEL_6065",
"LABEL_6066",
"LABEL_6067",
"LABEL_6068",
"LABEL_6069",
"LABEL_607",
"LABEL_6070",
"LABEL_6071",
"LABEL_6072",
"LABEL_6073",
"LABEL_6074",
"LABEL_6075",
"LABEL_6076",
"LABEL_6077",
"LABEL_6078",
"LABEL_6079",
"LABEL_608",
"LABEL_6080",
"LABEL_6081",
"LABEL_6082",
"LABEL_6083",
"LABEL_6084",
"LABEL_6085",
"LABEL_6086",
"LABEL_6087",
"LABEL_6088",
"LABEL_6089",
"LABEL_609",
"LABEL_6090",
"LABEL_6091",
"LABEL_6092",
"LABEL_6093",
"LABEL_6094",
"LABEL_6095",
"LABEL_6096",
"LABEL_6097",
"LABEL_6098",
"LABEL_6099",
"LABEL_61",
"LABEL_610",
"LABEL_6100",
"LABEL_6101",
"LABEL_6102",
"LABEL_6103",
"LABEL_6104",
"LABEL_6105",
"LABEL_6106",
"LABEL_6107",
"LABEL_6108",
"LABEL_6109",
"LABEL_611",
"LABEL_6110",
"LABEL_6111",
"LABEL_6112",
"LABEL_6113",
"LABEL_6114",
"LABEL_6115",
"LABEL_6116",
"LABEL_6117",
"LABEL_6118",
"LABEL_6119",
"LABEL_612",
"LABEL_6120",
"LABEL_6121",
"LABEL_6122",
"LABEL_6123",
"LABEL_6124",
"LABEL_6125",
"LABEL_6126",
"LABEL_6127",
"LABEL_6128",
"LABEL_6129",
"LABEL_613",
"LABEL_6130",
"LABEL_6131",
"LABEL_6132",
"LABEL_6133",
"LABEL_6134",
"LABEL_6135",
"LABEL_6136",
"LABEL_6137",
"LABEL_6138",
"LABEL_6139",
"LABEL_614",
"LABEL_6140",
"LABEL_6141",
"LABEL_6142",
"LABEL_6143",
"LABEL_6144",
"LABEL_6145",
"LABEL_6146",
"LABEL_6147",
"LABEL_6148",
"LABEL_6149",
"LABEL_615",
"LABEL_6150",
"LABEL_6151",
"LABEL_6152",
"LABEL_6153",
"LABEL_6154",
"LABEL_6155",
"LABEL_6156",
"LABEL_6157",
"LABEL_6158",
"LABEL_6159",
"LABEL_616",
"LABEL_6160",
"LABEL_6161",
"LABEL_6162",
"LABEL_6163",
"LABEL_6164",
"LABEL_6165",
"LABEL_6166",
"LABEL_6167",
"LABEL_6168",
"LABEL_6169",
"LABEL_617",
"LABEL_6170",
"LABEL_6171",
"LABEL_6172",
"LABEL_6173",
"LABEL_6174",
"LABEL_6175",
"LABEL_6176",
"LABEL_6177",
"LABEL_6178",
"LABEL_6179",
"LABEL_618",
"LABEL_6180",
"LABEL_6181",
"LABEL_6182",
"LABEL_6183",
"LABEL_6184",
"LABEL_6185",
"LABEL_6186",
"LABEL_6187",
"LABEL_6188",
"LABEL_6189",
"LABEL_619",
"LABEL_6190",
"LABEL_6191",
"LABEL_6192",
"LABEL_6193",
"LABEL_6194",
"LABEL_6195",
"LABEL_6196",
"LABEL_6197",
"LABEL_6198",
"LABEL_6199",
"LABEL_62",
"LABEL_620",
"LABEL_6200",
"LABEL_6201",
"LABEL_6202",
"LABEL_6203",
"LABEL_6204",
"LABEL_6205",
"LABEL_6206",
"LABEL_6207",
"LABEL_6208",
"LABEL_6209",
"LABEL_621",
"LABEL_6210",
"LABEL_6211",
"LABEL_6212",
"LABEL_6213",
"LABEL_6214",
"LABEL_6215",
"LABEL_6216",
"LABEL_6217",
"LABEL_6218",
"LABEL_6219",
"LABEL_622",
"LABEL_6220",
"LABEL_6221",
"LABEL_6222",
"LABEL_6223",
"LABEL_6224",
"LABEL_6225",
"LABEL_6226",
"LABEL_6227",
"LABEL_6228",
"LABEL_6229",
"LABEL_623",
"LABEL_6230",
"LABEL_6231",
"LABEL_6232",
"LABEL_6233",
"LABEL_6234",
"LABEL_6235",
"LABEL_6236",
"LABEL_6237",
"LABEL_6238",
"LABEL_6239",
"LABEL_624",
"LABEL_6240",
"LABEL_6241",
"LABEL_6242",
"LABEL_6243",
"LABEL_6244",
"LABEL_6245",
"LABEL_6246",
"LABEL_6247",
"LABEL_6248",
"LABEL_6249",
"LABEL_625",
"LABEL_6250",
"LABEL_6251",
"LABEL_6252",
"LABEL_6253",
"LABEL_6254",
"LABEL_6255",
"LABEL_6256",
"LABEL_6257",
"LABEL_6258",
"LABEL_6259",
"LABEL_626",
"LABEL_6260",
"LABEL_6261",
"LABEL_6262",
"LABEL_6263",
"LABEL_6264",
"LABEL_6265",
"LABEL_6266",
"LABEL_6267",
"LABEL_6268",
"LABEL_6269",
"LABEL_627",
"LABEL_6270",
"LABEL_6271",
"LABEL_6272",
"LABEL_6273",
"LABEL_6274",
"LABEL_6275",
"LABEL_6276",
"LABEL_6277",
"LABEL_6278",
"LABEL_6279",
"LABEL_628",
"LABEL_6280",
"LABEL_6281",
"LABEL_6282",
"LABEL_6283",
"LABEL_6284",
"LABEL_6285",
"LABEL_6286",
"LABEL_6287",
"LABEL_6288",
"LABEL_6289",
"LABEL_629",
"LABEL_6290",
"LABEL_6291",
"LABEL_6292",
"LABEL_6293",
"LABEL_6294",
"LABEL_6295",
"LABEL_6296",
"LABEL_6297",
"LABEL_6298",
"LABEL_6299",
"LABEL_63",
"LABEL_630",
"LABEL_6300",
"LABEL_6301",
"LABEL_6302",
"LABEL_6303",
"LABEL_6304",
"LABEL_6305",
"LABEL_6306",
"LABEL_6307",
"LABEL_6308",
"LABEL_6309",
"LABEL_631",
"LABEL_6310",
"LABEL_6311",
"LABEL_6312",
"LABEL_6313",
"LABEL_6314",
"LABEL_6315",
"LABEL_6316",
"LABEL_6317",
"LABEL_6318",
"LABEL_6319",
"LABEL_632",
"LABEL_6320",
"LABEL_6321",
"LABEL_6322",
"LABEL_6323",
"LABEL_6324",
"LABEL_6325",
"LABEL_6326",
"LABEL_6327",
"LABEL_6328",
"LABEL_6329",
"LABEL_633",
"LABEL_6330",
"LABEL_6331",
"LABEL_6332",
"LABEL_6333",
"LABEL_6334",
"LABEL_6335",
"LABEL_6336",
"LABEL_6337",
"LABEL_6338",
"LABEL_6339",
"LABEL_634",
"LABEL_6340",
"LABEL_6341",
"LABEL_6342",
"LABEL_6343",
"LABEL_6344",
"LABEL_6345",
"LABEL_6346",
"LABEL_6347",
"LABEL_6348",
"LABEL_6349",
"LABEL_635",
"LABEL_6350",
"LABEL_6351",
"LABEL_6352",
"LABEL_6353",
"LABEL_6354",
"LABEL_6355",
"LABEL_6356",
"LABEL_6357",
"LABEL_6358",
"LABEL_6359",
"LABEL_636",
"LABEL_6360",
"LABEL_6361",
"LABEL_6362",
"LABEL_6363",
"LABEL_6364",
"LABEL_6365",
"LABEL_6366",
"LABEL_6367",
"LABEL_6368",
"LABEL_6369",
"LABEL_637",
"LABEL_6370",
"LABEL_6371",
"LABEL_6372",
"LABEL_6373",
"LABEL_6374",
"LABEL_6375",
"LABEL_6376",
"LABEL_6377",
"LABEL_6378",
"LABEL_6379",
"LABEL_638",
"LABEL_6380",
"LABEL_6381",
"LABEL_6382",
"LABEL_6383",
"LABEL_6384",
"LABEL_6385",
"LABEL_6386",
"LABEL_6387",
"LABEL_6388",
"LABEL_6389",
"LABEL_639",
"LABEL_6390",
"LABEL_6391",
"LABEL_6392",
"LABEL_6393",
"LABEL_6394",
"LABEL_6395",
"LABEL_6396",
"LABEL_6397",
"LABEL_6398",
"LABEL_6399",
"LABEL_64",
"LABEL_640",
"LABEL_6400",
"LABEL_6401",
"LABEL_6402",
"LABEL_6403",
"LABEL_6404",
"LABEL_6405",
"LABEL_6406",
"LABEL_6407",
"LABEL_6408",
"LABEL_6409",
"LABEL_641",
"LABEL_6410",
"LABEL_6411",
"LABEL_6412",
"LABEL_6413",
"LABEL_6414",
"LABEL_6415",
"LABEL_6416",
"LABEL_6417",
"LABEL_6418",
"LABEL_6419",
"LABEL_642",
"LABEL_6420",
"LABEL_6421",
"LABEL_6422",
"LABEL_6423",
"LABEL_6424",
"LABEL_6425",
"LABEL_6426",
"LABEL_6427",
"LABEL_6428",
"LABEL_6429",
"LABEL_643",
"LABEL_6430",
"LABEL_6431",
"LABEL_6432",
"LABEL_6433",
"LABEL_6434",
"LABEL_6435",
"LABEL_6436",
"LABEL_6437",
"LABEL_6438",
"LABEL_6439",
"LABEL_644",
"LABEL_6440",
"LABEL_6441",
"LABEL_6442",
"LABEL_6443",
"LABEL_6444",
"LABEL_6445",
"LABEL_6446",
"LABEL_6447",
"LABEL_6448",
"LABEL_6449",
"LABEL_645",
"LABEL_6450",
"LABEL_6451",
"LABEL_6452",
"LABEL_6453",
"LABEL_6454",
"LABEL_6455",
"LABEL_6456",
"LABEL_6457",
"LABEL_6458",
"LABEL_6459",
"LABEL_646",
"LABEL_6460",
"LABEL_6461",
"LABEL_6462",
"LABEL_6463",
"LABEL_6464",
"LABEL_6465",
"LABEL_6466",
"LABEL_6467",
"LABEL_6468",
"LABEL_6469",
"LABEL_647",
"LABEL_6470",
"LABEL_6471",
"LABEL_6472",
"LABEL_6473",
"LABEL_6474",
"LABEL_6475",
"LABEL_6476",
"LABEL_6477",
"LABEL_6478",
"LABEL_6479",
"LABEL_648",
"LABEL_6480",
"LABEL_6481",
"LABEL_6482",
"LABEL_6483",
"LABEL_6484",
"LABEL_6485",
"LABEL_6486",
"LABEL_6487",
"LABEL_6488",
"LABEL_6489",
"LABEL_649",
"LABEL_6490",
"LABEL_6491",
"LABEL_6492",
"LABEL_6493",
"LABEL_6494",
"LABEL_6495",
"LABEL_6496",
"LABEL_6497",
"LABEL_6498",
"LABEL_6499",
"LABEL_65",
"LABEL_650",
"LABEL_6500",
"LABEL_6501",
"LABEL_6502",
"LABEL_6503",
"LABEL_6504",
"LABEL_6505",
"LABEL_6506",
"LABEL_6507",
"LABEL_6508",
"LABEL_6509",
"LABEL_651",
"LABEL_6510",
"LABEL_6511",
"LABEL_6512",
"LABEL_6513",
"LABEL_6514",
"LABEL_6515",
"LABEL_6516",
"LABEL_6517",
"LABEL_6518",
"LABEL_6519",
"LABEL_652",
"LABEL_6520",
"LABEL_6521",
"LABEL_6522",
"LABEL_6523",
"LABEL_6524",
"LABEL_6525",
"LABEL_6526",
"LABEL_6527",
"LABEL_6528",
"LABEL_6529",
"LABEL_653",
"LABEL_6530",
"LABEL_6531",
"LABEL_6532",
"LABEL_6533",
"LABEL_6534",
"LABEL_6535",
"LABEL_6536",
"LABEL_6537",
"LABEL_6538",
"LABEL_6539",
"LABEL_654",
"LABEL_6540",
"LABEL_6541",
"LABEL_6542",
"LABEL_6543",
"LABEL_6544",
"LABEL_6545",
"LABEL_6546",
"LABEL_6547",
"LABEL_6548",
"LABEL_6549",
"LABEL_655",
"LABEL_6550",
"LABEL_6551",
"LABEL_6552",
"LABEL_6553",
"LABEL_6554",
"LABEL_6555",
"LABEL_6556",
"LABEL_6557",
"LABEL_6558",
"LABEL_6559",
"LABEL_656",
"LABEL_6560",
"LABEL_6561",
"LABEL_6562",
"LABEL_6563",
"LABEL_6564",
"LABEL_6565",
"LABEL_6566",
"LABEL_6567",
"LABEL_6568",
"LABEL_6569",
"LABEL_657",
"LABEL_6570",
"LABEL_6571",
"LABEL_6572",
"LABEL_6573",
"LABEL_6574",
"LABEL_6575",
"LABEL_6576",
"LABEL_6577",
"LABEL_6578",
"LABEL_6579",
"LABEL_658",
"LABEL_6580",
"LABEL_6581",
"LABEL_6582",
"LABEL_6583",
"LABEL_6584",
"LABEL_6585",
"LABEL_6586",
"LABEL_6587",
"LABEL_6588",
"LABEL_6589",
"LABEL_659",
"LABEL_6590",
"LABEL_6591",
"LABEL_6592",
"LABEL_6593",
"LABEL_6594",
"LABEL_6595",
"LABEL_6596",
"LABEL_6597",
"LABEL_6598",
"LABEL_6599",
"LABEL_66",
"LABEL_660",
"LABEL_6600",
"LABEL_6601",
"LABEL_6602",
"LABEL_6603",
"LABEL_6604",
"LABEL_6605",
"LABEL_6606",
"LABEL_6607",
"LABEL_6608",
"LABEL_6609",
"LABEL_661",
"LABEL_6610",
"LABEL_6611",
"LABEL_6612",
"LABEL_6613",
"LABEL_6614",
"LABEL_6615",
"LABEL_6616",
"LABEL_6617",
"LABEL_6618",
"LABEL_6619",
"LABEL_662",
"LABEL_6620",
"LABEL_6621",
"LABEL_6622",
"LABEL_6623",
"LABEL_6624",
"LABEL_6625",
"LABEL_6626",
"LABEL_6627",
"LABEL_6628",
"LABEL_6629",
"LABEL_663",
"LABEL_6630",
"LABEL_6631",
"LABEL_6632",
"LABEL_6633",
"LABEL_6634",
"LABEL_6635",
"LABEL_6636",
"LABEL_6637",
"LABEL_6638",
"LABEL_6639",
"LABEL_664",
"LABEL_6640",
"LABEL_6641",
"LABEL_6642",
"LABEL_6643",
"LABEL_6644",
"LABEL_6645",
"LABEL_6646",
"LABEL_6647",
"LABEL_6648",
"LABEL_6649",
"LABEL_665",
"LABEL_6650",
"LABEL_6651",
"LABEL_6652",
"LABEL_6653",
"LABEL_6654",
"LABEL_6655",
"LABEL_6656",
"LABEL_6657",
"LABEL_6658",
"LABEL_6659",
"LABEL_666",
"LABEL_6660",
"LABEL_6661",
"LABEL_6662",
"LABEL_6663",
"LABEL_6664",
"LABEL_6665",
"LABEL_6666",
"LABEL_6667",
"LABEL_6668",
"LABEL_6669",
"LABEL_667",
"LABEL_6670",
"LABEL_6671",
"LABEL_6672",
"LABEL_6673",
"LABEL_6674",
"LABEL_6675",
"LABEL_6676",
"LABEL_6677",
"LABEL_6678",
"LABEL_6679",
"LABEL_668",
"LABEL_6680",
"LABEL_6681",
"LABEL_6682",
"LABEL_6683",
"LABEL_6684",
"LABEL_6685",
"LABEL_6686",
"LABEL_6687",
"LABEL_6688",
"LABEL_6689",
"LABEL_669",
"LABEL_6690",
"LABEL_6691",
"LABEL_6692",
"LABEL_6693",
"LABEL_6694",
"LABEL_6695",
"LABEL_6696",
"LABEL_6697",
"LABEL_6698",
"LABEL_6699",
"LABEL_67",
"LABEL_670",
"LABEL_6700",
"LABEL_6701",
"LABEL_6702",
"LABEL_6703",
"LABEL_6704",
"LABEL_6705",
"LABEL_6706",
"LABEL_6707",
"LABEL_6708",
"LABEL_6709",
"LABEL_671",
"LABEL_6710",
"LABEL_6711",
"LABEL_6712",
"LABEL_6713",
"LABEL_6714",
"LABEL_6715",
"LABEL_6716",
"LABEL_6717",
"LABEL_6718",
"LABEL_6719",
"LABEL_672",
"LABEL_6720",
"LABEL_6721",
"LABEL_6722",
"LABEL_6723",
"LABEL_6724",
"LABEL_6725",
"LABEL_6726",
"LABEL_6727",
"LABEL_6728",
"LABEL_6729",
"LABEL_673",
"LABEL_6730",
"LABEL_6731",
"LABEL_6732",
"LABEL_6733",
"LABEL_6734",
"LABEL_6735",
"LABEL_6736",
"LABEL_6737",
"LABEL_6738",
"LABEL_6739",
"LABEL_674",
"LABEL_6740",
"LABEL_6741",
"LABEL_6742",
"LABEL_6743",
"LABEL_6744",
"LABEL_6745",
"LABEL_6746",
"LABEL_6747",
"LABEL_6748",
"LABEL_6749",
"LABEL_675",
"LABEL_6750",
"LABEL_6751",
"LABEL_6752",
"LABEL_6753",
"LABEL_6754",
"LABEL_6755",
"LABEL_6756",
"LABEL_6757",
"LABEL_6758",
"LABEL_6759",
"LABEL_676",
"LABEL_6760",
"LABEL_6761",
"LABEL_6762",
"LABEL_6763",
"LABEL_6764",
"LABEL_6765",
"LABEL_6766",
"LABEL_6767",
"LABEL_6768",
"LABEL_6769",
"LABEL_677",
"LABEL_6770",
"LABEL_6771",
"LABEL_6772",
"LABEL_6773",
"LABEL_6774",
"LABEL_6775",
"LABEL_6776",
"LABEL_6777",
"LABEL_6778",
"LABEL_6779",
"LABEL_678",
"LABEL_6780",
"LABEL_6781",
"LABEL_6782",
"LABEL_6783",
"LABEL_6784",
"LABEL_6785",
"LABEL_6786",
"LABEL_6787",
"LABEL_6788",
"LABEL_6789",
"LABEL_679",
"LABEL_6790",
"LABEL_6791",
"LABEL_6792",
"LABEL_6793",
"LABEL_6794",
"LABEL_6795",
"LABEL_6796",
"LABEL_6797",
"LABEL_6798",
"LABEL_6799",
"LABEL_68",
"LABEL_680",
"LABEL_6800",
"LABEL_6801",
"LABEL_6802",
"LABEL_6803",
"LABEL_6804",
"LABEL_6805",
"LABEL_6806",
"LABEL_6807",
"LABEL_6808",
"LABEL_6809",
"LABEL_681",
"LABEL_6810",
"LABEL_6811",
"LABEL_6812",
"LABEL_6813",
"LABEL_6814",
"LABEL_6815",
"LABEL_6816",
"LABEL_6817",
"LABEL_6818",
"LABEL_6819",
"LABEL_682",
"LABEL_6820",
"LABEL_6821",
"LABEL_6822",
"LABEL_6823",
"LABEL_6824",
"LABEL_6825",
"LABEL_6826",
"LABEL_6827",
"LABEL_6828",
"LABEL_6829",
"LABEL_683",
"LABEL_6830",
"LABEL_6831",
"LABEL_6832",
"LABEL_6833",
"LABEL_6834",
"LABEL_6835",
"LABEL_6836",
"LABEL_6837",
"LABEL_6838",
"LABEL_6839",
"LABEL_684",
"LABEL_6840",
"LABEL_6841",
"LABEL_6842",
"LABEL_6843",
"LABEL_6844",
"LABEL_6845",
"LABEL_6846",
"LABEL_6847",
"LABEL_6848",
"LABEL_6849",
"LABEL_685",
"LABEL_6850",
"LABEL_6851",
"LABEL_6852",
"LABEL_6853",
"LABEL_6854",
"LABEL_6855",
"LABEL_6856",
"LABEL_6857",
"LABEL_6858",
"LABEL_6859",
"LABEL_686",
"LABEL_6860",
"LABEL_6861",
"LABEL_6862",
"LABEL_6863",
"LABEL_6864",
"LABEL_6865",
"LABEL_6866",
"LABEL_6867",
"LABEL_6868",
"LABEL_6869",
"LABEL_687",
"LABEL_6870",
"LABEL_6871",
"LABEL_6872",
"LABEL_6873",
"LABEL_6874",
"LABEL_6875",
"LABEL_6876",
"LABEL_6877",
"LABEL_6878",
"LABEL_6879",
"LABEL_688",
"LABEL_6880",
"LABEL_6881",
"LABEL_6882",
"LABEL_6883",
"LABEL_6884",
"LABEL_6885",
"LABEL_6886",
"LABEL_6887",
"LABEL_6888",
"LABEL_6889",
"LABEL_689",
"LABEL_6890",
"LABEL_6891",
"LABEL_6892",
"LABEL_6893",
"LABEL_6894",
"LABEL_6895",
"LABEL_6896",
"LABEL_6897",
"LABEL_6898",
"LABEL_6899",
"LABEL_69",
"LABEL_690",
"LABEL_6900",
"LABEL_6901",
"LABEL_6902",
"LABEL_6903",
"LABEL_6904",
"LABEL_6905",
"LABEL_6906",
"LABEL_6907",
"LABEL_6908",
"LABEL_6909",
"LABEL_691",
"LABEL_6910",
"LABEL_6911",
"LABEL_6912",
"LABEL_6913",
"LABEL_6914",
"LABEL_6915",
"LABEL_6916",
"LABEL_6917",
"LABEL_6918",
"LABEL_6919",
"LABEL_692",
"LABEL_6920",
"LABEL_6921",
"LABEL_6922",
"LABEL_6923",
"LABEL_6924",
"LABEL_6925",
"LABEL_6926",
"LABEL_6927",
"LABEL_6928",
"LABEL_6929",
"LABEL_693",
"LABEL_6930",
"LABEL_6931",
"LABEL_6932",
"LABEL_6933",
"LABEL_6934",
"LABEL_6935",
"LABEL_6936",
"LABEL_6937",
"LABEL_6938",
"LABEL_6939",
"LABEL_694",
"LABEL_6940",
"LABEL_6941",
"LABEL_6942",
"LABEL_6943",
"LABEL_6944",
"LABEL_6945",
"LABEL_6946",
"LABEL_6947",
"LABEL_6948",
"LABEL_6949",
"LABEL_695",
"LABEL_6950",
"LABEL_6951",
"LABEL_6952",
"LABEL_6953",
"LABEL_6954",
"LABEL_6955",
"LABEL_6956",
"LABEL_6957",
"LABEL_6958",
"LABEL_6959",
"LABEL_696",
"LABEL_6960",
"LABEL_6961",
"LABEL_6962",
"LABEL_6963",
"LABEL_6964",
"LABEL_6965",
"LABEL_6966",
"LABEL_6967",
"LABEL_6968",
"LABEL_6969",
"LABEL_697",
"LABEL_6970",
"LABEL_6971",
"LABEL_6972",
"LABEL_6973",
"LABEL_6974",
"LABEL_6975",
"LABEL_6976",
"LABEL_6977",
"LABEL_6978",
"LABEL_6979",
"LABEL_698",
"LABEL_6980",
"LABEL_6981",
"LABEL_6982",
"LABEL_6983",
"LABEL_6984",
"LABEL_6985",
"LABEL_6986",
"LABEL_6987",
"LABEL_6988",
"LABEL_6989",
"LABEL_699",
"LABEL_6990",
"LABEL_6991",
"LABEL_6992",
"LABEL_6993",
"LABEL_6994",
"LABEL_6995",
"LABEL_6996",
"LABEL_6997",
"LABEL_6998",
"LABEL_6999",
"LABEL_7",
"LABEL_70",
"LABEL_700",
"LABEL_7000",
"LABEL_7001",
"LABEL_7002",
"LABEL_7003",
"LABEL_7004",
"LABEL_7005",
"LABEL_7006",
"LABEL_7007",
"LABEL_7008",
"LABEL_7009",
"LABEL_701",
"LABEL_7010",
"LABEL_7011",
"LABEL_7012",
"LABEL_7013",
"LABEL_7014",
"LABEL_7015",
"LABEL_7016",
"LABEL_7017",
"LABEL_7018",
"LABEL_7019",
"LABEL_702",
"LABEL_7020",
"LABEL_7021",
"LABEL_7022",
"LABEL_7023",
"LABEL_7024",
"LABEL_7025",
"LABEL_7026",
"LABEL_7027",
"LABEL_7028",
"LABEL_7029",
"LABEL_703",
"LABEL_7030",
"LABEL_7031",
"LABEL_7032",
"LABEL_7033",
"LABEL_7034",
"LABEL_7035",
"LABEL_7036",
"LABEL_7037",
"LABEL_7038",
"LABEL_7039",
"LABEL_704",
"LABEL_7040",
"LABEL_7041",
"LABEL_7042",
"LABEL_7043",
"LABEL_7044",
"LABEL_7045",
"LABEL_7046",
"LABEL_7047",
"LABEL_7048",
"LABEL_7049",
"LABEL_705",
"LABEL_7050",
"LABEL_7051",
"LABEL_7052",
"LABEL_7053",
"LABEL_7054",
"LABEL_7055",
"LABEL_7056",
"LABEL_7057",
"LABEL_7058",
"LABEL_7059",
"LABEL_706",
"LABEL_7060",
"LABEL_7061",
"LABEL_7062",
"LABEL_7063",
"LABEL_7064",
"LABEL_7065",
"LABEL_7066",
"LABEL_7067",
"LABEL_7068",
"LABEL_7069",
"LABEL_707",
"LABEL_7070",
"LABEL_7071",
"LABEL_7072",
"LABEL_7073",
"LABEL_7074",
"LABEL_7075",
"LABEL_7076",
"LABEL_7077",
"LABEL_7078",
"LABEL_7079",
"LABEL_708",
"LABEL_7080",
"LABEL_7081",
"LABEL_7082",
"LABEL_7083",
"LABEL_7084",
"LABEL_7085",
"LABEL_7086",
"LABEL_7087",
"LABEL_7088",
"LABEL_7089",
"LABEL_709",
"LABEL_7090",
"LABEL_7091",
"LABEL_7092",
"LABEL_7093",
"LABEL_7094",
"LABEL_7095",
"LABEL_7096",
"LABEL_7097",
"LABEL_7098",
"LABEL_7099",
"LABEL_71",
"LABEL_710",
"LABEL_7100",
"LABEL_7101",
"LABEL_7102",
"LABEL_7103",
"LABEL_7104",
"LABEL_7105",
"LABEL_7106",
"LABEL_7107",
"LABEL_7108",
"LABEL_7109",
"LABEL_711",
"LABEL_7110",
"LABEL_7111",
"LABEL_7112",
"LABEL_7113",
"LABEL_7114",
"LABEL_7115",
"LABEL_7116",
"LABEL_7117",
"LABEL_7118",
"LABEL_7119",
"LABEL_712",
"LABEL_7120",
"LABEL_7121",
"LABEL_7122",
"LABEL_7123",
"LABEL_7124",
"LABEL_7125",
"LABEL_7126",
"LABEL_7127",
"LABEL_7128",
"LABEL_7129",
"LABEL_713",
"LABEL_7130",
"LABEL_7131",
"LABEL_7132",
"LABEL_7133",
"LABEL_7134",
"LABEL_7135",
"LABEL_7136",
"LABEL_7137",
"LABEL_7138",
"LABEL_7139",
"LABEL_714",
"LABEL_7140",
"LABEL_7141",
"LABEL_7142",
"LABEL_7143",
"LABEL_7144",
"LABEL_7145",
"LABEL_7146",
"LABEL_7147",
"LABEL_7148",
"LABEL_7149",
"LABEL_715",
"LABEL_7150",
"LABEL_7151",
"LABEL_7152",
"LABEL_7153",
"LABEL_7154",
"LABEL_7155",
"LABEL_7156",
"LABEL_7157",
"LABEL_7158",
"LABEL_7159",
"LABEL_716",
"LABEL_7160",
"LABEL_7161",
"LABEL_7162",
"LABEL_7163",
"LABEL_7164",
"LABEL_7165",
"LABEL_7166",
"LABEL_7167",
"LABEL_7168",
"LABEL_7169",
"LABEL_717",
"LABEL_7170",
"LABEL_7171",
"LABEL_7172",
"LABEL_7173",
"LABEL_7174",
"LABEL_7175",
"LABEL_7176",
"LABEL_7177",
"LABEL_7178",
"LABEL_7179",
"LABEL_718",
"LABEL_7180",
"LABEL_7181",
"LABEL_7182",
"LABEL_7183",
"LABEL_7184",
"LABEL_7185",
"LABEL_7186",
"LABEL_7187",
"LABEL_7188",
"LABEL_7189",
"LABEL_719",
"LABEL_7190",
"LABEL_7191",
"LABEL_7192",
"LABEL_7193",
"LABEL_7194",
"LABEL_7195",
"LABEL_7196",
"LABEL_7197",
"LABEL_7198",
"LABEL_7199",
"LABEL_72",
"LABEL_720",
"LABEL_7200",
"LABEL_7201",
"LABEL_7202",
"LABEL_7203",
"LABEL_7204",
"LABEL_7205",
"LABEL_7206",
"LABEL_7207",
"LABEL_7208",
"LABEL_7209",
"LABEL_721",
"LABEL_7210",
"LABEL_7211",
"LABEL_7212",
"LABEL_7213",
"LABEL_7214",
"LABEL_7215",
"LABEL_7216",
"LABEL_7217",
"LABEL_7218",
"LABEL_7219",
"LABEL_722",
"LABEL_7220",
"LABEL_7221",
"LABEL_7222",
"LABEL_7223",
"LABEL_7224",
"LABEL_7225",
"LABEL_7226",
"LABEL_7227",
"LABEL_7228",
"LABEL_7229",
"LABEL_723",
"LABEL_7230",
"LABEL_7231",
"LABEL_7232",
"LABEL_7233",
"LABEL_7234",
"LABEL_7235",
"LABEL_7236",
"LABEL_7237",
"LABEL_7238",
"LABEL_7239",
"LABEL_724",
"LABEL_7240",
"LABEL_7241",
"LABEL_7242",
"LABEL_7243",
"LABEL_7244",
"LABEL_7245",
"LABEL_7246",
"LABEL_7247",
"LABEL_7248",
"LABEL_7249",
"LABEL_725",
"LABEL_7250",
"LABEL_7251",
"LABEL_7252",
"LABEL_7253",
"LABEL_7254",
"LABEL_7255",
"LABEL_7256",
"LABEL_7257",
"LABEL_7258",
"LABEL_7259",
"LABEL_726",
"LABEL_7260",
"LABEL_7261",
"LABEL_7262",
"LABEL_7263",
"LABEL_7264",
"LABEL_7265",
"LABEL_7266",
"LABEL_7267",
"LABEL_7268",
"LABEL_7269",
"LABEL_727",
"LABEL_7270",
"LABEL_7271",
"LABEL_7272",
"LABEL_7273",
"LABEL_7274",
"LABEL_7275",
"LABEL_7276",
"LABEL_7277",
"LABEL_7278",
"LABEL_7279",
"LABEL_728",
"LABEL_7280",
"LABEL_7281",
"LABEL_7282",
"LABEL_7283",
"LABEL_7284",
"LABEL_7285",
"LABEL_7286",
"LABEL_7287",
"LABEL_7288",
"LABEL_7289",
"LABEL_729",
"LABEL_7290",
"LABEL_7291",
"LABEL_7292",
"LABEL_7293",
"LABEL_7294",
"LABEL_7295",
"LABEL_7296",
"LABEL_7297",
"LABEL_7298",
"LABEL_7299",
"LABEL_73",
"LABEL_730",
"LABEL_7300",
"LABEL_7301",
"LABEL_7302",
"LABEL_7303",
"LABEL_7304",
"LABEL_7305",
"LABEL_7306",
"LABEL_7307",
"LABEL_7308",
"LABEL_7309",
"LABEL_731",
"LABEL_7310",
"LABEL_7311",
"LABEL_7312",
"LABEL_7313",
"LABEL_7314",
"LABEL_7315",
"LABEL_7316",
"LABEL_7317",
"LABEL_7318",
"LABEL_7319",
"LABEL_732",
"LABEL_7320",
"LABEL_7321",
"LABEL_7322",
"LABEL_7323",
"LABEL_7324",
"LABEL_7325",
"LABEL_7326",
"LABEL_7327",
"LABEL_7328",
"LABEL_7329",
"LABEL_733",
"LABEL_7330",
"LABEL_7331",
"LABEL_7332",
"LABEL_7333",
"LABEL_7334",
"LABEL_7335",
"LABEL_7336",
"LABEL_7337",
"LABEL_7338",
"LABEL_7339",
"LABEL_734",
"LABEL_7340",
"LABEL_7341",
"LABEL_7342",
"LABEL_7343",
"LABEL_7344",
"LABEL_7345",
"LABEL_7346",
"LABEL_7347",
"LABEL_7348",
"LABEL_7349",
"LABEL_735",
"LABEL_7350",
"LABEL_7351",
"LABEL_7352",
"LABEL_7353",
"LABEL_7354",
"LABEL_7355",
"LABEL_7356",
"LABEL_7357",
"LABEL_7358",
"LABEL_7359",
"LABEL_736",
"LABEL_7360",
"LABEL_7361",
"LABEL_7362",
"LABEL_7363",
"LABEL_7364",
"LABEL_7365",
"LABEL_7366",
"LABEL_7367",
"LABEL_7368",
"LABEL_7369",
"LABEL_737",
"LABEL_7370",
"LABEL_7371",
"LABEL_7372",
"LABEL_7373",
"LABEL_7374",
"LABEL_7375",
"LABEL_7376",
"LABEL_7377",
"LABEL_7378",
"LABEL_7379",
"LABEL_738",
"LABEL_7380",
"LABEL_7381",
"LABEL_7382",
"LABEL_7383",
"LABEL_7384",
"LABEL_7385",
"LABEL_7386",
"LABEL_7387",
"LABEL_7388",
"LABEL_7389",
"LABEL_739",
"LABEL_7390",
"LABEL_7391",
"LABEL_7392",
"LABEL_7393",
"LABEL_7394",
"LABEL_7395",
"LABEL_7396",
"LABEL_7397",
"LABEL_7398",
"LABEL_7399",
"LABEL_74",
"LABEL_740",
"LABEL_7400",
"LABEL_7401",
"LABEL_7402",
"LABEL_7403",
"LABEL_7404",
"LABEL_7405",
"LABEL_7406",
"LABEL_7407",
"LABEL_7408",
"LABEL_7409",
"LABEL_741",
"LABEL_7410",
"LABEL_7411",
"LABEL_7412",
"LABEL_7413",
"LABEL_7414",
"LABEL_7415",
"LABEL_7416",
"LABEL_7417",
"LABEL_7418",
"LABEL_7419",
"LABEL_742",
"LABEL_7420",
"LABEL_7421",
"LABEL_7422",
"LABEL_7423",
"LABEL_7424",
"LABEL_7425",
"LABEL_7426",
"LABEL_7427",
"LABEL_7428",
"LABEL_7429",
"LABEL_743",
"LABEL_7430",
"LABEL_7431",
"LABEL_7432",
"LABEL_7433",
"LABEL_7434",
"LABEL_7435",
"LABEL_7436",
"LABEL_7437",
"LABEL_7438",
"LABEL_7439",
"LABEL_744",
"LABEL_7440",
"LABEL_7441",
"LABEL_7442",
"LABEL_7443",
"LABEL_7444",
"LABEL_7445",
"LABEL_7446",
"LABEL_7447",
"LABEL_7448",
"LABEL_7449",
"LABEL_745",
"LABEL_7450",
"LABEL_7451",
"LABEL_7452",
"LABEL_7453",
"LABEL_7454",
"LABEL_7455",
"LABEL_7456",
"LABEL_7457",
"LABEL_7458",
"LABEL_7459",
"LABEL_746",
"LABEL_7460",
"LABEL_7461",
"LABEL_7462",
"LABEL_7463",
"LABEL_7464",
"LABEL_7465",
"LABEL_7466",
"LABEL_7467",
"LABEL_7468",
"LABEL_7469",
"LABEL_747",
"LABEL_7470",
"LABEL_7471",
"LABEL_7472",
"LABEL_7473",
"LABEL_7474",
"LABEL_7475",
"LABEL_7476",
"LABEL_7477",
"LABEL_7478",
"LABEL_7479",
"LABEL_748",
"LABEL_7480",
"LABEL_7481",
"LABEL_7482",
"LABEL_7483",
"LABEL_7484",
"LABEL_7485",
"LABEL_7486",
"LABEL_7487",
"LABEL_7488",
"LABEL_7489",
"LABEL_749",
"LABEL_7490",
"LABEL_7491",
"LABEL_7492",
"LABEL_7493",
"LABEL_7494",
"LABEL_7495",
"LABEL_7496",
"LABEL_7497",
"LABEL_7498",
"LABEL_7499",
"LABEL_75",
"LABEL_750",
"LABEL_7500",
"LABEL_7501",
"LABEL_7502",
"LABEL_7503",
"LABEL_7504",
"LABEL_7505",
"LABEL_7506",
"LABEL_7507",
"LABEL_7508",
"LABEL_7509",
"LABEL_751",
"LABEL_7510",
"LABEL_7511",
"LABEL_7512",
"LABEL_7513",
"LABEL_7514",
"LABEL_7515",
"LABEL_7516",
"LABEL_7517",
"LABEL_7518",
"LABEL_7519",
"LABEL_752",
"LABEL_7520",
"LABEL_7521",
"LABEL_7522",
"LABEL_7523",
"LABEL_7524",
"LABEL_7525",
"LABEL_7526",
"LABEL_7527",
"LABEL_7528",
"LABEL_7529",
"LABEL_753",
"LABEL_7530",
"LABEL_7531",
"LABEL_7532",
"LABEL_7533",
"LABEL_7534",
"LABEL_7535",
"LABEL_7536",
"LABEL_7537",
"LABEL_7538",
"LABEL_7539",
"LABEL_754",
"LABEL_7540",
"LABEL_7541",
"LABEL_7542",
"LABEL_7543",
"LABEL_7544",
"LABEL_7545",
"LABEL_7546",
"LABEL_7547",
"LABEL_7548",
"LABEL_7549",
"LABEL_755",
"LABEL_7550",
"LABEL_7551",
"LABEL_7552",
"LABEL_7553",
"LABEL_7554",
"LABEL_7555",
"LABEL_7556",
"LABEL_7557",
"LABEL_7558",
"LABEL_7559",
"LABEL_756",
"LABEL_7560",
"LABEL_7561",
"LABEL_7562",
"LABEL_7563",
"LABEL_7564",
"LABEL_7565",
"LABEL_7566",
"LABEL_7567",
"LABEL_7568",
"LABEL_7569",
"LABEL_757",
"LABEL_7570",
"LABEL_7571",
"LABEL_7572",
"LABEL_7573",
"LABEL_7574",
"LABEL_7575",
"LABEL_7576",
"LABEL_7577",
"LABEL_7578",
"LABEL_7579",
"LABEL_758",
"LABEL_7580",
"LABEL_7581",
"LABEL_7582",
"LABEL_7583",
"LABEL_7584",
"LABEL_7585",
"LABEL_7586",
"LABEL_7587",
"LABEL_7588",
"LABEL_7589",
"LABEL_759",
"LABEL_7590",
"LABEL_7591",
"LABEL_7592",
"LABEL_7593",
"LABEL_7594",
"LABEL_7595",
"LABEL_7596",
"LABEL_7597",
"LABEL_7598",
"LABEL_7599",
"LABEL_76",
"LABEL_760",
"LABEL_7600",
"LABEL_7601",
"LABEL_7602",
"LABEL_7603",
"LABEL_7604",
"LABEL_7605",
"LABEL_7606",
"LABEL_7607",
"LABEL_7608",
"LABEL_7609",
"LABEL_761",
"LABEL_7610",
"LABEL_7611",
"LABEL_7612",
"LABEL_7613",
"LABEL_7614",
"LABEL_7615",
"LABEL_7616",
"LABEL_7617",
"LABEL_7618",
"LABEL_7619",
"LABEL_762",
"LABEL_7620",
"LABEL_7621",
"LABEL_7622",
"LABEL_7623",
"LABEL_7624",
"LABEL_7625",
"LABEL_7626",
"LABEL_7627",
"LABEL_7628",
"LABEL_7629",
"LABEL_763",
"LABEL_7630",
"LABEL_7631",
"LABEL_7632",
"LABEL_7633",
"LABEL_7634",
"LABEL_7635",
"LABEL_7636",
"LABEL_7637",
"LABEL_7638",
"LABEL_7639",
"LABEL_764",
"LABEL_7640",
"LABEL_7641",
"LABEL_7642",
"LABEL_7643",
"LABEL_7644",
"LABEL_7645",
"LABEL_7646",
"LABEL_7647",
"LABEL_7648",
"LABEL_7649",
"LABEL_765",
"LABEL_7650",
"LABEL_7651",
"LABEL_7652",
"LABEL_7653",
"LABEL_7654",
"LABEL_7655",
"LABEL_7656",
"LABEL_7657",
"LABEL_7658",
"LABEL_7659",
"LABEL_766",
"LABEL_7660",
"LABEL_7661",
"LABEL_7662",
"LABEL_7663",
"LABEL_7664",
"LABEL_7665",
"LABEL_7666",
"LABEL_7667",
"LABEL_7668",
"LABEL_7669",
"LABEL_767",
"LABEL_7670",
"LABEL_7671",
"LABEL_7672",
"LABEL_7673",
"LABEL_7674",
"LABEL_7675",
"LABEL_7676",
"LABEL_7677",
"LABEL_7678",
"LABEL_7679",
"LABEL_768",
"LABEL_7680",
"LABEL_7681",
"LABEL_7682",
"LABEL_7683",
"LABEL_7684",
"LABEL_7685",
"LABEL_7686",
"LABEL_7687",
"LABEL_7688",
"LABEL_7689",
"LABEL_769",
"LABEL_7690",
"LABEL_7691",
"LABEL_7692",
"LABEL_7693",
"LABEL_7694",
"LABEL_7695",
"LABEL_7696",
"LABEL_7697",
"LABEL_7698",
"LABEL_7699",
"LABEL_77",
"LABEL_770",
"LABEL_7700",
"LABEL_7701",
"LABEL_7702",
"LABEL_7703",
"LABEL_7704",
"LABEL_7705",
"LABEL_7706",
"LABEL_7707",
"LABEL_7708",
"LABEL_7709",
"LABEL_771",
"LABEL_7710",
"LABEL_7711",
"LABEL_7712",
"LABEL_7713",
"LABEL_7714",
"LABEL_7715",
"LABEL_7716",
"LABEL_7717",
"LABEL_7718",
"LABEL_7719",
"LABEL_772",
"LABEL_7720",
"LABEL_7721",
"LABEL_7722",
"LABEL_7723",
"LABEL_7724",
"LABEL_7725",
"LABEL_7726",
"LABEL_7727",
"LABEL_7728",
"LABEL_7729",
"LABEL_773",
"LABEL_7730",
"LABEL_7731",
"LABEL_7732",
"LABEL_7733",
"LABEL_7734",
"LABEL_7735",
"LABEL_7736",
"LABEL_7737",
"LABEL_7738",
"LABEL_7739",
"LABEL_774",
"LABEL_7740",
"LABEL_7741",
"LABEL_7742",
"LABEL_7743",
"LABEL_7744",
"LABEL_7745",
"LABEL_7746",
"LABEL_7747",
"LABEL_7748",
"LABEL_7749",
"LABEL_775",
"LABEL_7750",
"LABEL_7751",
"LABEL_7752",
"LABEL_7753",
"LABEL_7754",
"LABEL_7755",
"LABEL_7756",
"LABEL_7757",
"LABEL_7758",
"LABEL_7759",
"LABEL_776",
"LABEL_7760",
"LABEL_7761",
"LABEL_7762",
"LABEL_7763",
"LABEL_7764",
"LABEL_7765",
"LABEL_7766",
"LABEL_7767",
"LABEL_7768",
"LABEL_7769",
"LABEL_777",
"LABEL_7770",
"LABEL_7771",
"LABEL_7772",
"LABEL_7773",
"LABEL_7774",
"LABEL_7775",
"LABEL_7776",
"LABEL_7777",
"LABEL_7778",
"LABEL_7779",
"LABEL_778",
"LABEL_7780",
"LABEL_7781",
"LABEL_7782",
"LABEL_7783",
"LABEL_7784",
"LABEL_7785",
"LABEL_7786",
"LABEL_7787",
"LABEL_7788",
"LABEL_7789",
"LABEL_779",
"LABEL_7790",
"LABEL_7791",
"LABEL_7792",
"LABEL_7793",
"LABEL_7794",
"LABEL_7795",
"LABEL_7796",
"LABEL_7797",
"LABEL_7798",
"LABEL_7799",
"LABEL_78",
"LABEL_780",
"LABEL_7800",
"LABEL_7801",
"LABEL_7802",
"LABEL_7803",
"LABEL_7804",
"LABEL_7805",
"LABEL_7806",
"LABEL_7807",
"LABEL_7808",
"LABEL_7809",
"LABEL_781",
"LABEL_7810",
"LABEL_7811",
"LABEL_7812",
"LABEL_7813",
"LABEL_7814",
"LABEL_7815",
"LABEL_7816",
"LABEL_7817",
"LABEL_7818",
"LABEL_7819",
"LABEL_782",
"LABEL_7820",
"LABEL_7821",
"LABEL_7822",
"LABEL_7823",
"LABEL_7824",
"LABEL_7825",
"LABEL_7826",
"LABEL_7827",
"LABEL_7828",
"LABEL_7829",
"LABEL_783",
"LABEL_7830",
"LABEL_7831",
"LABEL_7832",
"LABEL_7833",
"LABEL_7834",
"LABEL_7835",
"LABEL_7836",
"LABEL_7837",
"LABEL_7838",
"LABEL_7839",
"LABEL_784",
"LABEL_7840",
"LABEL_7841",
"LABEL_7842",
"LABEL_7843",
"LABEL_7844",
"LABEL_7845",
"LABEL_7846",
"LABEL_7847",
"LABEL_7848",
"LABEL_7849",
"LABEL_785",
"LABEL_7850",
"LABEL_7851",
"LABEL_7852",
"LABEL_7853",
"LABEL_7854",
"LABEL_7855",
"LABEL_7856",
"LABEL_7857",
"LABEL_7858",
"LABEL_7859",
"LABEL_786",
"LABEL_7860",
"LABEL_7861",
"LABEL_7862",
"LABEL_7863",
"LABEL_7864",
"LABEL_7865",
"LABEL_7866",
"LABEL_7867",
"LABEL_7868",
"LABEL_7869",
"LABEL_787",
"LABEL_7870",
"LABEL_7871",
"LABEL_7872",
"LABEL_7873",
"LABEL_7874",
"LABEL_7875",
"LABEL_7876",
"LABEL_7877",
"LABEL_7878",
"LABEL_7879",
"LABEL_788",
"LABEL_7880",
"LABEL_7881",
"LABEL_7882",
"LABEL_7883",
"LABEL_7884",
"LABEL_7885",
"LABEL_7886",
"LABEL_7887",
"LABEL_7888",
"LABEL_7889",
"LABEL_789",
"LABEL_7890",
"LABEL_7891",
"LABEL_7892",
"LABEL_7893",
"LABEL_7894",
"LABEL_7895",
"LABEL_7896",
"LABEL_7897",
"LABEL_7898",
"LABEL_7899",
"LABEL_79",
"LABEL_790",
"LABEL_7900",
"LABEL_7901",
"LABEL_7902",
"LABEL_7903",
"LABEL_7904",
"LABEL_7905",
"LABEL_7906",
"LABEL_7907",
"LABEL_7908",
"LABEL_7909",
"LABEL_791",
"LABEL_7910",
"LABEL_7911",
"LABEL_7912",
"LABEL_7913",
"LABEL_7914",
"LABEL_7915",
"LABEL_7916",
"LABEL_7917",
"LABEL_7918",
"LABEL_7919",
"LABEL_792",
"LABEL_7920",
"LABEL_7921",
"LABEL_7922",
"LABEL_7923",
"LABEL_7924",
"LABEL_7925",
"LABEL_7926",
"LABEL_7927",
"LABEL_7928",
"LABEL_7929",
"LABEL_793",
"LABEL_7930",
"LABEL_7931",
"LABEL_7932",
"LABEL_7933",
"LABEL_7934",
"LABEL_7935",
"LABEL_7936",
"LABEL_7937",
"LABEL_7938",
"LABEL_7939",
"LABEL_794",
"LABEL_7940",
"LABEL_7941",
"LABEL_7942",
"LABEL_7943",
"LABEL_7944",
"LABEL_7945",
"LABEL_7946",
"LABEL_7947",
"LABEL_7948",
"LABEL_7949",
"LABEL_795",
"LABEL_7950",
"LABEL_7951",
"LABEL_7952",
"LABEL_7953",
"LABEL_7954",
"LABEL_7955",
"LABEL_7956",
"LABEL_7957",
"LABEL_7958",
"LABEL_7959",
"LABEL_796",
"LABEL_7960",
"LABEL_7961",
"LABEL_7962",
"LABEL_7963",
"LABEL_7964",
"LABEL_7965",
"LABEL_7966",
"LABEL_7967",
"LABEL_7968",
"LABEL_7969",
"LABEL_797",
"LABEL_7970",
"LABEL_7971",
"LABEL_7972",
"LABEL_7973",
"LABEL_7974",
"LABEL_7975",
"LABEL_7976",
"LABEL_7977",
"LABEL_7978",
"LABEL_7979",
"LABEL_798",
"LABEL_7980",
"LABEL_7981",
"LABEL_7982",
"LABEL_7983",
"LABEL_7984",
"LABEL_7985",
"LABEL_7986",
"LABEL_7987",
"LABEL_7988",
"LABEL_7989",
"LABEL_799",
"LABEL_7990",
"LABEL_7991",
"LABEL_7992",
"LABEL_7993",
"LABEL_7994",
"LABEL_7995",
"LABEL_7996",
"LABEL_7997",
"LABEL_7998",
"LABEL_7999",
"LABEL_8",
"LABEL_80",
"LABEL_800",
"LABEL_8000",
"LABEL_8001",
"LABEL_8002",
"LABEL_8003",
"LABEL_8004",
"LABEL_8005",
"LABEL_8006",
"LABEL_8007",
"LABEL_8008",
"LABEL_8009",
"LABEL_801",
"LABEL_8010",
"LABEL_8011",
"LABEL_8012",
"LABEL_8013",
"LABEL_8014",
"LABEL_8015",
"LABEL_8016",
"LABEL_8017",
"LABEL_8018",
"LABEL_8019",
"LABEL_802",
"LABEL_8020",
"LABEL_8021",
"LABEL_8022",
"LABEL_8023",
"LABEL_8024",
"LABEL_8025",
"LABEL_8026",
"LABEL_8027",
"LABEL_8028",
"LABEL_8029",
"LABEL_803",
"LABEL_8030",
"LABEL_8031",
"LABEL_8032",
"LABEL_8033",
"LABEL_8034",
"LABEL_8035",
"LABEL_8036",
"LABEL_8037",
"LABEL_8038",
"LABEL_8039",
"LABEL_804",
"LABEL_8040",
"LABEL_8041",
"LABEL_8042",
"LABEL_8043",
"LABEL_8044",
"LABEL_8045",
"LABEL_8046",
"LABEL_8047",
"LABEL_8048",
"LABEL_8049",
"LABEL_805",
"LABEL_8050",
"LABEL_8051",
"LABEL_8052",
"LABEL_8053",
"LABEL_8054",
"LABEL_8055",
"LABEL_8056",
"LABEL_8057",
"LABEL_8058",
"LABEL_8059",
"LABEL_806",
"LABEL_8060",
"LABEL_8061",
"LABEL_8062",
"LABEL_8063",
"LABEL_8064",
"LABEL_8065",
"LABEL_8066",
"LABEL_8067",
"LABEL_8068",
"LABEL_8069",
"LABEL_807",
"LABEL_8070",
"LABEL_8071",
"LABEL_8072",
"LABEL_8073",
"LABEL_8074",
"LABEL_8075",
"LABEL_8076",
"LABEL_8077",
"LABEL_8078",
"LABEL_8079",
"LABEL_808",
"LABEL_8080",
"LABEL_8081",
"LABEL_8082",
"LABEL_8083",
"LABEL_8084",
"LABEL_8085",
"LABEL_8086",
"LABEL_8087",
"LABEL_8088",
"LABEL_8089",
"LABEL_809",
"LABEL_8090",
"LABEL_8091",
"LABEL_8092",
"LABEL_8093",
"LABEL_8094",
"LABEL_8095",
"LABEL_8096",
"LABEL_8097",
"LABEL_8098",
"LABEL_8099",
"LABEL_81",
"LABEL_810",
"LABEL_8100",
"LABEL_8101",
"LABEL_8102",
"LABEL_8103",
"LABEL_8104",
"LABEL_8105",
"LABEL_8106",
"LABEL_8107",
"LABEL_8108",
"LABEL_8109",
"LABEL_811",
"LABEL_8110",
"LABEL_8111",
"LABEL_8112",
"LABEL_8113",
"LABEL_8114",
"LABEL_8115",
"LABEL_8116",
"LABEL_8117",
"LABEL_8118",
"LABEL_8119",
"LABEL_812",
"LABEL_8120",
"LABEL_8121",
"LABEL_8122",
"LABEL_8123",
"LABEL_8124",
"LABEL_8125",
"LABEL_8126",
"LABEL_8127",
"LABEL_8128",
"LABEL_8129",
"LABEL_813",
"LABEL_8130",
"LABEL_8131",
"LABEL_8132",
"LABEL_8133",
"LABEL_8134",
"LABEL_8135",
"LABEL_8136",
"LABEL_8137",
"LABEL_8138",
"LABEL_8139",
"LABEL_814",
"LABEL_8140",
"LABEL_8141",
"LABEL_8142",
"LABEL_8143",
"LABEL_8144",
"LABEL_8145",
"LABEL_8146",
"LABEL_8147",
"LABEL_8148",
"LABEL_8149",
"LABEL_815",
"LABEL_8150",
"LABEL_8151",
"LABEL_8152",
"LABEL_8153",
"LABEL_8154",
"LABEL_8155",
"LABEL_8156",
"LABEL_8157",
"LABEL_8158",
"LABEL_8159",
"LABEL_816",
"LABEL_8160",
"LABEL_8161",
"LABEL_8162",
"LABEL_8163",
"LABEL_8164",
"LABEL_8165",
"LABEL_8166",
"LABEL_8167",
"LABEL_8168",
"LABEL_8169",
"LABEL_817",
"LABEL_8170",
"LABEL_8171",
"LABEL_8172",
"LABEL_8173",
"LABEL_8174",
"LABEL_8175",
"LABEL_8176",
"LABEL_8177",
"LABEL_8178",
"LABEL_8179",
"LABEL_818",
"LABEL_8180",
"LABEL_8181",
"LABEL_8182",
"LABEL_8183",
"LABEL_8184",
"LABEL_8185",
"LABEL_8186",
"LABEL_8187",
"LABEL_8188",
"LABEL_8189",
"LABEL_819",
"LABEL_8190",
"LABEL_8191",
"LABEL_8192",
"LABEL_8193",
"LABEL_8194",
"LABEL_8195",
"LABEL_8196",
"LABEL_8197",
"LABEL_8198",
"LABEL_8199",
"LABEL_82",
"LABEL_820",
"LABEL_8200",
"LABEL_8201",
"LABEL_8202",
"LABEL_8203",
"LABEL_8204",
"LABEL_8205",
"LABEL_8206",
"LABEL_8207",
"LABEL_8208",
"LABEL_8209",
"LABEL_821",
"LABEL_8210",
"LABEL_8211",
"LABEL_8212",
"LABEL_8213",
"LABEL_8214",
"LABEL_8215",
"LABEL_8216",
"LABEL_8217",
"LABEL_8218",
"LABEL_8219",
"LABEL_822",
"LABEL_8220",
"LABEL_8221",
"LABEL_8222",
"LABEL_8223",
"LABEL_8224",
"LABEL_8225",
"LABEL_8226",
"LABEL_8227",
"LABEL_8228",
"LABEL_8229",
"LABEL_823",
"LABEL_8230",
"LABEL_8231",
"LABEL_8232",
"LABEL_8233",
"LABEL_8234",
"LABEL_8235",
"LABEL_8236",
"LABEL_8237",
"LABEL_8238",
"LABEL_8239",
"LABEL_824",
"LABEL_8240",
"LABEL_8241",
"LABEL_8242",
"LABEL_8243",
"LABEL_8244",
"LABEL_8245",
"LABEL_8246",
"LABEL_8247",
"LABEL_8248",
"LABEL_8249",
"LABEL_825",
"LABEL_8250",
"LABEL_8251",
"LABEL_8252",
"LABEL_8253",
"LABEL_8254",
"LABEL_8255",
"LABEL_8256",
"LABEL_8257",
"LABEL_8258",
"LABEL_8259",
"LABEL_826",
"LABEL_8260",
"LABEL_8261",
"LABEL_8262",
"LABEL_8263",
"LABEL_8264",
"LABEL_8265",
"LABEL_8266",
"LABEL_8267",
"LABEL_8268",
"LABEL_8269",
"LABEL_827",
"LABEL_8270",
"LABEL_8271",
"LABEL_8272",
"LABEL_8273",
"LABEL_8274",
"LABEL_8275",
"LABEL_8276",
"LABEL_8277",
"LABEL_8278",
"LABEL_8279",
"LABEL_828",
"LABEL_8280",
"LABEL_8281",
"LABEL_8282",
"LABEL_8283",
"LABEL_8284",
"LABEL_8285",
"LABEL_8286",
"LABEL_8287",
"LABEL_8288",
"LABEL_8289",
"LABEL_829",
"LABEL_8290",
"LABEL_8291",
"LABEL_8292",
"LABEL_8293",
"LABEL_8294",
"LABEL_8295",
"LABEL_8296",
"LABEL_8297",
"LABEL_8298",
"LABEL_8299",
"LABEL_83",
"LABEL_830",
"LABEL_8300",
"LABEL_8301",
"LABEL_8302",
"LABEL_8303",
"LABEL_8304",
"LABEL_8305",
"LABEL_8306",
"LABEL_8307",
"LABEL_8308",
"LABEL_8309",
"LABEL_831",
"LABEL_8310",
"LABEL_8311",
"LABEL_8312",
"LABEL_8313",
"LABEL_8314",
"LABEL_8315",
"LABEL_8316",
"LABEL_8317",
"LABEL_8318",
"LABEL_8319",
"LABEL_832",
"LABEL_8320",
"LABEL_8321",
"LABEL_8322",
"LABEL_8323",
"LABEL_8324",
"LABEL_8325",
"LABEL_8326",
"LABEL_8327",
"LABEL_8328",
"LABEL_8329",
"LABEL_833",
"LABEL_8330",
"LABEL_8331",
"LABEL_8332",
"LABEL_8333",
"LABEL_8334",
"LABEL_8335",
"LABEL_8336",
"LABEL_8337",
"LABEL_8338",
"LABEL_8339",
"LABEL_834",
"LABEL_8340",
"LABEL_8341",
"LABEL_8342",
"LABEL_8343",
"LABEL_8344",
"LABEL_8345",
"LABEL_8346",
"LABEL_8347",
"LABEL_8348",
"LABEL_8349",
"LABEL_835",
"LABEL_8350",
"LABEL_8351",
"LABEL_8352",
"LABEL_8353",
"LABEL_8354",
"LABEL_8355",
"LABEL_8356",
"LABEL_8357",
"LABEL_8358",
"LABEL_8359",
"LABEL_836",
"LABEL_8360",
"LABEL_8361",
"LABEL_8362",
"LABEL_8363",
"LABEL_8364",
"LABEL_8365",
"LABEL_8366",
"LABEL_8367",
"LABEL_8368",
"LABEL_8369",
"LABEL_837",
"LABEL_8370",
"LABEL_8371",
"LABEL_8372",
"LABEL_8373",
"LABEL_8374",
"LABEL_8375",
"LABEL_8376",
"LABEL_8377",
"LABEL_8378",
"LABEL_8379",
"LABEL_838",
"LABEL_8380",
"LABEL_8381",
"LABEL_8382",
"LABEL_8383",
"LABEL_8384",
"LABEL_8385",
"LABEL_8386",
"LABEL_8387",
"LABEL_8388",
"LABEL_8389",
"LABEL_839",
"LABEL_8390",
"LABEL_8391",
"LABEL_8392",
"LABEL_8393",
"LABEL_8394",
"LABEL_8395",
"LABEL_8396",
"LABEL_8397",
"LABEL_8398",
"LABEL_8399",
"LABEL_84",
"LABEL_840",
"LABEL_8400",
"LABEL_8401",
"LABEL_8402",
"LABEL_8403",
"LABEL_8404",
"LABEL_8405",
"LABEL_8406",
"LABEL_8407",
"LABEL_8408",
"LABEL_8409",
"LABEL_841",
"LABEL_8410",
"LABEL_8411",
"LABEL_8412",
"LABEL_8413",
"LABEL_8414",
"LABEL_8415",
"LABEL_8416",
"LABEL_8417",
"LABEL_8418",
"LABEL_8419",
"LABEL_842",
"LABEL_8420",
"LABEL_8421",
"LABEL_8422",
"LABEL_8423",
"LABEL_8424",
"LABEL_8425",
"LABEL_8426",
"LABEL_8427",
"LABEL_8428",
"LABEL_8429",
"LABEL_843",
"LABEL_8430",
"LABEL_8431",
"LABEL_8432",
"LABEL_8433",
"LABEL_8434",
"LABEL_8435",
"LABEL_8436",
"LABEL_8437",
"LABEL_8438",
"LABEL_8439",
"LABEL_844",
"LABEL_8440",
"LABEL_8441",
"LABEL_8442",
"LABEL_8443",
"LABEL_8444",
"LABEL_8445",
"LABEL_8446",
"LABEL_8447",
"LABEL_8448",
"LABEL_8449",
"LABEL_845",
"LABEL_8450",
"LABEL_8451",
"LABEL_8452",
"LABEL_8453",
"LABEL_8454",
"LABEL_8455",
"LABEL_8456",
"LABEL_8457",
"LABEL_8458",
"LABEL_8459",
"LABEL_846",
"LABEL_8460",
"LABEL_8461",
"LABEL_8462",
"LABEL_8463",
"LABEL_8464",
"LABEL_8465",
"LABEL_8466",
"LABEL_8467",
"LABEL_8468",
"LABEL_8469",
"LABEL_847",
"LABEL_8470",
"LABEL_8471",
"LABEL_8472",
"LABEL_8473",
"LABEL_8474",
"LABEL_8475",
"LABEL_8476",
"LABEL_8477",
"LABEL_8478",
"LABEL_8479",
"LABEL_848",
"LABEL_8480",
"LABEL_8481",
"LABEL_8482",
"LABEL_8483",
"LABEL_8484",
"LABEL_8485",
"LABEL_8486",
"LABEL_8487",
"LABEL_8488",
"LABEL_8489",
"LABEL_849",
"LABEL_8490",
"LABEL_8491",
"LABEL_8492",
"LABEL_8493",
"LABEL_8494",
"LABEL_8495",
"LABEL_8496",
"LABEL_8497",
"LABEL_8498",
"LABEL_8499",
"LABEL_85",
"LABEL_850",
"LABEL_8500",
"LABEL_8501",
"LABEL_8502",
"LABEL_8503",
"LABEL_8504",
"LABEL_8505",
"LABEL_8506",
"LABEL_8507",
"LABEL_8508",
"LABEL_8509",
"LABEL_851",
"LABEL_8510",
"LABEL_8511",
"LABEL_8512",
"LABEL_8513",
"LABEL_8514",
"LABEL_8515",
"LABEL_8516",
"LABEL_8517",
"LABEL_8518",
"LABEL_8519",
"LABEL_852",
"LABEL_8520",
"LABEL_8521",
"LABEL_8522",
"LABEL_8523",
"LABEL_8524",
"LABEL_8525",
"LABEL_8526",
"LABEL_8527",
"LABEL_8528",
"LABEL_8529",
"LABEL_853",
"LABEL_8530",
"LABEL_8531",
"LABEL_8532",
"LABEL_8533",
"LABEL_8534",
"LABEL_8535",
"LABEL_8536",
"LABEL_8537",
"LABEL_8538",
"LABEL_8539",
"LABEL_854",
"LABEL_8540",
"LABEL_8541",
"LABEL_8542",
"LABEL_8543",
"LABEL_8544",
"LABEL_8545",
"LABEL_8546",
"LABEL_8547",
"LABEL_8548",
"LABEL_8549",
"LABEL_855",
"LABEL_8550",
"LABEL_8551",
"LABEL_8552",
"LABEL_8553",
"LABEL_8554",
"LABEL_8555",
"LABEL_8556",
"LABEL_8557",
"LABEL_8558",
"LABEL_8559",
"LABEL_856",
"LABEL_8560",
"LABEL_8561",
"LABEL_8562",
"LABEL_8563",
"LABEL_8564",
"LABEL_8565",
"LABEL_8566",
"LABEL_8567",
"LABEL_8568",
"LABEL_8569",
"LABEL_857",
"LABEL_8570",
"LABEL_8571",
"LABEL_8572",
"LABEL_8573",
"LABEL_8574",
"LABEL_8575",
"LABEL_8576",
"LABEL_8577",
"LABEL_8578",
"LABEL_8579",
"LABEL_858",
"LABEL_8580",
"LABEL_8581",
"LABEL_8582",
"LABEL_8583",
"LABEL_8584",
"LABEL_8585",
"LABEL_8586",
"LABEL_8587",
"LABEL_8588",
"LABEL_8589",
"LABEL_859",
"LABEL_8590",
"LABEL_8591",
"LABEL_8592",
"LABEL_8593",
"LABEL_8594",
"LABEL_8595",
"LABEL_8596",
"LABEL_8597",
"LABEL_8598",
"LABEL_8599",
"LABEL_86",
"LABEL_860",
"LABEL_8600",
"LABEL_8601",
"LABEL_8602",
"LABEL_8603",
"LABEL_8604",
"LABEL_8605",
"LABEL_8606",
"LABEL_8607",
"LABEL_8608",
"LABEL_8609",
"LABEL_861",
"LABEL_8610",
"LABEL_8611",
"LABEL_8612",
"LABEL_8613",
"LABEL_8614",
"LABEL_8615",
"LABEL_8616",
"LABEL_8617",
"LABEL_8618",
"LABEL_8619",
"LABEL_862",
"LABEL_8620",
"LABEL_8621",
"LABEL_8622",
"LABEL_8623",
"LABEL_8624",
"LABEL_8625",
"LABEL_8626",
"LABEL_8627",
"LABEL_8628",
"LABEL_8629",
"LABEL_863",
"LABEL_8630",
"LABEL_8631",
"LABEL_8632",
"LABEL_8633",
"LABEL_8634",
"LABEL_8635",
"LABEL_8636",
"LABEL_8637",
"LABEL_8638",
"LABEL_8639",
"LABEL_864",
"LABEL_8640",
"LABEL_8641",
"LABEL_8642",
"LABEL_8643",
"LABEL_8644",
"LABEL_8645",
"LABEL_8646",
"LABEL_8647",
"LABEL_8648",
"LABEL_8649",
"LABEL_865",
"LABEL_8650",
"LABEL_8651",
"LABEL_8652",
"LABEL_8653",
"LABEL_8654",
"LABEL_8655",
"LABEL_8656",
"LABEL_8657",
"LABEL_8658",
"LABEL_8659",
"LABEL_866",
"LABEL_8660",
"LABEL_8661",
"LABEL_8662",
"LABEL_8663",
"LABEL_8664",
"LABEL_8665",
"LABEL_8666",
"LABEL_8667",
"LABEL_8668",
"LABEL_8669",
"LABEL_867",
"LABEL_8670",
"LABEL_8671",
"LABEL_8672",
"LABEL_8673",
"LABEL_8674",
"LABEL_8675",
"LABEL_8676",
"LABEL_8677",
"LABEL_8678",
"LABEL_8679",
"LABEL_868",
"LABEL_8680",
"LABEL_8681",
"LABEL_8682",
"LABEL_8683",
"LABEL_8684",
"LABEL_8685",
"LABEL_8686",
"LABEL_8687",
"LABEL_8688",
"LABEL_8689",
"LABEL_869",
"LABEL_8690",
"LABEL_8691",
"LABEL_8692",
"LABEL_8693",
"LABEL_8694",
"LABEL_8695",
"LABEL_8696",
"LABEL_8697",
"LABEL_8698",
"LABEL_8699",
"LABEL_87",
"LABEL_870",
"LABEL_8700",
"LABEL_8701",
"LABEL_8702",
"LABEL_8703",
"LABEL_8704",
"LABEL_8705",
"LABEL_8706",
"LABEL_8707",
"LABEL_8708",
"LABEL_8709",
"LABEL_871",
"LABEL_8710",
"LABEL_8711",
"LABEL_8712",
"LABEL_8713",
"LABEL_8714",
"LABEL_8715",
"LABEL_8716",
"LABEL_8717",
"LABEL_8718",
"LABEL_8719",
"LABEL_872",
"LABEL_8720",
"LABEL_8721",
"LABEL_8722",
"LABEL_8723",
"LABEL_8724",
"LABEL_8725",
"LABEL_8726",
"LABEL_8727",
"LABEL_8728",
"LABEL_8729",
"LABEL_873",
"LABEL_8730",
"LABEL_8731",
"LABEL_8732",
"LABEL_8733",
"LABEL_8734",
"LABEL_8735",
"LABEL_8736",
"LABEL_8737",
"LABEL_8738",
"LABEL_8739",
"LABEL_874",
"LABEL_8740",
"LABEL_8741",
"LABEL_8742",
"LABEL_8743",
"LABEL_8744",
"LABEL_8745",
"LABEL_8746",
"LABEL_8747",
"LABEL_8748",
"LABEL_8749",
"LABEL_875",
"LABEL_8750",
"LABEL_8751",
"LABEL_8752",
"LABEL_8753",
"LABEL_8754",
"LABEL_8755",
"LABEL_8756",
"LABEL_8757",
"LABEL_8758",
"LABEL_8759",
"LABEL_876",
"LABEL_8760",
"LABEL_8761",
"LABEL_8762",
"LABEL_8763",
"LABEL_8764",
"LABEL_8765",
"LABEL_8766",
"LABEL_8767",
"LABEL_8768",
"LABEL_8769",
"LABEL_877",
"LABEL_8770",
"LABEL_8771",
"LABEL_8772",
"LABEL_8773",
"LABEL_8774",
"LABEL_8775",
"LABEL_8776",
"LABEL_8777",
"LABEL_8778",
"LABEL_8779",
"LABEL_878",
"LABEL_8780",
"LABEL_8781",
"LABEL_8782",
"LABEL_8783",
"LABEL_8784",
"LABEL_8785",
"LABEL_8786",
"LABEL_8787",
"LABEL_8788",
"LABEL_8789",
"LABEL_879",
"LABEL_8790",
"LABEL_8791",
"LABEL_8792",
"LABEL_8793",
"LABEL_8794",
"LABEL_8795",
"LABEL_8796",
"LABEL_8797",
"LABEL_8798",
"LABEL_8799",
"LABEL_88",
"LABEL_880",
"LABEL_8800",
"LABEL_8801",
"LABEL_8802",
"LABEL_8803",
"LABEL_8804",
"LABEL_8805",
"LABEL_8806",
"LABEL_8807",
"LABEL_8808",
"LABEL_8809",
"LABEL_881",
"LABEL_8810",
"LABEL_8811",
"LABEL_8812",
"LABEL_8813",
"LABEL_8814",
"LABEL_8815",
"LABEL_8816",
"LABEL_8817",
"LABEL_8818",
"LABEL_8819",
"LABEL_882",
"LABEL_8820",
"LABEL_8821",
"LABEL_8822",
"LABEL_8823",
"LABEL_8824",
"LABEL_8825",
"LABEL_8826",
"LABEL_8827",
"LABEL_8828",
"LABEL_8829",
"LABEL_883",
"LABEL_8830",
"LABEL_8831",
"LABEL_8832",
"LABEL_8833",
"LABEL_8834",
"LABEL_8835",
"LABEL_8836",
"LABEL_8837",
"LABEL_8838",
"LABEL_8839",
"LABEL_884",
"LABEL_8840",
"LABEL_8841",
"LABEL_8842",
"LABEL_8843",
"LABEL_8844",
"LABEL_8845",
"LABEL_8846",
"LABEL_8847",
"LABEL_8848",
"LABEL_8849",
"LABEL_885",
"LABEL_8850",
"LABEL_8851",
"LABEL_8852",
"LABEL_8853",
"LABEL_8854",
"LABEL_8855",
"LABEL_8856",
"LABEL_8857",
"LABEL_8858",
"LABEL_8859",
"LABEL_886",
"LABEL_8860",
"LABEL_8861",
"LABEL_8862",
"LABEL_8863",
"LABEL_8864",
"LABEL_8865",
"LABEL_8866",
"LABEL_8867",
"LABEL_8868",
"LABEL_8869",
"LABEL_887",
"LABEL_8870",
"LABEL_8871",
"LABEL_8872",
"LABEL_8873",
"LABEL_8874",
"LABEL_8875",
"LABEL_8876",
"LABEL_8877",
"LABEL_8878",
"LABEL_8879",
"LABEL_888",
"LABEL_8880",
"LABEL_8881",
"LABEL_8882",
"LABEL_8883",
"LABEL_8884",
"LABEL_8885",
"LABEL_8886",
"LABEL_8887",
"LABEL_8888",
"LABEL_8889",
"LABEL_889",
"LABEL_8890",
"LABEL_8891",
"LABEL_8892",
"LABEL_8893",
"LABEL_8894",
"LABEL_8895",
"LABEL_8896",
"LABEL_8897",
"LABEL_8898",
"LABEL_8899",
"LABEL_89",
"LABEL_890",
"LABEL_8900",
"LABEL_8901",
"LABEL_8902",
"LABEL_8903",
"LABEL_8904",
"LABEL_8905",
"LABEL_8906",
"LABEL_8907",
"LABEL_8908",
"LABEL_8909",
"LABEL_891",
"LABEL_8910",
"LABEL_8911",
"LABEL_8912",
"LABEL_8913",
"LABEL_8914",
"LABEL_8915",
"LABEL_8916",
"LABEL_8917",
"LABEL_8918",
"LABEL_8919",
"LABEL_892",
"LABEL_8920",
"LABEL_8921",
"LABEL_8922",
"LABEL_8923",
"LABEL_8924",
"LABEL_8925",
"LABEL_8926",
"LABEL_8927",
"LABEL_8928",
"LABEL_8929",
"LABEL_893",
"LABEL_8930",
"LABEL_8931",
"LABEL_8932",
"LABEL_8933",
"LABEL_8934",
"LABEL_8935",
"LABEL_8936",
"LABEL_8937",
"LABEL_8938",
"LABEL_8939",
"LABEL_894",
"LABEL_8940",
"LABEL_8941",
"LABEL_8942",
"LABEL_8943",
"LABEL_8944",
"LABEL_8945",
"LABEL_8946",
"LABEL_8947",
"LABEL_8948",
"LABEL_8949",
"LABEL_895",
"LABEL_8950",
"LABEL_8951",
"LABEL_8952",
"LABEL_8953",
"LABEL_8954",
"LABEL_8955",
"LABEL_8956",
"LABEL_8957",
"LABEL_8958",
"LABEL_8959",
"LABEL_896",
"LABEL_8960",
"LABEL_8961",
"LABEL_8962",
"LABEL_8963",
"LABEL_8964",
"LABEL_8965",
"LABEL_8966",
"LABEL_8967",
"LABEL_8968",
"LABEL_8969",
"LABEL_897",
"LABEL_8970",
"LABEL_8971",
"LABEL_8972",
"LABEL_8973",
"LABEL_8974",
"LABEL_8975",
"LABEL_8976",
"LABEL_8977",
"LABEL_8978",
"LABEL_8979",
"LABEL_898",
"LABEL_8980",
"LABEL_8981",
"LABEL_8982",
"LABEL_8983",
"LABEL_8984",
"LABEL_8985",
"LABEL_8986",
"LABEL_8987",
"LABEL_8988",
"LABEL_8989",
"LABEL_899",
"LABEL_8990",
"LABEL_8991",
"LABEL_8992",
"LABEL_8993",
"LABEL_8994",
"LABEL_8995",
"LABEL_8996",
"LABEL_8997",
"LABEL_8998",
"LABEL_8999",
"LABEL_9",
"LABEL_90",
"LABEL_900",
"LABEL_9000",
"LABEL_9001",
"LABEL_9002",
"LABEL_9003",
"LABEL_9004",
"LABEL_9005",
"LABEL_9006",
"LABEL_9007",
"LABEL_9008",
"LABEL_9009",
"LABEL_901",
"LABEL_9010",
"LABEL_9011",
"LABEL_9012",
"LABEL_9013",
"LABEL_9014",
"LABEL_9015",
"LABEL_9016",
"LABEL_9017",
"LABEL_9018",
"LABEL_9019",
"LABEL_902",
"LABEL_9020",
"LABEL_9021",
"LABEL_9022",
"LABEL_9023",
"LABEL_9024",
"LABEL_9025",
"LABEL_9026",
"LABEL_9027",
"LABEL_9028",
"LABEL_9029",
"LABEL_903",
"LABEL_9030",
"LABEL_9031",
"LABEL_9032",
"LABEL_9033",
"LABEL_9034",
"LABEL_9035",
"LABEL_9036",
"LABEL_9037",
"LABEL_9038",
"LABEL_9039",
"LABEL_904",
"LABEL_9040",
"LABEL_9041",
"LABEL_9042",
"LABEL_9043",
"LABEL_9044",
"LABEL_9045",
"LABEL_9046",
"LABEL_9047",
"LABEL_9048",
"LABEL_9049",
"LABEL_905",
"LABEL_9050",
"LABEL_9051",
"LABEL_9052",
"LABEL_9053",
"LABEL_9054",
"LABEL_9055",
"LABEL_9056",
"LABEL_9057",
"LABEL_9058",
"LABEL_9059",
"LABEL_906",
"LABEL_9060",
"LABEL_9061",
"LABEL_9062",
"LABEL_9063",
"LABEL_9064",
"LABEL_9065",
"LABEL_9066",
"LABEL_9067",
"LABEL_9068",
"LABEL_9069",
"LABEL_907",
"LABEL_9070",
"LABEL_9071",
"LABEL_9072",
"LABEL_9073",
"LABEL_9074",
"LABEL_9075",
"LABEL_9076",
"LABEL_9077",
"LABEL_9078",
"LABEL_9079",
"LABEL_908",
"LABEL_9080",
"LABEL_9081",
"LABEL_9082",
"LABEL_9083",
"LABEL_9084",
"LABEL_9085",
"LABEL_9086",
"LABEL_9087",
"LABEL_9088",
"LABEL_9089",
"LABEL_909",
"LABEL_9090",
"LABEL_9091",
"LABEL_9092",
"LABEL_9093",
"LABEL_9094",
"LABEL_9095",
"LABEL_9096",
"LABEL_9097",
"LABEL_9098",
"LABEL_9099",
"LABEL_91",
"LABEL_910",
"LABEL_9100",
"LABEL_9101",
"LABEL_9102",
"LABEL_9103",
"LABEL_9104",
"LABEL_9105",
"LABEL_9106",
"LABEL_9107",
"LABEL_9108",
"LABEL_9109",
"LABEL_911",
"LABEL_9110",
"LABEL_9111",
"LABEL_9112",
"LABEL_9113",
"LABEL_9114",
"LABEL_9115",
"LABEL_9116",
"LABEL_9117",
"LABEL_9118",
"LABEL_9119",
"LABEL_912",
"LABEL_9120",
"LABEL_9121",
"LABEL_9122",
"LABEL_9123",
"LABEL_9124",
"LABEL_9125",
"LABEL_9126",
"LABEL_9127",
"LABEL_9128",
"LABEL_9129",
"LABEL_913",
"LABEL_9130",
"LABEL_9131",
"LABEL_9132",
"LABEL_9133",
"LABEL_9134",
"LABEL_9135",
"LABEL_9136",
"LABEL_9137",
"LABEL_9138",
"LABEL_9139",
"LABEL_914",
"LABEL_9140",
"LABEL_9141",
"LABEL_9142",
"LABEL_9143",
"LABEL_9144",
"LABEL_9145",
"LABEL_9146",
"LABEL_9147",
"LABEL_9148",
"LABEL_9149",
"LABEL_915",
"LABEL_9150",
"LABEL_9151",
"LABEL_9152",
"LABEL_9153",
"LABEL_9154",
"LABEL_9155",
"LABEL_9156",
"LABEL_9157",
"LABEL_9158",
"LABEL_9159",
"LABEL_916",
"LABEL_9160",
"LABEL_9161",
"LABEL_9162",
"LABEL_9163",
"LABEL_9164",
"LABEL_9165",
"LABEL_9166",
"LABEL_9167",
"LABEL_9168",
"LABEL_9169",
"LABEL_917",
"LABEL_9170",
"LABEL_9171",
"LABEL_9172",
"LABEL_9173",
"LABEL_9174",
"LABEL_9175",
"LABEL_9176",
"LABEL_9177",
"LABEL_9178",
"LABEL_9179",
"LABEL_918",
"LABEL_9180",
"LABEL_9181",
"LABEL_9182",
"LABEL_9183",
"LABEL_9184",
"LABEL_9185",
"LABEL_9186",
"LABEL_9187",
"LABEL_9188",
"LABEL_9189",
"LABEL_919",
"LABEL_9190",
"LABEL_9191",
"LABEL_9192",
"LABEL_9193",
"LABEL_9194",
"LABEL_9195",
"LABEL_9196",
"LABEL_9197",
"LABEL_9198",
"LABEL_9199",
"LABEL_92",
"LABEL_920",
"LABEL_9200",
"LABEL_9201",
"LABEL_9202",
"LABEL_9203",
"LABEL_9204",
"LABEL_9205",
"LABEL_9206",
"LABEL_9207",
"LABEL_9208",
"LABEL_9209",
"LABEL_921",
"LABEL_9210",
"LABEL_9211",
"LABEL_9212",
"LABEL_9213",
"LABEL_9214",
"LABEL_9215",
"LABEL_9216",
"LABEL_9217",
"LABEL_9218",
"LABEL_9219",
"LABEL_922",
"LABEL_9220",
"LABEL_9221",
"LABEL_9222",
"LABEL_9223",
"LABEL_9224",
"LABEL_9225",
"LABEL_9226",
"LABEL_9227",
"LABEL_9228",
"LABEL_9229",
"LABEL_923",
"LABEL_9230",
"LABEL_9231",
"LABEL_9232",
"LABEL_9233",
"LABEL_9234",
"LABEL_9235",
"LABEL_9236",
"LABEL_9237",
"LABEL_9238",
"LABEL_9239",
"LABEL_924",
"LABEL_9240",
"LABEL_9241",
"LABEL_9242",
"LABEL_9243",
"LABEL_9244",
"LABEL_9245",
"LABEL_9246",
"LABEL_9247",
"LABEL_9248",
"LABEL_9249",
"LABEL_925",
"LABEL_9250",
"LABEL_9251",
"LABEL_9252",
"LABEL_9253",
"LABEL_9254",
"LABEL_9255",
"LABEL_9256",
"LABEL_9257",
"LABEL_9258",
"LABEL_9259",
"LABEL_926",
"LABEL_9260",
"LABEL_9261",
"LABEL_9262",
"LABEL_9263",
"LABEL_9264",
"LABEL_9265",
"LABEL_9266",
"LABEL_9267",
"LABEL_9268",
"LABEL_9269",
"LABEL_927",
"LABEL_9270",
"LABEL_9271",
"LABEL_9272",
"LABEL_9273",
"LABEL_9274",
"LABEL_9275",
"LABEL_9276",
"LABEL_9277",
"LABEL_9278",
"LABEL_9279",
"LABEL_928",
"LABEL_9280",
"LABEL_9281",
"LABEL_9282",
"LABEL_9283",
"LABEL_9284",
"LABEL_9285",
"LABEL_9286",
"LABEL_9287",
"LABEL_9288",
"LABEL_9289",
"LABEL_929",
"LABEL_9290",
"LABEL_9291",
"LABEL_9292",
"LABEL_9293",
"LABEL_9294",
"LABEL_9295",
"LABEL_9296",
"LABEL_9297",
"LABEL_9298",
"LABEL_9299",
"LABEL_93",
"LABEL_930",
"LABEL_9300",
"LABEL_9301",
"LABEL_9302",
"LABEL_9303",
"LABEL_9304",
"LABEL_9305",
"LABEL_9306",
"LABEL_9307",
"LABEL_9308",
"LABEL_9309",
"LABEL_931",
"LABEL_9310",
"LABEL_9311",
"LABEL_9312",
"LABEL_9313",
"LABEL_9314",
"LABEL_9315",
"LABEL_9316",
"LABEL_9317",
"LABEL_9318",
"LABEL_9319",
"LABEL_932",
"LABEL_9320",
"LABEL_9321",
"LABEL_9322",
"LABEL_9323",
"LABEL_9324",
"LABEL_9325",
"LABEL_9326",
"LABEL_9327",
"LABEL_9328",
"LABEL_9329",
"LABEL_933",
"LABEL_9330",
"LABEL_9331",
"LABEL_9332",
"LABEL_9333",
"LABEL_9334",
"LABEL_9335",
"LABEL_9336",
"LABEL_9337",
"LABEL_9338",
"LABEL_9339",
"LABEL_934",
"LABEL_9340",
"LABEL_9341",
"LABEL_9342",
"LABEL_9343",
"LABEL_9344",
"LABEL_9345",
"LABEL_9346",
"LABEL_9347",
"LABEL_9348",
"LABEL_9349",
"LABEL_935",
"LABEL_9350",
"LABEL_9351",
"LABEL_9352",
"LABEL_9353",
"LABEL_9354",
"LABEL_9355",
"LABEL_9356",
"LABEL_9357",
"LABEL_9358",
"LABEL_9359",
"LABEL_936",
"LABEL_9360",
"LABEL_9361",
"LABEL_9362",
"LABEL_9363",
"LABEL_9364",
"LABEL_9365",
"LABEL_9366",
"LABEL_9367",
"LABEL_9368",
"LABEL_9369",
"LABEL_937",
"LABEL_9370",
"LABEL_9371",
"LABEL_9372",
"LABEL_9373",
"LABEL_9374",
"LABEL_9375",
"LABEL_9376",
"LABEL_9377",
"LABEL_9378",
"LABEL_9379",
"LABEL_938",
"LABEL_9380",
"LABEL_9381",
"LABEL_9382",
"LABEL_9383",
"LABEL_9384",
"LABEL_9385",
"LABEL_9386",
"LABEL_9387",
"LABEL_9388",
"LABEL_9389",
"LABEL_939",
"LABEL_9390",
"LABEL_9391",
"LABEL_9392",
"LABEL_9393",
"LABEL_9394",
"LABEL_9395",
"LABEL_9396",
"LABEL_9397",
"LABEL_9398",
"LABEL_9399",
"LABEL_94",
"LABEL_940",
"LABEL_9400",
"LABEL_9401",
"LABEL_9402",
"LABEL_9403",
"LABEL_9404",
"LABEL_9405",
"LABEL_9406",
"LABEL_9407",
"LABEL_9408",
"LABEL_9409",
"LABEL_941",
"LABEL_9410",
"LABEL_9411",
"LABEL_9412",
"LABEL_9413",
"LABEL_9414",
"LABEL_9415",
"LABEL_9416",
"LABEL_9417",
"LABEL_9418",
"LABEL_9419",
"LABEL_942",
"LABEL_9420",
"LABEL_9421",
"LABEL_9422",
"LABEL_9423",
"LABEL_9424",
"LABEL_9425",
"LABEL_9426",
"LABEL_9427",
"LABEL_9428",
"LABEL_9429",
"LABEL_943",
"LABEL_9430",
"LABEL_9431",
"LABEL_9432",
"LABEL_9433",
"LABEL_9434",
"LABEL_9435",
"LABEL_9436",
"LABEL_9437",
"LABEL_9438",
"LABEL_9439",
"LABEL_944",
"LABEL_9440",
"LABEL_9441",
"LABEL_9442",
"LABEL_9443",
"LABEL_9444",
"LABEL_9445",
"LABEL_9446",
"LABEL_9447",
"LABEL_9448",
"LABEL_9449",
"LABEL_945",
"LABEL_9450",
"LABEL_9451",
"LABEL_9452",
"LABEL_9453",
"LABEL_9454",
"LABEL_9455",
"LABEL_9456",
"LABEL_9457",
"LABEL_9458",
"LABEL_9459",
"LABEL_946",
"LABEL_9460",
"LABEL_9461",
"LABEL_9462",
"LABEL_9463",
"LABEL_9464",
"LABEL_9465",
"LABEL_9466",
"LABEL_9467",
"LABEL_9468",
"LABEL_9469",
"LABEL_947",
"LABEL_9470",
"LABEL_9471",
"LABEL_9472",
"LABEL_9473",
"LABEL_9474",
"LABEL_9475",
"LABEL_9476",
"LABEL_9477",
"LABEL_9478",
"LABEL_9479",
"LABEL_948",
"LABEL_9480",
"LABEL_9481",
"LABEL_9482",
"LABEL_9483",
"LABEL_9484",
"LABEL_9485",
"LABEL_9486",
"LABEL_9487",
"LABEL_9488",
"LABEL_9489",
"LABEL_949",
"LABEL_9490",
"LABEL_9491",
"LABEL_9492",
"LABEL_9493",
"LABEL_9494",
"LABEL_9495",
"LABEL_9496",
"LABEL_9497",
"LABEL_9498",
"LABEL_9499",
"LABEL_95",
"LABEL_950",
"LABEL_9500",
"LABEL_9501",
"LABEL_9502",
"LABEL_9503",
"LABEL_9504",
"LABEL_9505",
"LABEL_9506",
"LABEL_9507",
"LABEL_9508",
"LABEL_9509",
"LABEL_951",
"LABEL_9510",
"LABEL_9511",
"LABEL_9512",
"LABEL_9513",
"LABEL_9514",
"LABEL_9515",
"LABEL_9516",
"LABEL_9517",
"LABEL_9518",
"LABEL_9519",
"LABEL_952",
"LABEL_9520",
"LABEL_9521",
"LABEL_9522",
"LABEL_9523",
"LABEL_9524",
"LABEL_9525",
"LABEL_9526",
"LABEL_9527",
"LABEL_9528",
"LABEL_9529",
"LABEL_953",
"LABEL_9530",
"LABEL_9531",
"LABEL_9532",
"LABEL_9533",
"LABEL_9534",
"LABEL_9535",
"LABEL_9536",
"LABEL_9537",
"LABEL_9538",
"LABEL_9539",
"LABEL_954",
"LABEL_9540",
"LABEL_9541",
"LABEL_9542",
"LABEL_9543",
"LABEL_9544",
"LABEL_9545",
"LABEL_9546",
"LABEL_9547",
"LABEL_9548",
"LABEL_9549",
"LABEL_955",
"LABEL_9550",
"LABEL_9551",
"LABEL_9552",
"LABEL_9553",
"LABEL_9554",
"LABEL_9555",
"LABEL_9556",
"LABEL_9557",
"LABEL_9558",
"LABEL_9559",
"LABEL_956",
"LABEL_9560",
"LABEL_9561",
"LABEL_9562",
"LABEL_9563",
"LABEL_9564",
"LABEL_9565",
"LABEL_9566",
"LABEL_9567",
"LABEL_9568",
"LABEL_9569",
"LABEL_957",
"LABEL_9570",
"LABEL_9571",
"LABEL_9572",
"LABEL_9573",
"LABEL_9574",
"LABEL_9575",
"LABEL_9576",
"LABEL_9577",
"LABEL_9578",
"LABEL_9579",
"LABEL_958",
"LABEL_9580",
"LABEL_9581",
"LABEL_9582",
"LABEL_9583",
"LABEL_9584",
"LABEL_9585",
"LABEL_9586",
"LABEL_9587",
"LABEL_9588",
"LABEL_9589",
"LABEL_959",
"LABEL_9590",
"LABEL_9591",
"LABEL_9592",
"LABEL_9593",
"LABEL_9594",
"LABEL_9595",
"LABEL_9596",
"LABEL_9597",
"LABEL_9598",
"LABEL_9599",
"LABEL_96",
"LABEL_960",
"LABEL_9600",
"LABEL_9601",
"LABEL_9602",
"LABEL_9603",
"LABEL_9604",
"LABEL_9605",
"LABEL_9606",
"LABEL_9607",
"LABEL_9608",
"LABEL_9609",
"LABEL_961",
"LABEL_9610",
"LABEL_9611",
"LABEL_9612",
"LABEL_9613",
"LABEL_9614",
"LABEL_9615",
"LABEL_9616",
"LABEL_9617",
"LABEL_9618",
"LABEL_9619",
"LABEL_962",
"LABEL_9620",
"LABEL_9621",
"LABEL_9622",
"LABEL_9623",
"LABEL_9624",
"LABEL_9625",
"LABEL_9626",
"LABEL_9627",
"LABEL_9628",
"LABEL_9629",
"LABEL_963",
"LABEL_9630",
"LABEL_9631",
"LABEL_9632",
"LABEL_9633",
"LABEL_9634",
"LABEL_9635",
"LABEL_9636",
"LABEL_9637",
"LABEL_9638",
"LABEL_9639",
"LABEL_964",
"LABEL_9640",
"LABEL_9641",
"LABEL_9642",
"LABEL_9643",
"LABEL_9644",
"LABEL_9645",
"LABEL_9646",
"LABEL_9647",
"LABEL_9648",
"LABEL_9649",
"LABEL_965",
"LABEL_9650",
"LABEL_9651",
"LABEL_9652",
"LABEL_9653",
"LABEL_9654",
"LABEL_9655",
"LABEL_9656",
"LABEL_9657",
"LABEL_9658",
"LABEL_9659",
"LABEL_966",
"LABEL_9660",
"LABEL_9661",
"LABEL_9662",
"LABEL_9663",
"LABEL_9664",
"LABEL_9665",
"LABEL_9666",
"LABEL_9667",
"LABEL_9668",
"LABEL_9669",
"LABEL_967",
"LABEL_9670",
"LABEL_9671",
"LABEL_9672",
"LABEL_9673",
"LABEL_9674",
"LABEL_9675",
"LABEL_9676",
"LABEL_9677",
"LABEL_9678",
"LABEL_9679",
"LABEL_968",
"LABEL_9680",
"LABEL_9681",
"LABEL_9682",
"LABEL_9683",
"LABEL_9684",
"LABEL_9685",
"LABEL_9686",
"LABEL_9687",
"LABEL_9688",
"LABEL_9689",
"LABEL_969",
"LABEL_9690",
"LABEL_9691",
"LABEL_9692",
"LABEL_9693",
"LABEL_9694",
"LABEL_9695",
"LABEL_9696",
"LABEL_9697",
"LABEL_9698",
"LABEL_9699",
"LABEL_97",
"LABEL_970",
"LABEL_9700",
"LABEL_9701",
"LABEL_9702",
"LABEL_9703",
"LABEL_9704",
"LABEL_9705",
"LABEL_9706",
"LABEL_9707",
"LABEL_9708",
"LABEL_9709",
"LABEL_971",
"LABEL_9710",
"LABEL_9711",
"LABEL_9712",
"LABEL_9713",
"LABEL_9714",
"LABEL_9715",
"LABEL_9716",
"LABEL_9717",
"LABEL_9718",
"LABEL_9719",
"LABEL_972",
"LABEL_9720",
"LABEL_9721",
"LABEL_9722",
"LABEL_9723",
"LABEL_9724",
"LABEL_9725",
"LABEL_9726",
"LABEL_9727",
"LABEL_9728",
"LABEL_9729",
"LABEL_973",
"LABEL_9730",
"LABEL_9731",
"LABEL_9732",
"LABEL_9733",
"LABEL_9734",
"LABEL_9735",
"LABEL_9736",
"LABEL_9737",
"LABEL_9738",
"LABEL_9739",
"LABEL_974",
"LABEL_9740",
"LABEL_9741",
"LABEL_9742",
"LABEL_9743",
"LABEL_9744",
"LABEL_9745",
"LABEL_9746",
"LABEL_9747",
"LABEL_9748",
"LABEL_9749",
"LABEL_975",
"LABEL_9750",
"LABEL_9751",
"LABEL_9752",
"LABEL_9753",
"LABEL_9754",
"LABEL_9755",
"LABEL_9756",
"LABEL_9757",
"LABEL_9758",
"LABEL_9759",
"LABEL_976",
"LABEL_9760",
"LABEL_9761",
"LABEL_9762",
"LABEL_9763",
"LABEL_9764",
"LABEL_9765",
"LABEL_9766",
"LABEL_9767",
"LABEL_9768",
"LABEL_9769",
"LABEL_977",
"LABEL_9770",
"LABEL_9771",
"LABEL_9772",
"LABEL_9773",
"LABEL_9774",
"LABEL_9775",
"LABEL_9776",
"LABEL_9777",
"LABEL_9778",
"LABEL_9779",
"LABEL_978",
"LABEL_9780",
"LABEL_9781",
"LABEL_9782",
"LABEL_9783",
"LABEL_9784",
"LABEL_9785",
"LABEL_9786",
"LABEL_9787",
"LABEL_9788",
"LABEL_9789",
"LABEL_979",
"LABEL_9790",
"LABEL_9791",
"LABEL_9792",
"LABEL_9793",
"LABEL_9794",
"LABEL_9795",
"LABEL_9796",
"LABEL_9797",
"LABEL_9798",
"LABEL_9799",
"LABEL_98",
"LABEL_980",
"LABEL_9800",
"LABEL_9801",
"LABEL_9802",
"LABEL_9803",
"LABEL_9804",
"LABEL_9805",
"LABEL_9806",
"LABEL_9807",
"LABEL_9808",
"LABEL_9809",
"LABEL_981",
"LABEL_9810",
"LABEL_9811",
"LABEL_9812",
"LABEL_9813",
"LABEL_9814",
"LABEL_9815",
"LABEL_9816",
"LABEL_9817",
"LABEL_9818",
"LABEL_9819",
"LABEL_982",
"LABEL_9820",
"LABEL_9821",
"LABEL_9822",
"LABEL_9823",
"LABEL_9824",
"LABEL_9825",
"LABEL_9826",
"LABEL_9827",
"LABEL_9828",
"LABEL_9829",
"LABEL_983",
"LABEL_9830",
"LABEL_9831",
"LABEL_9832",
"LABEL_9833",
"LABEL_9834",
"LABEL_9835",
"LABEL_9836",
"LABEL_9837",
"LABEL_9838",
"LABEL_9839",
"LABEL_984",
"LABEL_9840",
"LABEL_9841",
"LABEL_9842",
"LABEL_9843",
"LABEL_9844",
"LABEL_9845",
"LABEL_9846",
"LABEL_9847",
"LABEL_9848",
"LABEL_9849",
"LABEL_985",
"LABEL_9850",
"LABEL_9851",
"LABEL_9852",
"LABEL_9853",
"LABEL_9854",
"LABEL_9855",
"LABEL_9856",
"LABEL_9857",
"LABEL_9858",
"LABEL_9859",
"LABEL_986",
"LABEL_9860",
"LABEL_9861",
"LABEL_9862",
"LABEL_9863",
"LABEL_9864",
"LABEL_9865",
"LABEL_9866",
"LABEL_9867",
"LABEL_9868",
"LABEL_9869",
"LABEL_987",
"LABEL_9870",
"LABEL_9871",
"LABEL_9872",
"LABEL_9873",
"LABEL_9874",
"LABEL_9875",
"LABEL_9876",
"LABEL_9877",
"LABEL_9878",
"LABEL_9879",
"LABEL_988",
"LABEL_9880",
"LABEL_9881",
"LABEL_9882",
"LABEL_9883",
"LABEL_9884",
"LABEL_9885",
"LABEL_9886",
"LABEL_9887",
"LABEL_9888",
"LABEL_9889",
"LABEL_989",
"LABEL_9890",
"LABEL_9891",
"LABEL_9892",
"LABEL_9893",
"LABEL_9894",
"LABEL_9895",
"LABEL_9896",
"LABEL_9897",
"LABEL_9898",
"LABEL_9899",
"LABEL_99",
"LABEL_990",
"LABEL_9900",
"LABEL_9901",
"LABEL_9902",
"LABEL_9903",
"LABEL_9904",
"LABEL_9905",
"LABEL_9906",
"LABEL_9907",
"LABEL_9908",
"LABEL_9909",
"LABEL_991",
"LABEL_9910",
"LABEL_9911",
"LABEL_9912",
"LABEL_9913",
"LABEL_9914",
"LABEL_9915",
"LABEL_9916",
"LABEL_9917",
"LABEL_9918",
"LABEL_9919",
"LABEL_992",
"LABEL_9920",
"LABEL_9921",
"LABEL_9922",
"LABEL_9923",
"LABEL_9924",
"LABEL_9925",
"LABEL_9926",
"LABEL_9927",
"LABEL_9928",
"LABEL_9929",
"LABEL_993",
"LABEL_9930",
"LABEL_9931",
"LABEL_9932",
"LABEL_9933",
"LABEL_9934",
"LABEL_9935",
"LABEL_9936",
"LABEL_9937",
"LABEL_9938",
"LABEL_9939",
"LABEL_994",
"LABEL_9940",
"LABEL_9941",
"LABEL_9942",
"LABEL_9943",
"LABEL_9944",
"LABEL_9945",
"LABEL_9946",
"LABEL_9947",
"LABEL_9948",
"LABEL_9949",
"LABEL_995",
"LABEL_9950",
"LABEL_9951",
"LABEL_9952",
"LABEL_9953",
"LABEL_9954",
"LABEL_9955",
"LABEL_9956",
"LABEL_9957",
"LABEL_9958",
"LABEL_9959",
"LABEL_996",
"LABEL_9960",
"LABEL_9961",
"LABEL_9962",
"LABEL_9963",
"LABEL_9964",
"LABEL_9965",
"LABEL_9966",
"LABEL_9967",
"LABEL_9968",
"LABEL_9969",
"LABEL_997",
"LABEL_9970",
"LABEL_9971",
"LABEL_9972",
"LABEL_9973",
"LABEL_9974",
"LABEL_9975",
"LABEL_9976",
"LABEL_9977",
"LABEL_9978",
"LABEL_9979",
"LABEL_998",
"LABEL_9980",
"LABEL_9981",
"LABEL_9982",
"LABEL_9983",
"LABEL_9984",
"LABEL_9985",
"LABEL_9986",
"LABEL_9987",
"LABEL_9988",
"LABEL_9989",
"LABEL_999",
"LABEL_9990",
"LABEL_9991",
"LABEL_9992",
"LABEL_9993",
"LABEL_9994",
"LABEL_9995",
"LABEL_9996",
"LABEL_9997",
"LABEL_9998",
"LABEL_9999"
] | ---
license: apache-2.0
tags:
- text-classification
---
# Clinical BERT for ICD-10 Prediction
The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K), and trained on either all MIMIC notes or only discharge summaries.
---
## How to use the model
Load the model via the transformers library:
```python
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
model = BertForSequenceClassification.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
config = model.config
```
Run the model on clinical diagnosis text:
```python
text = "subarachnoid hemorrhage scalp laceration service: surgery major surgical or invasive"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
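The top-5 ranking below relies on NumPy's `argsort`. As a minimal, self-contained sketch of how that selection works, using a made-up score vector rather than real model output:

```python
import numpy as np

# Hypothetical logits for a 6-class toy example (not real model output)
logits = np.array([0.1, 2.3, -0.5, 1.7, 0.0, 3.1])

# argsort returns indices in ascending order of score;
# [::-1] reverses to descending, [:5] keeps the five highest
top5 = logits.argsort()[::-1][:5]
print(top5.tolist())  # → [5, 1, 3, 0, 4]
```

The same pattern applied to `output.logits` yields the indices of the most probable ICD-10 codes, which `config.id2label` then maps back to code strings.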
Return the top-5 predicted ICD-10 codes (the original snippet used a bare `return`, which is invalid outside a function, so the result is assigned to a variable instead):
```python
results = output.logits.detach().cpu().numpy()[0].argsort()[::-1][:5]
top_5_codes = [config.id2label[ids] for ids in results]
``` |
72 | adorkin/xlm-roberta-en-ru-emoji | [
"☀",
"✨",
"❤",
"🇺🇸",
"🎄",
"💕",
"💙",
"💜",
"💯",
"📷",
"📸",
"🔥",
"😁",
"😂",
"😉",
"😊",
"😍",
"😎",
"😘",
"😜"
] | ---
language:
- en
- ru
datasets:
- tweet_eval
model_index:
- name: xlm-roberta-en-ru-emoji
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: Tweet Eval
type: tweet_eval
args: emoji
widget:
- text: "Отлично!"
- text: "Awesome!"
- text: "lol"
---
# xlm-roberta-en-ru-emoji
- Problem type: Multi-class Classification |
73 | AlekseyKorshuk/bert | [
"0",
"1",
"2",
"3",
"4"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5316
- Accuracy: 0.2936
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.5355 | 1.0 | 6195 | 1.5339 | 0.2923 |
| 1.5248 | 2.0 | 12390 | 1.5316 | 0.2936 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
75 | Alireza1044/albert-base-v2-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model_index:
- name: mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metric:
name: Accuracy
type: accuracy
value: 0.8500813669650122
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5383
- Accuracy: 0.8501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
81 | Alireza1044/albert-base-v2-stsb | [
"LABEL_0"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model_index:
- name: stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metric:
name: Spearmanr
type: spearmanr
value: 0.9050744778895732
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stsb
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3978
- Pearson: 0.9090
- Spearmanr: 0.9051
- Combined Score: 0.9071
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
85 | Anamika/autonlp-Feedback1-479512837 | [
"Claim",
"Concluding Statement",
"Counterclaim",
"Evidence",
"Lead",
"Position",
"Rebuttal"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Anamika/autonlp-data-Feedback1
co2_eq_emissions: 123.88023112815048
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 479512837
- CO2 Emissions (in grams): 123.88023112815048
## Validation Metrics
- Loss: 0.6220805048942566
- Accuracy: 0.7961119332705503
- Macro F1: 0.7616345204219084
- Micro F1: 0.7961119332705503
- Weighted F1: 0.795387503907883
- Macro Precision: 0.782839455262034
- Micro Precision: 0.7961119332705503
- Weighted Precision: 0.7992606754484262
- Macro Recall: 0.7451485972167191
- Micro Recall: 0.7961119332705503
- Weighted Recall: 0.7961119332705503
## Usage
You can use cURL to access this model:
```shell
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-Feedback1-479512837
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
86 | Anamika/autonlp-fa-473312409 | [
"Claim",
"Concluding Statement",
"Counterclaim",
"Evidence",
"Lead",
"Position",
"Rebuttal"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Anamika/autonlp-data-fa
co2_eq_emissions: 25.128735714898614
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 473312409
- CO2 Emissions (in grams): 25.128735714898614
## Validation Metrics
- Loss: 0.6010786890983582
- Accuracy: 0.7990650945370823
- Macro F1: 0.7429662929144928
- Micro F1: 0.7990650945370823
- Weighted F1: 0.7977660363770382
- Macro Precision: 0.7744390888231261
- Micro Precision: 0.7990650945370823
- Weighted Precision: 0.800444194278352
- Macro Recall: 0.7198278524814119
- Micro Recall: 0.7990650945370823
- Weighted Recall: 0.7990650945370823
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-fa-473312409
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
88 | Aron/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.92
- name: F1
type: f1
value: 0.9201604193183255
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2295
- Accuracy: 0.92
- F1: 0.9202
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8187 | 1.0 | 250 | 0.3137 | 0.902 | 0.8983 |
| 0.2514 | 2.0 | 500 | 0.2295 | 0.92 | 0.9202 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
90 | BaptisteDoyen/camembert-base-xnli | [
"entailment",
"neutral",
"contradiction"
] | ---
language:
- fr
thumbnail:
tags:
- zero-shot-classification
- xnli
- nli
- fr
license: mit
pipeline_tag: zero-shot-classification
datasets:
- xnli
metrics:
- accuracy
---
# camembert-base-xnli
## Model description
Camembert-base model fine-tuned on the French part of the XNLI dataset. <br>
One of the few zero-shot classification models that works on French 🇫🇷
## Intended uses & limitations
#### How to use
Two different usages:
- As a Zero-Shot sequence classifier :
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="BaptisteDoyen/camembert-base-xnli")
sequence = "L'équipe de France joue aujourd'hui au Parc des Princes"
candidate_labels = ["sport","politique","science"]
hypothesis_template = "Ce texte parle de {}."
classifier(sequence, candidate_labels, hypothesis_template=hypothesis_template)
# outputs :
# {'sequence': "L'équipe de France joue aujourd'hui au Parc des Princes",
# 'labels': ['sport', 'politique', 'science'],
# 'scores': [0.8595073223114014, 0.10821866989135742, 0.0322740375995636]}
```
- As a premise/hypothesis checker : <br>
The idea here is to compute the probability that the hypothesis is entailed by the premise, i.e. \\( P(hypothesis | premise) \\)
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# load model and tokenizer
nli_model = AutoModelForSequenceClassification.from_pretrained("BaptisteDoyen/camembert-base-xnli")
tokenizer = AutoTokenizer.from_pretrained("BaptisteDoyen/camembert-base-xnli")
# sequences
premise = "le score pour les bleus est élevé"
hypothesis = "L'équipe de France a fait un bon match"
# tokenize and run through model
x = tokenizer.encode(premise, hypothesis, return_tensors='pt')
logits = nli_model(x)[0]
# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (0) as the probability of the label being true
entail_contradiction_logits = logits[:,::2]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,0]
prob_label_is_true.item() * 100
# outputs
# 86.40775084495544
```
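The renormalisation step above (discarding the *neutral* class and softmaxing over the two remaining logits) is simple enough to verify by hand. A plain-Python sketch, using illustrative logits rather than actual model outputs:

```python
import math

def entailment_probability(entail_logit, contradict_logit):
    """Two-way softmax over the entailment/contradiction logits
    after the 'neutral' class has been discarded."""
    e = math.exp(entail_logit)
    c = math.exp(contradict_logit)
    return e / (e + c)

# Illustrative logits only, not real model outputs:
print(round(entailment_probability(2.0, 0.0) * 100, 2))  # 88.08
```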
## Training data
Training data is the french fold of the [XNLI](https://research.fb.com/publications/xnli-evaluating-cross-lingual-sentence-representations/) dataset released in 2018 by Facebook. <br>
It can be loaded easily with the ```datasets``` library:
```python
from datasets import load_dataset
dataset = load_dataset('xnli', 'fr')
```
## Training/Fine-Tuning procedure
The training procedure is fairly standard and was performed in the cloud on a single GPU. <br>
Main training parameters:
- ```lr = 2e-5``` with ```lr_scheduler_type = "linear"```
- ```num_train_epochs = 4```
- ```batch_size = 12``` (limited by GPU-memory)
- ```weight_decay = 0.01```
- ```metric_for_best_model = "eval_accuracy"```
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | Accuracy |
| ---------- |-------------|
| validation | 81.4 |
| test | 81.7 |
|
96 | CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | [
"البسيط",
"الخفيف",
"الدوبيت",
"الرجز",
"الرمل",
"السريع",
"السلسلة",
"الطويل",
"الكامل",
"المتدارك",
"المتقارب",
"المجتث",
"المديد",
"المضارع",
"المقتضب",
"المنسرح",
"المواليا",
"الهزج",
"الوافر",
"شعر التفعيلة",
"شعر حر",
"عامي",
"موشح"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-CA Poetry Classification Model
## Model description
**CAMeLBERT-CA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9845284819602966},
{'label': 'الكامل', 'score': 0.752918004989624}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
97 | CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment | [
"negative",
"neutral",
"positive"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT-CA SA Model
## Model description
**CAMeLBERT-CA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
98 | CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | [
"البسيط",
"الخفيف",
"الدوبيت",
"الرجز",
"الرمل",
"السريع",
"السلسلة",
"الطويل",
"الكامل",
"المتدارك",
"المتقارب",
"المجتث",
"المديد",
"المضارع",
"المقتضب",
"المنسرح",
"المواليا",
"الهزج",
"الوافر",
"شعر التفعيلة",
"شعر حر",
"عامي",
"موشح"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-DA Poetry Classification Model
## Model description
**CAMeLBERT-DA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9874765276908875},
{'label': 'السلسلة', 'score': 0.6877778172492981}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
99 | CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment | [
"negative",
"neutral",
"positive"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT-DA SA Model
## Model description
**CAMeLBERT-DA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
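Each prediction carries a confidence score. If a downstream application only wants high-confidence sentiment calls, one simple post-processing option (a sketch, not part of this model; the threshold value is an arbitrary choice) is to fall back to the model's *neutral* label below a cutoff:

```python
def confident_label(prediction, threshold=0.8):
    """Keep the predicted label only when its score clears the threshold;
    otherwise fall back to 'neutral'. The 0.8 cutoff is arbitrary."""
    return prediction["label"] if prediction["score"] >= threshold else "neutral"

# Illustrative pipeline-style outputs (scores are made up):
print(confident_label({"label": "positive", "score": 0.96}))  # positive
print(confident_label({"label": "negative", "score": 0.55}))  # neutral
```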
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
100 | CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26 | [
"ALE",
"ALG",
"ALX",
"AMM",
"ASW",
"BAG",
"BAS",
"BEI",
"BEN",
"CAI",
"DAM",
"DOH",
"FES",
"JED",
"JER",
"KHA",
"MOS",
"MSA",
"MUS",
"RAB",
"RIY",
"SAL",
"SAN",
"SFX",
"TRI",
"TUN"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-Mix DID Madar Corpus26 Model
## Model description
**CAMeLBERT-Mix DID Madar Corpus26 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [MADAR Corpus 26](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 26 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix DID Madar Corpus26 model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar26')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'CAI', 'score': 0.8751305937767029},
{'label': 'DOH', 'score': 0.9867215156555176}]
```
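Predictions are made per sentence. To assign a dialect to a longer document, one simple strategy (a sketch, not something this card prescribes) is a majority vote over its sentence-level labels, using the dict format the pipeline returns:

```python
from collections import Counter

def majority_dialect(predictions):
    """Return the most frequent sentence-level dialect label."""
    counts = Counter(p["label"] for p in predictions)
    return counts.most_common(1)[0][0]

# Illustrative pipeline-style output (scores are made up):
preds = [
    {"label": "CAI", "score": 0.88},
    {"label": "CAI", "score": 0.91},
    {"label": "DOH", "score": 0.99},
]
print(majority_dialect(preds))  # CAI
```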
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
101 | CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6 | [
"BEI",
"CAI",
"DOH",
"MSA",
"RAB",
"TUN"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-Mix DID MADAR Corpus6 Model
## Model description
**CAMeLBERT-Mix DID MADAR Corpus6 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [MADAR Corpus 6](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 6 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix DID MADAR Corpus6 model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar6')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'CAI', 'score': 0.9996405839920044},
{'label': 'DOH', 'score': 0.9997853636741638}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
102 | CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi | [
"Algeria",
"Bahrain",
"Djibouti",
"Egypt",
"Iraq",
"Jordan",
"Kuwait",
"Lebanon",
"Libya",
"Mauritania",
"Morocco",
"Oman",
"Palestine",
"Qatar",
"Saudi_Arabia",
"Somalia",
"Sudan",
"Syria",
"Tunisia",
"United_Arab_Emirates",
"Yemen"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-Mix DID NADI Model
## Model description
**CAMeLBERT-Mix DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [NADI Country-level](https://sites.google.com/view/nadi-shared-task) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix DID NADI model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.920274019241333},
{'label': 'Saudi_Arabia', 'score': 0.26750022172927856}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
103 | CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry | [
"البسيط",
"الخفيف",
"الدوبيت",
"الرجز",
"الرمل",
"السريع",
"السلسلة",
"الطويل",
"الكامل",
"المتدارك",
"المتقارب",
"المجتث",
"المديد",
"المضارع",
"المقتضب",
"المنسرح",
"المواليا",
"الهزج",
"الوافر",
"شعر التفعيلة",
"شعر حر",
"عامي",
"موشح"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-Mix Poetry Classification Model
## Model description
**CAMeLBERT-Mix Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9937475919723511},
{'label': 'الكامل', 'score': 0.971284031867981}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
104 | CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | [
"negative",
"neutral",
"positive"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT Mix SA Model
## Model description
**CAMeLBERT Mix SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT Mix SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
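The `score` in the pipeline output above corresponds to the softmax probability of the predicted class over the three sentiment logits. As a rough illustration of that mapping (the logits below are made up, not actual model outputs):

```python
# Sketch: how a class label and score are derived from raw logits.
# The logits here are hypothetical, not produced by the model.
import math

def softmax(logits):
    m = max(logits)                     # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

labels = ['negative', 'neutral', 'positive']
logits = [0.2, -1.1, 3.4]               # hypothetical logits
probs = softmax(logits)
best = max(range(len(probs)), key=probs.__getitem__)
print(labels[best], round(probs[best], 2))  # → positive 0.95
```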
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
105 | CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5 | [
"Algeria",
"Bahrain",
"Djibouti",
"Egypt",
"Iraq",
"Jordan",
"Kuwait",
"Lebanon",
"Libya",
"Mauritania",
"Morocco",
"Oman",
"Palestine",
"Qatar",
"Saudi_Arabia",
"Somalia",
"Sudan",
"Syria",
"Tunisia",
"United_Arab_Emirates",
"Yemen"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-MSA DID MADAR Twitter-5 Model
## Model description
**CAMeLBERT-MSA DID MADAR Twitter-5 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [MADAR Twitter-5](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.5741344094276428},
{'label': 'Kuwait', 'score': 0.5225679278373718}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
106 | CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi | [
"Algeria",
"Bahrain",
"Djibouti",
"Egypt",
"Iraq",
"Jordan",
"Kuwait",
"Lebanon",
"Libya",
"Mauritania",
"Morocco",
"Oman",
"Palestine",
"Qatar",
"Saudi_Arabia",
"Somalia",
"Sudan",
"Syria",
"Tunisia",
"United_Arab_Emirates",
"Yemen"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-MSA DID NADI Model
## Model description
**CAMeLBERT-MSA DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [NADI Country-level](https://sites.google.com/view/nadi-shared-task) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA DID NADI model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.9242768287658691},
{'label': 'Saudi_Arabia', 'score': 0.3400847613811493}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
107 | CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry | [
"البسيط",
"الخفيف",
"الدوبيت",
"الرجز",
"الرمل",
"السريع",
"السلسلة",
"الطويل",
"الكامل",
"المتدارك",
"المتقارب",
"المجتث",
"المديد",
"المضارع",
"المقتضب",
"المنسرح",
"المواليا",
"الهزج",
"الوافر",
"شعر التفعيلة",
"شعر حر",
"عامي",
"موشح"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-MSA Poetry Classification Model
## Model description
**CAMeLBERT-MSA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9914996027946472},
{'label': 'الكامل', 'score': 0.917242169380188}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
108 | CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment | [
"negative",
"neutral",
"positive"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT MSA SA Model
## Model description
**CAMeLBERT MSA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT MSA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
109 | CLTL/icf-domains | [
"ADM",
"ATT",
"BER",
"ENR",
"ETN",
"FAC",
"INS",
"MBW",
"STM"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# A-PROOF ICF-domains Classification
## Description
A fine-tuned multi-label classification model that detects 9 [WHO-ICF](https://www.who.int/standards/classifications/international-classification-of-functioning-disability-and-health) domains in clinical text in Dutch. The model is based on a pre-trained Dutch medical language model ([link to be added]()), a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC.
## ICF domains
The model can detect 9 domains, which were chosen due to their relevance to recovery from COVID-19:
ICF code | Domain | name in repo
---|---|---
b440 | Respiration functions | ADM
b140 | Attention functions | ATT
d840-d859 | Work and employment | BER
b1300 | Energy level | ENR
d550 | Eating | ETN
d450 | Walking | FAC
b455 | Exercise tolerance functions | INS
b530 | Weight maintenance functions | MBW
b152 | Emotional functions | STM
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
from simpletransformers.classification import MultiLabelClassificationModel
model = MultiLabelClassificationModel(
'roberta',
'CLTL/icf-domains',
use_cuda=False,
)
example = 'Nu sinds 5-6 dagen progressieve benauwdheidsklachten (bij korte stukken lopen al kortademig), terwijl dit eerder niet zo was.'
predictions, raw_outputs = model.predict([example])
```
The predictions look like this:
```
[[1, 0, 0, 0, 0, 1, 1, 0, 0]]
```
The indices of the multi-label stand for:
```
[ADM, ATT, BER, ENR, ETN, FAC, INS, MBW, STM]
```
In other words, the above prediction corresponds to assigning the labels ADM, FAC and INS to the example sentence.
The raw outputs look like this:
```
[[0.51907885 0.00268032 0.0030862 0.03066113 0.00616694 0.64720929
0.67348498 0.0118863 0.0046311 ]]
```
For this model, the threshold at which the prediction for a label flips from 0 to 1 is **0.5**.
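Applying that threshold to the raw outputs can be sketched as a small helper (the names below are illustrative, not part of the released model):

```python
# Sketch: converting the model's raw multi-label outputs to domain labels
# using the 0.5 threshold stated above. Names here are illustrative.
LABELS = ['ADM', 'ATT', 'BER', 'ENR', 'ETN', 'FAC', 'INS', 'MBW', 'STM']

def to_labels(raw_row, threshold=0.5):
    """Return the domain names whose raw score meets the threshold."""
    return [lab for lab, p in zip(LABELS, raw_row) if p >= threshold]

raw = [0.51907885, 0.00268032, 0.0030862, 0.03066113, 0.00616694,
       0.64720929, 0.67348498, 0.0118863, 0.0046311]
print(to_labels(raw))  # → ['ADM', 'FAC', 'INS']
```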
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
- Threshold: 0.5
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
### Sentence-level
| | ADM | ATT | BER | ENR | ETN | FAC | INS | MBW | STM
|---|---|---|---|---|---|---|---|---|---
precision | 0.98 | 0.98 | 0.56 | 0.96 | 0.92 | 0.84 | 0.89 | 0.79 | 0.70
recall | 0.49 | 0.41 | 0.29 | 0.57 | 0.49 | 0.71 | 0.26 | 0.62 | 0.75
F1-score | 0.66 | 0.58 | 0.35 | 0.72 | 0.63 | 0.76 | 0.41 | 0.70 | 0.72
support | 775 | 39 | 54 | 160 | 382 | 253 | 287 | 125 | 181
### Note-level
| | ADM | ATT | BER | ENR | ETN | FAC | INS | MBW | STM
|---|---|---|---|---|---|---|---|---|---
precision | 1.0 | 1.0 | 0.66 | 0.96 | 0.95 | 0.84 | 0.95 | 0.87 | 0.80
recall | 0.89 | 0.56 | 0.44 | 0.70 | 0.72 | 0.89 | 0.46 | 0.87 | 0.87
F1-score | 0.94 | 0.71 | 0.50 | 0.81 | 0.82 | 0.86 | 0.61 | 0.87 | 0.84
support | 231 | 27 | 34 | 92 | 165 | 95 | 116 | 64 | 94
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD |
110 | CLTL/icf-levels-adm | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Respiration Functioning Levels (ICF b440)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing respiration functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about respiration functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with respiration, and/or respiratory rate is normal (EWS: 9-20).
3 | Shortness of breath in exercise (saturation ≥90), and/or respiratory rate is slightly increased (EWS: 21-30).
2 | Shortness of breath in rest (saturation ≥90), and/or respiratory rate is fairly increased (EWS: 31-35).
1 | Needs oxygen at rest or during exercise (saturation <90), and/or respiratory rate >35.
0 | Mechanical ventilation is needed.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
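A downstream consumer may want to clamp such out-of-scale values back to the 0–4 range; a minimal sketch (this post-processing is our suggestion, not part of the released model):

```python
# Sketch: clamping out-of-scale regression outputs to the 0-4 level range.
# This post-processing step is an assumption, not part of the model itself.
def clip_level(pred, lo=0.0, hi=4.0):
    return max(lo, min(hi, pred))

print(clip_level(4.2))   # → 4.0
print(clip_level(2.26))  # → 2.26
print(clip_level(-0.3))  # → 0.0
```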
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-adm',
use_cuda=False,
)
example = 'Nu sinds 5-6 dagen progressieve benauwdheidsklachten (bij korte stukken lopen al kortademig), terwijl dit eerder niet zo was.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
2.26
```
The raw outputs look like this:
```
[[2.26074648]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.48 | 0.37
mean squared error | 0.55 | 0.34
root mean squared error | 0.74 | 0.58
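The three metrics above are related (RMSE is the square root of MSE); with toy numbers (not the card's data) the definitions look like this:

```python
# Sketch: MAE, MSE, and RMSE definitions, with hypothetical toy values.
import math

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def mse(y, yhat):
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

y    = [4, 3, 2, 1]          # hypothetical gold levels
yhat = [3.6, 3.2, 2.5, 0.8]  # hypothetical predictions
print(round(math.sqrt(mse(y, yhat)), 2))  # RMSE is sqrt(MSE) → 0.35
```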
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
111 | CLTL/icf-levels-att | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Attention Functioning Levels (ICF b140)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing attention functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about attention functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with concentrating / directing / holding / dividing attention.
3 | Slight problem with concentrating / directing / holding / dividing attention for a longer period of time or for complex tasks.
2 | Can concentrate / direct / hold / divide attention only for a short time.
1 | Can barely concentrate / direct / hold / divide attention.
0 | Unable to concentrate / direct / hold / divide attention.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-att',
use_cuda=False,
)
example = 'Snel afgeleid, moeite aandacht te behouden.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
2.89
```
The raw outputs look like this:
```
[[2.89226103]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.99 | 1.03
mean squared error | 1.35 | 1.47
root mean squared error | 1.16 | 1.21
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
112 | CLTL/icf-levels-ber | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Work and Employment Functioning Levels (ICF d840-d859)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing work and employment functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about work and employment functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | Can work/study fully (like when healthy).
3 | Can work/study almost fully.
2 | Can work/study only for about 50\%, or can only work at home and cannot go to school / office.
1 | Work/study is severely limited.
0 | Cannot work/study.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-ber',
use_cuda=False,
)
example = 'Fysiek zwaar werk is niet mogelijk, maar administrative taken zou zij wel aan moeten kunnen.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
2.41
```
The raw outputs look like this:
```
[[2.40793037]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 1.56 | 1.49
mean squared error | 3.06 | 2.85
root mean squared error | 1.75 | 1.69
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
113 | CLTL/icf-levels-enr | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Energy Levels (ICF b1300)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing energy level. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about energy level in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with the energy level.
3 | Slight fatigue that causes mild limitations.
2 | Moderate fatigue; the patient gets easily tired from light activities or needs a long time to recover after an activity.
1 | Severe fatigue; the patient is capable of very little.
0 | Very severe fatigue; unable to do anything and mostly lays in bed.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-enr',
use_cuda=False,
)
example = 'Al jaren extreme vermoeidheid overdag, valt overdag in slaap tijdens school- en werkactiviteiten en soms zelfs tijdens een gesprek.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.98
```
The raw outputs look like this:
```
[[1.97520316]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done at the sentence level (the classification unit) and at the note level (the aggregated unit that is meaningful for healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.48 | 0.43
mean squared error | 0.49 | 0.42
root mean squared error | 0.70 | 0.65
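The reported error metrics can be computed directly from gold labels and predictions; a sketch with hypothetical values (not the actual test data, which cannot be released):

```python
import numpy as np

# Hypothetical gold functioning levels and model predictions (illustration only).
y_true = np.array([4.0, 3.0, 1.0, 2.0, 0.0])
y_pred = np.array([3.8, 2.5, 1.6, 2.2, 0.4])

mae = np.mean(np.abs(y_true - y_pred))   # mean absolute error
mse = np.mean((y_true - y_pred) ** 2)    # mean squared error
rmse = np.sqrt(mse)                      # root mean squared error
```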
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
114 | CLTL/icf-levels-etn | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Eating Functioning Levels (ICF d550)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing eating functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about eating functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | Can eat independently (in culturally acceptable ways), good intake, eats according to her/his needs.
3 | Can eat independently but with adjustments, and/or somewhat reduced intake (>75% of her/his needs), and/or good intake can be achieved with proper advice.
2 | Reduced intake, and/or stimulus / feeding modules / nutrition drinks are needed (but not tube feeding / TPN).
1 | Intake is severely reduced (<50% of her/his needs), and/or tube feeding / TPN is needed.
0 | Cannot eat, and/or fully dependent on tube feeding / TPN.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-etn',
use_cuda=False,
)
example = 'Sondevoeding is geïndiceerd'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
0.89
```
The raw outputs look like this:
```
[[0.8872931]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done at the sentence level (the classification unit) and at the note level (the aggregated unit that is meaningful for healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.59 | 0.50
mean squared error | 0.65 | 0.47
root mean squared error | 0.81 | 0.68
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
115 | CLTL/icf-levels-fac | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Walking Functioning Levels (ICF d450)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing walking functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about walking functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
5 | Patient can walk independently anywhere: level surface, uneven surface, slopes, stairs.
4 | Patient can walk independently on level surface but requires help on stairs, inclines, uneven surface; or, patient can walk independently, but the walking is not fully normal.
3 | Patient requires verbal supervision for walking, without physical contact.
2 | Patient needs continuous or intermittent support of one person to help with balance and coordination.
1 | Patient needs firm continuous support from one person who helps carrying weight and with balance.
0 | Patient cannot walk or needs help from two or more people; or, patient walks on a treadmill.
The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-fac',
use_cuda=False,
)
example = 'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
4.2
```
The raw outputs look like this:
```
[[4.20903111]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done at the sentence level (the classification unit) and at the note level (the aggregated unit that is meaningful for healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.70 | 0.66
mean squared error | 0.91 | 0.93
root mean squared error | 0.95 | 0.96
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
116 | CLTL/icf-levels-ins | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Exercise Tolerance Functioning Levels (ICF b455)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing exercise tolerance functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about exercise tolerance functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
5 | MET>6. Can tolerate jogging, hard exercises, running, climbing stairs fast, sports.
4 | 4≤MET≤6. Can tolerate walking / cycling at a brisk pace, considerable effort (e.g. cycling from 16 km/h), heavy housework.
3 | 3≤MET<4. Can tolerate walking / cycling at a normal pace, gardening, exercises without equipment.
2 | 2≤MET<3. Can tolerate walking at a slow to moderate pace, grocery shopping, light housework.
1 | 1≤MET<2. Can tolerate sitting activities.
0 | 0≤MET<1. Can physically tolerate only recumbent activities.
The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-ins',
use_cuda=False,
)
example = 'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
3.13
```
The raw outputs look like this:
```
[[3.1300993]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done at the sentence level (the classification unit) and at the note level (the aggregated unit that is meaningful for healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.69 | 0.61
mean squared error | 0.80 | 0.64
root mean squared error | 0.89 | 0.80
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
117 | CLTL/icf-levels-mbw | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Weight Maintenance Functioning Levels (ICF b530)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing weight maintenance functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about weight maintenance functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | Healthy weight, no unintentional weight loss or gain, SNAQ 0 or 1.
3 | Some unintentional weight loss or gain, or lost a lot of weight but gained some of it back afterwards.
2 | Moderate unintentional weight loss or gain (more than 3 kg in the last month), SNAQ 2.
1 | Severe unintentional weight loss or gain (more than 6 kg in the last 6 months), SNAQ ≥ 3.
0 | Severe unintentional weight loss or gain (more than 6 kg in the last 6 months) and admitted to ICU.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-mbw',
use_cuda=False,
)
example = 'Tijdens opname >10 kg afgevallen.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.95
```
The raw outputs look like this:
```
[[1.95429301]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done at the sentence level (the classification unit) and at the note level (the aggregated unit that is meaningful for healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.81 | 0.60
mean squared error | 0.83 | 0.56
root mean squared error | 0.91 | 0.75
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
118 | CLTL/icf-levels-stm | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Emotional Functioning Levels (ICF b152)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing emotional functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about emotional functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with emotional functioning: emotions are appropriate, well regulated, etc.
3 | Slight problem with emotional functioning: irritable, gloomy, etc.
2 | Moderate problem with emotional functioning: negative emotions, such as fear, anger, sadness, etc.
1 | Severe problem with emotional functioning: intense negative emotions, such as fear, anger, sadness, etc.
0 | Flat affect, apathy, unstable, inappropriate emotions.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-stm',
use_cuda=False,
)
example = 'Naarmate het somatische beeld een herstellende trend laat zien, valt op dat patient zich depressief en suicidaal uit.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.60
```
The raw outputs look like this:
```
[[1.60418844]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done at the sentence level (the classification unit) and at the note level (the aggregated unit that is meaningful for healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.76 | 0.68
mean squared error | 1.03 | 0.87
root mean squared error | 1.01 | 0.93
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
119 | CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | emilyalsentzer/Bio_ClinicalBERT with additional training through the finetuning pipeline described in "Extracting Seizure Frequency From Epilepsy Clinic Notes: A Machine Reading Approach To Natural Language Processing."
Citation: Kevin Xie, Ryan S Gallagher, Erin C Conrad, Chadric O Garrick, Steven N Baldassano, John M Bernabei, Peter D Galer, Nina J Ghosn, Adam S Greenblatt, Tara Jennings, Alana Kornspun, Catherine V Kulick-Soper, Jal M Panchal, Akash R Pattnaik, Brittany H Scheid, Danmeng Wei, Micah Weitzman, Ramya Muthukrishnan, Joongwon Kim, Brian Litt, Colin A Ellis, Dan Roth, Extracting seizure frequency from epilepsy clinic notes: a machine reading approach to natural language processing, Journal of the American Medical Informatics Association, 2022, ocac018, https://doi.org/10.1093/jamia/ocac018
Bio_ClinicalBERT_for_seizureFreedom_classification classifies patients as having seizures or being seizure free, using the HPI and/or Interval History paragraphs from a medical note. |
122 | Captain-1337/CrudeBERT | [
"negative",
"neutral",
"positive"
] | # Master Thesis
## Predictive Value of Sentiment Analysis from Headlines for Crude Oil Prices
### Understanding and Exploiting Deep Learning-based Sentiment Analysis from News Headlines for Predicting Price Movements of WTI Crude Oil
This thesis focuses on the research and development of state-of-the-art sentiment analysis methods that can provide a helpful quantification of news for assessing future price movements of crude oil.
CrudeBERT is a pre-trained NLP model to analyze sentiment of news headlines relevant to crude oil.
It was developed by fine-tuning [FinBERT: Financial Sentiment Analysis with Pre-trained Language Models](https://arxiv.org/pdf/1908.10063.pdf).

Performing sentiment analysis on the news regarding a specific asset requires domain adaptation.
Domain adaptation requires training data made up of examples with text and its associated polarity of sentiment.
The experiments show that pre-trained deep learning-based sentiment analysis can be further fine-tuned, and the conclusions of these experiments are as follows:
* Deep learning-based sentiment analysis models from the general financial domain, such as FinBERT, have little to no significance for the price development of crude oil. The reason is a lack of domain adaptation of the sentiment; moreover, the polarity of sentiment cannot be generalized and is highly dependent on the properties of its target.
* The properties of crude oil prices are, according to the literature, determined by changes in supply and demand. News can convey information about these changes, can broadly be identified through query searches, and can serve as a foundation for creating a training dataset for domain adaptation. News headlines tend to be rich enough in content to provide insights into supply and demand changes, even when the number of headlines is significantly reduced to more reputable sources.
* Domain adaptation can be achieved to some extent by analyzing the properties of the target through a literature review and creating a corresponding training dataset to fine-tune the model. For example, considering supply and demand changes regarding crude oil seems to be a suitable component of domain adaptation.
In order to advance sentiment analysis applications in the domain of crude oil, this paper presents CrudeBERT.
In general, sentiment analysis of headlines from crude oil through CrudeBERT could be a viable source of insight for the price behaviour of WTI crude oil.
However, further research is required to see if CrudeBERT can serve as beneficial for predicting oil prices.
For this matter, the code and the thesis are made publicly available on [GitHub](https://github.com/Captain-1337/Master-Thesis). |
123 | ClaudeYang/awesome_fb_model | [
"contradiction",
"entailment",
"neutral"
] | ---
pipeline_tag: zero-shot-classification
datasets:
- multi_nli
widget:
- text: "ETH"
candidate_labels: "Location & Address, Employment, Organizational, Name, Service, Studies, Science"
hypothesis_template: "This is {}."
---
ETH Zeroshot |
124 | CogComp/bart-faithful-summary-detector | [
"FAITHFUL",
"HALLUCINATED"
] | ---
language:
- en
thumbnail: https://cogcomp.seas.upenn.edu/images/logo.png
tags:
- text-classification
- bart
- xsum
license: cc-by-sa-4.0
datasets:
- xsum
widget:
- text: "<s> Ban Ki-moon was elected for a second term in 2007. </s></s> Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
- text: "<s> Ban Ki-moon was elected for a second term in 2011. </s></s> Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
---
# bart-faithful-summary-detector
## Model description
A BART (base) model trained to classify whether a summary is *faithful* to the original article. See our [paper in NAACL'21](https://www.seas.upenn.edu/~sihaoc/static/pdf/CZSR21.pdf) for details.
## Usage
Concatenate a summary and a source document as input (note that the summary needs to be the **first** sentence).
Here's an example usage (with PyTorch)
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CogComp/bart-faithful-summary-detector")
model = AutoModelForSequenceClassification.from_pretrained("CogComp/bart-faithful-summary-detector")
article = "Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
bad_summary = "Ban Ki-moon was elected for a second term in 2007."
good_summary = "Ban Ki-moon was elected for a second term in 2011."
bad_pair = tokenizer(text=bad_summary, text_pair=article, return_tensors='pt')
good_pair = tokenizer(text=good_summary, text_pair=article, return_tensors='pt')
bad_score = model(**bad_pair)
good_score = model(**good_pair)
print(good_score[0][:, 1] > bad_score[0][:, 1]) # True, label mapping: "0" -> "Hallucinated" "1" -> "Faithful"
```
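The example above compares raw logits directly; to turn a logit pair into a probability of faithfulness, a softmax can be applied over the two classes. A sketch with hypothetical logits (index 1 is "Faithful", per the label mapping noted in the example):

```python
import torch

# Hypothetical logits with shape (batch, 2): [hallucinated, faithful].
logits = torch.tensor([[0.3, 2.1]])
probs = torch.softmax(logits, dim=-1)
faithful_prob = probs[:, 1].item()  # probability mass on the "Faithful" class
```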
### BibTeX entry and citation info
```bibtex
@inproceedings{CZSR21,
author = {Sihao Chen and Fan Zhang and Kazoo Sone and Dan Roth},
title = {{Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection}},
booktitle = {NAACL},
year = {2021}
}
``` |
125 | CouchCat/ma_mlc_v7_distil | [
"delivery",
"return",
"product",
"monetary"
] | ---
language: en
license: mit
tags:
- multi-label
widget:
- text: "I would like to return these pants and shoes"
---
### Description
A Multi-label text classification model trained on a customer feedback data using DistilBert.
Possible labels are:
- Delivery (delivery status, time of arrival, etc.)
- Return (return confirmation, return label requests, etc.)
- Product (quality, complaint, etc.)
- Monetary (pending transactions, refund, etc.)
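Since the labels are not mutually exclusive, per-label scores are typically obtained with an element-wise sigmoid over the logits rather than a softmax; a sketch with hypothetical logits (the 0.5 thresholding scheme is an assumption, not documented in this card):

```python
import torch

# Hypothetical logits for the four labels: [delivery, return, product, monetary].
logits = torch.tensor([[-2.1, 3.0, 0.2, -1.5]])
probs = torch.sigmoid(logits)            # one independent probability per label
predicted = (probs > 0.5).squeeze(0)     # simple 0.5 threshold per label
```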
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_mlc_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_mlc_v7_distil")
``` |
126 | CouchCat/ma_sa_v7_distil | [
"negative",
"neutral",
"positive"
] | ---
language: en
license: mit
tags:
- sentiment-analysis
widget:
- text: "I am disappointed in the terrible quality of my dress"
---
### Description
A Sentiment Analysis model trained on customer feedback data using DistilBert.
Possible sentiments are:
* negative
* neutral
* positive
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_sa_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_sa_v7_distil")
``` |
127 | Crasher222/kaggle-comp-test | [
"0",
"1",
"2",
"3",
"4"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Crasher222/autonlp-data-kaggle-test
co2_eq_emissions: 60.744727079482495
---
# Model Finetuned from BERT-base
- Problem type: Multi-class Classification
- Model ID: 25805800
## Validation Metrics
- Loss: 0.4422711133956909
- Accuracy: 0.8615328555811976
- Macro F1: 0.8642434650461513
- Micro F1: 0.8615328555811976
- Weighted F1: 0.8617743626671308
- Macro Precision: 0.8649112225076049
- Micro Precision: 0.8615328555811976
- Weighted Precision: 0.8625407179375096
- Macro Recall: 0.8640777539828228
- Micro Recall: 0.8615328555811976
- Weighted Recall: 0.8615328555811976
## Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Crasher222/kaggle-comp-test")
tokenizer = AutoTokenizer.from_pretrained("Crasher222/kaggle-comp-test")
inputs = tokenizer("I am in love with you", return_tensors="pt")
outputs = model(**inputs)
``` |
128 | Crives/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9215538311282218
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.9215
- F1: 0.9216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
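These hyperparameters correspond roughly to the following `transformers.TrainingArguments` configuration (a sketch only; the original training script is not provided, the `output_dir` is illustrative, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

# Sketch mirroring the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```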
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7814 | 1.0 | 250 | 0.3105 | 0.907 | 0.9046 |
| 0.2401 | 2.0 | 500 | 0.2175 | 0.9215 | 0.9216 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
130 | DSI/human-directed-sentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
# Human-Directed Sentiment Analysis in Arabic
A supervised training procedure to classify human-directed sentiment in a text. We define human-directed sentiment as the polarity expressed by one user towards a second person involved with them in a discussion. |
131 | DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support | [
"not-applicable\n",
"ok\n",
"too-loose\n",
"too-strict\n"
] | ---
language:
- multilingual
- nl
- fr
- en
tags:
- Tweets
- Sentiment analysis
widget:
- text: "I really wish I could leave my house after midnight, this makes no sense!"
---
# Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT
[Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947)
This model can be used to determine if a tweet expresses support or not for a curfew. The model was trained on manually labeled tweets from Belgium in Dutch, French and English.
We categorized several months worth of these Tweets by topic (government COVID measure) and opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top).

Models used in this paper are on HuggingFace:
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
|
132 | DTAI-KULeuven/mbert-corona-tweets-belgium-topics | [
"closing-horeca",
"curfew",
"lockdown",
"masks",
"not-applicable",
"other-measure",
"quarantine",
"schools",
"testing",
"vaccine"
] | ---
language:
- multilingual
- nl
- fr
- en
tags:
- Dutch
- French
- English
- Tweets
- Topic classification
widget:
- text: "I really can't wait for this lockdown to be over and go back to waking up early."
---
# Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT
[Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947)
We categorized several months' worth of these Tweets by topic (government COVID measure) and opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top).

Models used in this paper are on HuggingFace:
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
|
133 | alexandrainst/da-binary-emotion-classification-base | [
"emotional",
"no emotion"
] | ---
language:
- da
license: cc-by-sa-4.0
widget:
- text: Der er et træ i haven.
---
# Danish BERT for emotion detection
The BERT Emotion model detects whether a Danish text is emotional or not.
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-emotion) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-binary-emotion-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-binary-emotion-classification-base")
```
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
|
134 | alexandrainst/da-emotion-classification-base | [
"Foragt/Modvilje",
"Forventning/Interrese",
"Frygt/Bekymret",
"Glæde/Sindsro",
"Overasket/Målløs",
"Sorg/trist",
"Tillid/Accept",
"Vrede/Irritation"
] | ---
language:
- da
license: cc-by-sa-4.0
widget:
- text: Jeg ejer en rød bil og det er en god bil.
---
# Danish BERT for emotion classification
The BERT Emotion model classifies a Danish text into one of the following classes:
* Glæde/Sindsro
* Tillid/Accept
* Forventning/Interrese
* Overasket/Målløs
* Vrede/Irritation
* Foragt/Modvilje
* Sorg/trist
* Frygt/Bekymret
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
This model should be used after detecting whether the text contains emotion or not, using the binary [BERT Emotion model](https://huggingface.co/alexandrainst/da-binary-emotion-classification-base).
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-emotion) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-emotion-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-emotion-classification-base")
```
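As noted above, this model is meant to run after the binary emotion gate. Here is a minimal sketch of that two-stage logic, with stand-in callables in place of the two fine-tuned models (the real models would be loaded as shown in the snippets on the two model cards):

```python
# The eight emotion classes this model predicts.
EMOTIONS = [
    "Glæde/Sindsro", "Tillid/Accept", "Forventning/Interrese",
    "Overasket/Målløs", "Vrede/Irritation", "Foragt/Modvilje",
    "Sorg/trist", "Frygt/Bekymret",
]

def classify_emotion(text, is_emotional, emotion_label):
    """Two-stage pipeline: gate with the binary detector, then classify.

    is_emotional:  callable(text) -> bool, stand-in for the binary model.
    emotion_label: callable(text) -> str, stand-in for this eight-class model.
    """
    if not is_emotional(text):
        return "no emotion"
    return emotion_label(text)
```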
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
|
135 | alexandrainst/da-hatespeech-classification-base | [
"Personangreb",
"Spam & indhold",
"Sprogbrug",
"Særlig opmærksomhed"
] | ---
language:
- da
license: cc-by-sa-4.0
widget:
- text: "Senile gamle idiot"
---
# Danish BERT for hate speech classification
The BERT HateSpeech model classifies offensive Danish text into 4 categories:
* `Særlig opmærksomhed` (special attention, e.g. threat)
* `Personangreb` (personal attack)
* `Sprogbrug` (offensive language)
* `Spam & indhold` (spam)
This model is intended to be used after the [BERT HateSpeech detection model](https://huggingface.co/alexandrainst/da-hatespeech-detection-base).
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-hatespeech-classification-base")
```
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
|
136 | alexandrainst/da-hatespeech-detection-base | [
"not offensive",
"offensive"
] | ---
language:
- da
license: cc-by-sa-4.0
widget:
- text: "Senile gamle idiot"
---
# Danish BERT for hate speech (offensive language) detection
The BERT HateSpeech model detects whether a Danish text is offensive or not.
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-detection-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-hatespeech-detection-base")
```
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
|
137 | alexandrainst/da-sentiment-base | [
"positive",
"neutral",
"negative"
] |
---
language:
- da
license: cc-by-sa-4.0
widget:
- text: Det er super godt
---
# Model Card for Danish BERT
Danish BERT Tone for sentiment polarity detection
# Model Details
## Model Description
The BERT Tone model detects sentiment polarity (positive, neutral or negative) in Danish texts. It has been finetuned on the pretrained Danish BERT model by BotXO.
- **Developed by:** DaNLP
- **Shared by [Optional]:** Hugging Face
- **Model type:** Text Classification
- **Language(s) (NLP):** Danish (da)
- **License:** cc-by-sa-4.0
- **Related Models:** More information needed
- **Parent Model:** BERT
- **Resources for more information:**
- [GitHub Repo](https://github.com/certainlyio/nordic_bert)
- [Associated Documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-tone)
# Uses
## Direct Use
This model can be used for text classification.
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The data used for training come from the [Twitter Sentiment](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#twitsent) and [EuroParl sentiment 2](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#europarl-sentiment2) datasets.
## Training Procedure
### Preprocessing
It has been finetuned on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO.
### Speeds, Sizes, Times
More information needed.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed.
### Factors
### Metrics
F1
## Results
More information needed.
# Model Examination
More information needed.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed.
- **Hours used:** More information needed.
- **Cloud Provider:** More information needed.
- **Compute Region:** More information needed.
- **Carbon Emitted:** More information needed.
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed.
## Compute Infrastructure
More information needed.
### Hardware
More information needed.
### Software
More information needed.
# Citation
**BibTeX:**
More information needed.
**APA:**
More information needed.
# Glossary [optional]
More information needed.
# More Information [optional]
More information needed.
# Model Card Authors [optional]
DaNLP in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-sentiment-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-sentiment-base")
```
</details>
|
138 | alexandrainst/da-subjectivivity-classification-base | [
"objective",
"subjective"
] | ---
language:
- da
license: cc-by-sa-4.0
datasets:
- DDSC/twitter-sent
- DDSC/europarl
widget:
- text: Jeg tror alligvel, det bliver godt
---
# Danish BERT Tone for the detection of subjectivity/objectivity
The BERT Tone model detects whether a text (in Danish) is subjective or objective.
The model is based on the finetuning of the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-tone) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-subjectivivity-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-subjectivivity-classification-base")
```
## Training data
The data used for training come from the [Twitter Sentiment](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#twitsent) and [EuroParl sentiment 2](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#europarl-sentiment2) datasets. |
139 | alexandrainst/da-hatespeech-detection-small | [
"not offensive",
"offensive"
] | ---
language:
- da
license: cc-by-4.0
widget:
- text: "Senile gamle idiot"
---
# Danish ELECTRA for hate speech (offensive language) detection
The ELECTRA Offensive model detects whether a Danish text is offensive or not.
It is based on the pretrained [Danish Ælæctra](https://huggingface.co/Maltehb/aelaectra-danish-electra-small-cased) model.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#electra) for more details.
Here is how to use the model:
```python
from transformers import ElectraTokenizer, ElectraForSequenceClassification
model = ElectraForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-detection-small")
tokenizer = ElectraTokenizer.from_pretrained("alexandrainst/da-hatespeech-detection-small")
```
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
|
140 | alexandrainst/da-ned-base | [
"mentioned",
"not mentioned"
] |
---
language:
- da
license: cc-by-sa-4.0
---
# XLM-Roberta fine-tuned for Named Entity Disambiguation
Given a sentence and a knowledge graph context, the model detects whether a specific entity (represented by the knowledge graph context) is mentioned in the sentence (binary classification).
The base language model used is the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base).
Here is how to use the model:
```python
from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification
model = XLMRobertaForSequenceClassification.from_pretrained("alexandrainst/da-ned-base")
tokenizer = XLMRobertaTokenizer.from_pretrained("alexandrainst/da-ned-base")
```
The tokenizer takes two strings as input: the sentence and the knowledge graph (KG) context.
Here is an example:
```python
sentence = "Karen Blixen vendte tilbage til Danmark, hvor hun boede resten af sit liv på Rungstedlund, som hun arvede efter sin mor i 1939"
kg_context = "udmærkelser modtaget Kritikerprisen udmærkelser modtaget Tagea Brandts Rejselegat udmærkelser modtaget Ingenio et arti udmærkelser modtaget Holbergmedaljen udmærkelser modtaget De Gyldne Laurbær mor Ingeborg Dinesen ægtefælle Bror von Blixen-Finecke køn kvinde Commons-kategori Karen Blixen LCAuth no95003722 VIAF 90663542 VIAF 121643918 GND-identifikator 118637878 ISNI 0000 0001 2096 6265 ISNI 0000 0003 6863 4408 ISNI 0000 0001 1891 0457 fødested Rungstedlund fødested Rungsted dødssted Rungstedlund dødssted København statsborgerskab Danmark NDL-nummer 00433530 dødsdato +1962-09-07T00:00:00Z dødsdato +1962-01-01T00:00:00Z fødselsdato +1885-04-17T00:00:00Z fødselsdato +1885-01-01T00:00:00Z AUT NKC jn20000600905 AUT NKC jo2015880827 AUT NKC xx0196181 emnets hovedkategori Kategori:Karen Blixen tilfælde af menneske billede Karen Blixen cropped from larger original.jpg IMDb-identifikationsnummer nm0227598 Freebase-ID /m/04ymd8w BNF 118857710 beskæftigelse skribent beskæftigelse selvbiograf beskæftigelse novelleforfatter ..."
```
A KG context, for a specific entity, can be generated from its Wikidata page.
In the previous example, the KG context is a string representation of the Wikidata page of [Karen Blixen (QID=Q182804)](https://www.wikidata.org/wiki/Q182804).
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/ned.html#xlmr) for more details about how to generate a KG context.
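As a rough illustration of the KG-context format (the exact serialization rules are in the DaNLP documentation linked above), the string can be thought of as flattened property-value pairs:

```python
def build_kg_context(triples):
    """Flatten (property label, value label) pairs into one space-separated
    string, mirroring the format of the Karen Blixen example above.
    Illustrative only; DaNLP's own generator may differ in details."""
    return " ".join(f"{prop} {value}" for prop, value in triples)

ctx = build_kg_context([
    ("mor", "Ingeborg Dinesen"),
    ("fødested", "Rungstedlund"),
    ("statsborgerskab", "Danmark"),
])
# ctx == "mor Ingeborg Dinesen fødested Rungstedlund statsborgerskab Danmark"
```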
## Training Data
The model has been trained on the [DaNED](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#daned) and [DaWikiNED](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dawikined) datasets. |
141 | DanL/scientific-challenges-and-directions | [
"Challenge",
"Direction"
] | ---
tags:
- generated_from_trainer
- text-classification
language:
- en
datasets:
- DanL/scientific-challenges-and-directions-dataset
widget:
- text: "severe atypical cases of pneumonia emerged and quickly spread worldwide."
example_title: "challenge"
- text: "we speculate that studying IL-6 will be beneficial."
example_title: "direction"
- text: "in future studies, both PRRs should be tested as the cause for multiple deaths."
example_title: "both"
- text: "IbMADS1-transformed potatoes exhibited tuber morphogenesis in the fibrous roots."
example_title: "neither"
metrics:
- precision
- recall
- f1
model-index:
- name: scientific-challenges-and-directions
results: []
---
# scientific-challenges-and-directions
We present a novel resource to help scientists and medical professionals discover challenges and potential directions across scientific literature, focusing on a broad corpus pertaining to the COVID-19 pandemic and related historical research. At a high level, the _challenges_ and _directions_ are defined as follows:
* **Challenge**: A sentence mentioning a problem, difficulty, flaw, limitation, failure, lack of clarity, or knowledge gap.
* **Research direction**: A sentence mentioning suggestions or needs for further research, hypotheses, speculations, indications or hints that an issue is worthy of exploration.
* The model here is described in our paper: [A Search Engine for Discovery of Scientific Challenges and Directions](https://arxiv.org/abs/2108.13751) (though we have upgraded the infrastructure since the paper was released, so there are slight differences in the results).
* Our dataset can be found [here](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset).
* Please cite our paper if you use our datasets or models in your project. See the [BibTeX](#citation).
* Feel free to [email us](#contact-us).
* Also, check out [our search engine](https://challenges.apps.allenai.org/), as an example application.
## Model description
This model is a fine-tuned version of [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [scientific-challenges-and-directions-dataset](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset), designed for multi-label text classification.
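Because the model is multi-label, each of the two outputs is decided independently rather than via a softmax over classes. A sketch of the thresholding step (the 0.5 cutoff is an assumption for illustration, not a value from the paper):

```python
import math

LABELS = ["Challenge", "Direction"]

def predict_labels(logits, threshold=0.5):
    """Map the two raw logits to independent sigmoid probabilities and keep
    every label whose probability clears the threshold."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [label for label, p in zip(LABELS, probs) if p >= threshold]
```

A sentence can therefore receive both labels, one, or neither.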
## Training and evaluation data
The scientific-challenges-and-directions model is trained on a dataset that is a collection of 2894 sentences and their surrounding contexts, from 1786 full-text papers in the CORD-19 corpus, labeled for classification of challenges and directions by expert annotators with biomedical and bioNLP backgrounds. For full details on the train/test split of the data, see Section 3.1 in our [paper](https://arxiv.org/abs/2108.13751).
## Example notebook
We include an example notebook that uses the model for inference in our [repo](https://github.com/Dan-La/scientific-challenges-and-directions). See `Inference_Notebook.ipynb`.
A training notebook is also included.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning rate: 2e-05
- train batch size: 8
- eval batch size: 4
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr scheduler type: linear
- lr scheduler warmup steps: 500
- num epochs: 30
### Training results
The model achieves the following results on the test set:
- Precision Challenge: 0.768719
- Recall Challenge: 0.780405
- F1 Challenge: 0.774518
- Precision Direction: 0.758112
- Recall Direction: 0.774096
- F1 Direction: 0.766021
- Precision (micro avg. on both labels): 0.764894
- Recall (micro avg. on both labels): 0.778139
- F1 (micro avg. on both labels): 0.771459
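The micro-averaged numbers pool the per-label counts before computing the ratios. For reference, the computation looks like this (the counts below are illustrative, not the paper's):

```python
def micro_prf(counts):
    """counts: list of (tp, fp, fn) tuples, one per label.
    Returns micro-averaged (precision, recall, F1): counts are summed
    across labels first, then the ratios are computed once."""
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```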
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
## Citation
If using our dataset and models, please cite:
```
@misc{lahav2021search,
title={A Search Engine for Discovery of Scientific Challenges and Directions},
author={Dan Lahav and Jon Saad Falcon and Bailey Kuehl and Sophie Johnson and Sravanthi Parasa and Noam Shomron and Duen Horng Chau and Diyi Yang and Eric Horvitz and Daniel S. Weld and Tom Hope},
year={2021},
eprint={2108.13751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact us
Please don't hesitate to reach out.
**Email:** `lahav@mail.tau.ac.il`,`tomh@allenai.org`.
|
142 | Darkrider/covidbert_medmarco | [
"LABEL_0"
] | Fine-tuned CovidBERT on Med-Marco Dataset for passage ranking
# CovidBERT-MedNLI
This is the model **CovidBERT** trained by DeepSet on AllenAI's [CORD19 Dataset](https://pages.semanticscholar.org/coronavirus-research) of scientific articles about coronaviruses.
The model uses the original BERT wordpiece vocabulary and was subsequently fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [1] using the **average pooling strategy** and a **softmax loss**.
It was further fine-tuned on the Med-MARCO dataset. MacAvaney et al., in their [paper](https://arxiv.org/abs/2010.05987) titled “SLEDGE-Z: A Zero-Shot Baseline for COVID-19 Literature Search”, used MedSyn, a lexicon of layperson and expert terminology for various medical conditions, to filter for medical questions. One could also use UMLS ontologies instead, but the advantage of MedSyn is that its terms reflect general conversational language rather than terms drawn from scientific literature.
Parameter details for the original training on CORD-19 are available on [DeepSet's MLFlow](https://public-mlflow.deepset.ai/#/experiments/2/runs/ba27d00c30044ef6a33b1d307b4a6cba)
**Base model**: `deepset/covid_bert_base` from HuggingFace's `AutoModel`.
|
143 | Davlan/naija-twitter-sentiment-afriberta-large | [
"negative",
"neutral",
"positive"
] |
---
language:
- hau
- ibo
- pcm
- yor
- multilingual
---
# naija-twitter-sentiment-afriberta-large
## Model description
**naija-twitter-sentiment-afriberta-large** is the first multilingual Twitter **sentiment classification** model for four (4) Nigerian languages (Hausa, Igbo, Nigerian Pidgin, and Yorùbá), based on a fine-tuned castorini/afriberta_large model.
It achieves **state-of-the-art performance** on the Twitter sentiment classification task, trained on the [NaijaSenti corpus](https://github.com/hausanlp/NaijaSenti).
The model has been trained to classify tweets into 3 sentiment classes: negative, neutral, and positive.
Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of 4 Nigerian language datasets obtained from [NaijaSenti](https://github.com/hausanlp/NaijaSenti) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for Sentiment Classification.
```python
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
MODEL = "Davlan/naija-twitter-sentiment-afriberta-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
text = "I like you"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
id2label = {0:"positive", 1:"neutral", 2:"negative"}
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
#### Limitations and bias
This model is limited by its training dataset and domain, i.e. Twitter. It may not generalize well to all use cases in different domains.
## Training procedure
This model was trained on a single Nvidia RTX 2080 GPU with recommended hyperparameters from the [original NaijaSenti paper](https://arxiv.org/abs/2201.08277).
## Eval results on the test set (F-score), averaged over 5 runs
language|F1-score
-|-
hau |81.2
ibo |80.8
pcm |74.5
yor |80.4
### BibTeX entry and citation info
```
@inproceedings{Muhammad2022NaijaSentiAN,
title={NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis},
author={Shamsuddeen Hassan Muhammad and David Ifeoluwa Adelani and Sebastian Ruder and Ibrahim Said Ahmad and Idris Abdulmumin and Bello Shehu Bello and Monojit Choudhury and Chris C. Emezue and Saheed Salahudeen Abdullahi and Anuoluwapo Aremu and Alipio Jeorge and Pavel B. Brazdil},
year={2022}
}
```
|
144 | DeadBeast/emoBERTTamil | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tamilmixsentiment
metrics:
- accuracy
model_index:
- name: emoBERTTamil
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tamilmixsentiment
type: tamilmixsentiment
args: default
metric:
name: Accuracy
type: accuracy
value: 0.671
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emoBERTTamil
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the tamilmixsentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9666
- Accuracy: 0.671
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1128 | 1.0 | 250 | 1.0290 | 0.672 |
| 1.0226 | 2.0 | 500 | 1.0172 | 0.686 |
| 0.9137 | 3.0 | 750 | 0.9666 | 0.671 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
147 | DeepPavlov/roberta-large-winogrande | [
"False",
"True"
] | ---
language:
- en
datasets:
- winogrande
widget:
- text: "The roof of Rachel's home is old and falling apart, while Betty's is new. The home value of </s> Rachel is lower."
- text: "The wooden doors at my friends work are worse than the wooden desks at my work, because the </s> desks material is cheaper."
- text: "Postal Service were to reduce delivery frequency. </s> The postal service could deliver less frequently."
- text: "I put the cake away in the refrigerator. It has a lot of butter in it. </s> The cake has a lot of butter in it."
---
# RoBERTa Large model fine-tuned on Winogrande
This model was fine-tuned on the Winogrande dataset (XL size) in a sequence classification task format, meaning that the original pairs of sentences
with the corresponding options filled in were separated, shuffled, and classified independently of each other.
## Model description
## Intended use & limitations
### How to use
## Training data
[WinoGrande-XL](https://huggingface.co/datasets/winogrande) reformatted the following way:
1. Each sentence was split on "`_`" placeholder symbol.
2. Each option was concatenated with the second part of the split, thus transforming each example into two text segment pairs.
3. Text segment pairs corresponding to correct and incorrect options were marked with `True` and `False` labels accordingly.
4. Text segment pairs were shuffled thereafter.
For example,
```json
{
"answer": "2",
"option1": "plant",
"option2": "urn",
"sentence": "The plant took up too much room in the urn, because the _ was small."
}
```
becomes
```json
{
"sentence1": "The plant took up too much room in the urn, because the ",
"sentence2": "plant was small.",
"label": false
}
```
and
```json
{
"sentence1": "The plant took up too much room in the urn, because the ",
"sentence2": "urn was small.",
"label": true
}
```
These sentence pairs are then treated as independent examples.
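The reformatting steps above can be sketched in plain Python (a minimal illustration; the exact preprocessing code used for training is not published on this card):

```python
import random

def reformat(example, seed=0):
    """Turn one WinoGrande example into two independently labeled
    sentence pairs, following steps 1-4 above."""
    first, second = example["sentence"].split("_")       # 1. split on the placeholder
    pairs = []
    for i, key in enumerate(("option1", "option2"), start=1):
        pairs.append({
            "sentence1": first,
            "sentence2": example[key] + second,          # 2. option + second part
            "label": str(i) == example["answer"],        # 3. True iff correct option
        })
    random.Random(seed).shuffle(pairs)                   # 4. shuffle
    return pairs
```

Applied to the example above, this produces the two labeled pairs shown, in shuffled order.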
### BibTeX entry and citation info
```bibtex
@article{sakaguchi2019winogrande,
title={WinoGrande: An Adversarial Winograd Schema Challenge at Scale},
author={Sakaguchi, Keisuke and Bras, Ronan Le and Bhagavatula, Chandra and Choi, Yejin},
journal={arXiv preprint arXiv:1907.10641},
year={2019}
}
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
148 | DeepPavlov/xlm-roberta-large-en-ru-mnli | [
"CONTRADICTION",
"ENTAILMENT",
"NEUTRAL"
] | ---
language:
- en
- ru
datasets:
- glue
- mnli
model_index:
- name: mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
tags:
- xlm-roberta
- xlm-roberta-large
- xlm-roberta-large-en-ru
- xlm-roberta-large-en-ru-mnli
widget:
- text: "Люблю тебя. Ненавижу тебя"
- text: "I love you. I hate you"
---
# XLM-RoBERTa-Large-En-Ru-MNLI
xlm-roberta-large-en-ru fine-tuned on MNLI. |
149 | DemangeJeremy/4-sentiments-with-flaubert | [
"MIXED",
"NEGATIVE",
"OBJECTIVE",
"POSITIVE"
] | ---
language: fr
tags:
- sentiments
- text-classification
- flaubert
- french
- flaubert-large
---
# FlauBERT model for detecting 4 sentiments (mixed, negative, objective, positive)
### How to use it
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
loaded_tokenizer = AutoTokenizer.from_pretrained('flaubert/flaubert_large_cased')
loaded_model = AutoModelForSequenceClassification.from_pretrained("DemangeJeremy/4-sentiments-with-flaubert")
nlp = pipeline('sentiment-analysis', model=loaded_model, tokenizer=loaded_tokenizer)
print(nlp("Je suis plutôt confiant."))
```
```
[{'label': 'OBJECTIVE', 'score': 0.3320835530757904}]
```
## Model evaluation results
| Epoch | Validation Loss | Samples Per Second |
|:------:|:--------------:|:------------------:|
| 1 | 2.219246 | 49.476000 |
| 2 | 1.883753 | 47.259000 |
| 3 | 1.747969 | 44.957000 |
| 4 | 1.695606 | 43.872000 |
| 5 | 1.641470 | 45.726000 |
## Citation
For any use of this model, please use the following citation:
> Jérémy Demange, Four sentiments with FlauBERT, (2021), Hugging Face repository, <https://huggingface.co/DemangeJeremy/4-sentiments-with-flaubert>
|
153 | Elron/bleurt-base-128 | [
"LABEL_0"
] | ## BLEURT
Pytorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originates from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-base-128")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-base-128")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([0.3598, 0.0723])
```
|
154 | Elron/bleurt-base-512 | [
"LABEL_0"
] | ## BLEURT
Pytorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originates from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-base-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-base-512")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([1.0327, 0.2055])
```
|
155 | Elron/bleurt-large-128 | [
"LABEL_0"
] | ## BLEURT
PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The model conversion code originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing), mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-large-128")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-large-128")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
    scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([ 0.0020, -0.6647])
```
|
156 | Elron/bleurt-large-512 | [
"LABEL_0"
] | ## BLEURT
PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The model conversion code originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing), mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-large-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-large-512")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
    scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([0.9877, 0.0475])
```
|
157 | Elron/bleurt-tiny-128 | [
"LABEL_0"
] | ## BLEURT
PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The model conversion code originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing), mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-128")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-128")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
    scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores)
```
|
158 | Elron/bleurt-tiny-512 | [
"LABEL_0"
] | ---
tags:
- text-classification
- bert
---
# Model Card for bleurt-tiny-512
# Model Details
## Model Description
PyTorch version of the original BLEURT models from the ACL paper.
- **Developed by:** Elron Bandel, Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research
- **Shared by [Optional]:** Elron Bandel
- **Model type:** Text Classification
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** BERT
- **Resources for more information:**
- [GitHub Repo](https://github.com/google-research/bleurt/tree/master)
- [Associated Paper](https://aclanthology.org/2020.acl-main.704/)
- [Blog Post](https://ai.googleblog.com/2020/05/evaluating-natural-language-generation.html)
# Uses
## Direct Use
This model can be used for the task of Text Classification
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model authors note in the [associated paper](https://aclanthology.org/2020.acl-main.704.pdf):
> We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which include several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
The model authors note that the test sets for years 2018 and 2019 of the WMT Metrics Shared Task (to-English language pairs) are noisier.
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
# Citation
**BibTeX:**
```bibtex
@inproceedings{sellam2020bleurt,
title = {BLEURT: Learning Robust Metrics for Text Generation},
author = {Thibault Sellam and Dipanjan Das and Ankur P Parikh},
year = {2020},
booktitle = {Proceedings of ACL}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Elron Bandel in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-512")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
    scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([-0.9414, -0.5678])
```
See [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) for model conversion code.
</details>
|
159 | Emanuel/bertweet-emotion-base | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: bertweet-emotion-base
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.945
name: Accuracy
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- type: accuracy
value: 0.9285
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGJhMTM3YzAyMDg0YTA1MTY4ZjMyZGY1OThjYTI0ODZlOTFlMzAwZWFkNzc3MzQ4YjNiMzViMGIxYTY4M2Q1NiIsInZlcnNpb24iOjF9.1RDEvEoO3YooUsWgDUbuRoia0PBNo6dbGn9lFiXqfeCowHQMLpagMQpBHIoofCmlQA4ZHQbBtwY5lSCzJugzBQ
- type: precision
value: 0.8884219402987917
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjQ2YzhiZDg3ZTJlOGYzNTBlNjEzZTNhYjIyMjFiNWJiZjNjNjg0MTFjMDFjNmI4MzEyZThkMTg5YTNkMzNhZCIsInZlcnNpb24iOjF9.yjvC1cZQllxTpkW3e5bLBA5Wmk9o6xTwusDSPVOQsbapD-XZ5TG06dgG8OF7yxQWvYLEiIp5K0VxnGA645ngBw
- type: precision
value: 0.9285
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDE4MjcwYTgxZmM2Y2M5YzUxNmVjMWMxYjUxYzMxNWJlMGMzOGY2MWZkYTRlZTFkMWUwOTE3YjI4MmE5ZGQ3YiIsInZlcnNpb24iOjF9.SD7BSPVASL91UHNj4vJ226sPAUteEXGoEF2KWc1pKhdwUh0ZBFlnMBYbaNH6Fey0M-Cc6kqQHsYyMpBbgBG0Cw
- type: precision
value: 0.9294663182278102
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDAzMjE3M2FmMjEwMzE2ZDA4NGI3ZDI1ZDlkMjhlZmEzNTlmZWM4NjRlMDNjODIzMTE1N2JiMTE5OTA2N2EzYSIsInZlcnNpb24iOjF9.O7Y0CljPErSGKRacqPcDuzlJEOFo_cnQMqmXcW94JFeq_jWHXEqxHb8Jszi2LCQOlDmFf81Yn1gr7qNbef0lDQ
- type: recall
value: 0.8859392810987465
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjVkODBlZTVlZmNiYjMyNDU2MDRiYWY4M2Y3MDRhNGQ0OTFlNDBiOGIwNGUxNzczMGFjMjg1YzNhNWI4N2QzMiIsInZlcnNpb24iOjF9.qBdhvXbJXKpoCQpJadg5rLlvTgfl4kitQlelAeCLNLTUyq6lBEg8onL78j2ln7m-njgF6dC0M10n4riIbTseDA
- type: recall
value: 0.9285
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2FlYjdmOWNiODUyNmI0OWViYjc2NWNhOTVlMDkyYWMxZjIyMDJlMjZkY2I3Yjg1ZjBlOTQ3MWY4ZDI3MDEwZCIsInZlcnNpb24iOjF9.ZaZNohPIOgvh5NQe6s5PWNyxwtMlrGQxsGz_zeqKshF9btY69cNQxyg9jlfXqrdmI4XhmC8K_MIEObkbfgqCBw
- type: recall
value: 0.9285
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWQ2ODgzMjE2MGE2MmM4OGEyNWUxMWU5OGE3N2JmYTY0MWMzM2JkNjQ3ZDkzMWJkZmU5YWFlYTJhYzg3ODI5NCIsInZlcnNpb24iOjF9.ELxb_KXB0H-SaXOW97WUkTaNzAPH6itG0BpOtvcY-3J33Kr7Wi4eLEyX1fYjgY01LbkPmH4UN-rUQz2pXoRBCQ
- type: f1
value: 0.8863603878501328
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGYxOWRmYzVkYWE2YWRmMTY5ODFkNWU2MGYyZWZmZmIxOTQwN2E1MTJlZjFlMTAzNjNmMzM0OGM3MTAxNzNhYSIsInZlcnNpb24iOjF9.sgcxi41I9bPbli1HO0jS9tXEVIVwdmp2nw5_nG16wO-eF5R8m7uezIUbwf8SfwTDijsZPKU7n5GI1ugKKTXbCQ
- type: f1
value: 0.9285
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWU0MGE3ZjViMzAzMTk1MzhiYjA1OTM4ZDRmZDU5NmRjODE0NThiOWY1MDVjNmU2OTI1OTAzYzY0NjY0NzMwZCIsInZlcnNpb24iOjF9.-_1WgnpD_qr18pp89fkgP651yW5YZ8Vm9i0M4gH8m8uosqOlnft8i7ppsDD5sp689aDoNjqtczPi_pGTvH8iAw
- type: f1
value: 0.9284728367890772
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDMwZDUwYThkYWU2ZDBkYzRlZGQ2YjE2MGE2YjJjNWEyMDcwM2Y2YjY1NTE1ODNmZDgzNjdhZmI4ZjFhZTM1NCIsInZlcnNpb24iOjF9.HeNsdbp4LC3pY_ZXA55xccmAvzP3LZe6ohrSuUFBInMTyO8ZExnnk5ysiXv9AJp-O3GBamQe8LKv_mxyboErAQ
- type: loss
value: 0.1349370777606964
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2RmN2U3YjVjNjg0NzU5NmMwOTcxM2NlMjNhNzdjMzVkMTVhYTJhNDhkMWEyMmFhZjg1NDgzODhjN2FlNzA4NiIsInZlcnNpb24iOjF9.mxi_oEnLE4QwXvm3LsT2wqa1zp7Ovul2SGpNdZjDOa0v-OWz6BfDwhNZFgQQFuls56Mi-yf9LkBevy0aNSBvAw
---
# bertweet-emotion-base
This model is a fine-tuned version of [BERTweet](https://huggingface.co/vinai/bertweet-base). It achieves the following results on the evaluation set:
- Loss: 0.1172
- Accuracy: 0.945
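The model returns one logit per emotion class, and the predicted label is the argmax of a softmax over those logits. A minimal pure-Python sketch of that post-processing step (the logit values below are illustrative, not real model outputs):

```python
import math

# The six emotion labels this model predicts
LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def softmax(logits):
    # Subtract the max logit for numerical stability
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    # Pick the label with the highest probability
    probs = softmax(logits)
    return LABELS[max(range(len(probs)), key=probs.__getitem__)]

print(predict_label([-1.2, 3.4, 0.1, -0.5, -0.8, -2.0]))  # joy
```

In practice the logits come from `AutoModelForSequenceClassification` applied to a tokenized tweet; the mapping from class index to label shown here assumes the order listed above.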
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 80
- eval_batch_size: 80
- lr_scheduler_type: linear
- num_epochs: 6.0
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.10.3 |
160 | Emanuel/twitter-emotion-deberta-v3-base | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: twitter-emotion-deberta-v3-base
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.937
name: Accuracy
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- type: accuracy
value: 0.9255
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTlhZDRlN2VkOGQ0OTg3Nzg2OWJmOTAzMDYxZjk5NzE4YmMyNDIxM2FhOTgyMDI2ZTQ3ZjkyNGMwYjI4Nzc2ZiIsInZlcnNpb24iOjF9.GaEt0ZAvLf30YcCff1mZtjms1XD57bY-b00IVak3WGtZJsgVshwAP_Vla2pylTAQvZITz4WESqSlEpyu6Bn-CA
- type: precision
value: 0.8915483806374028
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTI4MTRlN2UyMDZhODM1NWIzNzdhZTUyZjNhYjdkMmZiODRjM2ViODMzOTU4MGE1NjQ4MjM1ZWUwODQzMzk3YyIsInZlcnNpb24iOjF9.qU0v868jMD8kFNrF8CqaP0jGxLzx_ExZTJ1BIBQKEHPSv59QyDLUt6ggjL09jUcmNj-gmps2XzFO16ape0O2Ag
- type: precision
value: 0.9255
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY3NzgyMmFkYmY1NzU0ODM4NWVjZmI0MTgwYWU3OGY1MzI5NWRhNWMyYjM3NTQ0MzEzOWZmYTk5NDYxMjI0ZSIsInZlcnNpb24iOjF9.fnBjSgKbcOk3UF3pfn1rPbr87adek5YDTeSCqgSaCI4zzEqP_PWPNAinS1eBispGxEVh5iolmbO3frSZZ-TzDw
- type: precision
value: 0.9286522707274408
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTE2ZmMxYzE2Mzc4OGQ2MzA1MDA3OGQ5Y2E4N2VkZDUwN2VjYmVhZGRlZTA2Nzg5NWJlZGNlMGYwNjc4YmNlYyIsInZlcnNpb24iOjF9.gRsf37CBTZpLIaAPNfdhli5cUV6K2Rbi8gHWHZydKTse9H9bkV6K_R6o_cMPhuXAyCCWx6SI-RbzInSC9K5lBw
- type: recall
value: 0.875946770128528
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTZkNjMwOTFkZmEyYmRjNTBiOGFjYmYzYmZiMmUyY2U0ZWNhNDNmY2M3ZWZhODRjZDQ2MmFhNzZmM2ZjZDQ5OSIsInZlcnNpb24iOjF9.UTNojxmP-lR4wu13HPt7DAtgzFskdsR8IyohDDhA4sLj2_AQG7-FHdE7eE_SZ4H4FOtp-F1V-g6UoyDtFF0YCQ
- type: recall
value: 0.9255
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjczZjBlNDhhM2YwZDJiNGEwNmMwMTE3ZDQwY2FkMjY5MGMzNjI2NDMyMmNkNTg2ZGRmMWZmOTk2OTEwNGQ0ZCIsInZlcnNpb24iOjF9.DXAXqasIV3OiJGuUGSFMIDVSsM3ailYD5rHDj9bkoDJ0duVyRQdD5l_Uxs2ILUtMYvy66HG8q9hT3oaQpDDFAQ
- type: recall
value: 0.9255
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDZjNGRhNDhkOTY4NmU5ZWUwNTJkNTk3ZGUwZjQwMzYyZTQ3YTYxZTBjMzg3ZjY5YjUwZGM1ZmI4YzlhZmMwMiIsInZlcnNpb24iOjF9.0Jr2FqC3_4aCO7N_Cd-25rjzz2rtyI0w863DvQfVPJNPzkWrs8qaQ_3lcfcQaMbR9CiVfKYPsgWb7-dwrm-UDA
- type: f1
value: 0.8790048313120858
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGNmMzc1MjgxZjM4Njk5ODM2NzIzOWMwYTIyN2E2NWJhYzcwNzgzMTQ0NWZjOGJhZmFkZjg5ZmNkNzYyYzdjMSIsInZlcnNpb24iOjF9.M3qaWCQwpe1vNptl5r8M62VhNe9-0eXQBZ1gIGRaEWOx9aRoTTFAqz_pl3wlhER0dSAjZlUuKElbYCI_R0KQDw
- type: f1
value: 0.9255
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGQzNWNhOWFhZjNmYTllZTliYjRjNWVkMzgyNzE4OTcyZWIwOWY0ZTFkMjVjZDgwOTQyYWI1YzhkZjFmNWY3MiIsInZlcnNpb24iOjF9.zLzGH5b86fzDqgyM-P31QEgpVCVNXRXIxsUzWN0NinSARJDmGp0hYAKu80GwRRnCPdavIoluet1FjQaDvt6aDA
- type: f1
value: 0.92449885920049
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTQ2OTM0ZTU1MTQyNzQxNjVkNjY3ODdkYmJhOTE0ZTYxYzhiNzM3NGFhZGRiN2FiNzM5ZjFiNzczOGZhMDU1NCIsInZlcnNpb24iOjF9.33hcbfNttHRTdGFIgtD18ywdBnihqA3W2bJnwozAnpz6A1Fh9w-kHJ7WQ51XMK_MfHBNrMOO_k_x6fNS-Wm5Dg
- type: loss
value: 0.16804923117160797
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWYwMWY5MzFkYjM3YjZmNmE3MmFlYTI3OTQ1OWRhZTUzODM3MjYwNTgxY2IxMjQ5NmI0ZDk3NDExZjg5YjJjZiIsInZlcnNpb24iOjF9.bHYpW_rQcKjc0QsMe8yVgWo-toI-LxAZE307_8kUKxQwzzb4cvrjLR66ciel2dVSMsjt479vGpbbAXU_8vh6Dw
---
# twitter-emotion-deberta-v3-base
This model is a fine-tuned version of [DeBERTa-v3](https://huggingface.co/microsoft/deberta-v3-base). It achieves the following results on the evaluation set:
- Loss: 0.1474
- Accuracy: 0.937
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 80
- eval_batch_size: 80
- lr_scheduler_type: linear
- num_epochs: 6.0
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.10.3 |
161 | EnsarEmirali/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9268984054036417
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2131
- Accuracy: 0.9265
- F1: 0.9269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8031 | 1.0 | 250 | 0.2973 | 0.9125 | 0.9110 |
| 0.2418 | 2.0 | 500 | 0.2131 | 0.9265 | 0.9269 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
162 | FabioDataGeek/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9258450981645597
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2196
- Accuracy: 0.926
- F1: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8279 | 1.0 | 250 | 0.3208 | 0.9025 | 0.8979 |
| 0.2538 | 2.0 | 500 | 0.2196 | 0.926 | 0.9258 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
163 | Fan-s/reddit-tc-bert | [
"matched",
"unmatched"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-uncased-base
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-base
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a Reddit-dialogue dataset.
It can be used for text classification: given two sentences, it predicts whether they are related.
It achieves the following results on the evaluation set:
- Loss: 0.2297
- Accuracy: 0.9267
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 320
- eval_batch_size: 80
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
## Usage (HuggingFace Transformers)
You can use the model like this:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# label_list
label_list = ['matched', 'unmatched']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("Fan-s/reddit-tc-bert", use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained("Fan-s/reddit-tc-bert")
# Set the input
post = "don't make gravy with asbestos."
response = "i'd expect someone with a culinary background to know that. since we're talking about school dinner ladies, they need to learn this pronto."
# Predict whether the two sentences are matched
def predict(post, response, max_seq_length=128):
    with torch.no_grad():
        args = (post, response)
        inputs = tokenizer(*args, padding="max_length", max_length=max_seq_length, truncation=True, return_tensors="pt")
        output = model(**inputs)
        logits = output.logits
        item = torch.argmax(logits, dim=1).item()
        predict_label = label_list[item]
        return predict_label, logits
predict_label, logits = predict(post, response)
# Matched
print("predict_label:", predict_label)
``` |
164 | Fauzan/autonlp-judulberita-32517788 | [
"0.0",
"1.0"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Fauzan/autonlp-data-judulberita
co2_eq_emissions: 0.9413042739759596
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 32517788
- CO2 Emissions (in grams): 0.9413042739759596
## Validation Metrics
- Loss: 0.32112351059913635
- Accuracy: 0.8641304347826086
- Precision: 0.8055555555555556
- Recall: 0.8405797101449275
- AUC: 0.9493383742911153
- F1: 0.8226950354609929
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Fauzan/autonlp-judulberita-32517788
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Fauzan/autonlp-judulberita-32517788", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Fauzan/autonlp-judulberita-32517788", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
165 | Fengkai/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9385
- name: F1
type: f1
value: 0.9383492808338979
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1495
- Accuracy: 0.9385
- F1: 0.9383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1739 | 1.0 | 250 | 0.1827 | 0.931 | 0.9302 |
| 0.1176 | 2.0 | 500 | 0.1567 | 0.9325 | 0.9326 |
| 0.0994 | 3.0 | 750 | 0.1555 | 0.9385 | 0.9389 |
| 0.08 | 4.0 | 1000 | 0.1496 | 0.9445 | 0.9443 |
| 0.0654 | 5.0 | 1250 | 0.1495 | 0.9385 | 0.9383 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
168 | Giannipinelli/xlm-roberta-base-finetuned-marc-en | [
"good",
"great",
"ok",
"poor",
"terrible"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9161
- Mae: 0.4634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1217 | 1.0 | 235 | 0.9396 | 0.4878 |
| 0.9574 | 2.0 | 470 | 0.9161 | 0.4634 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
173 | Harshveer/autonlp-formality_scoring_2-32597818 | [
"target"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Harshveer/autonlp-data-formality_scoring_2
co2_eq_emissions: 8.655894631203154
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 32597818
- CO2 Emissions (in grams): 8.655894631203154
## Validation Metrics
- Loss: 0.5410276651382446
- MSE: 0.5410276651382446
- MAE: 0.5694561004638672
- R2: 0.6830431129198475
- RMSE: 0.735545814037323
- Explained Variance: 0.6834385395050049
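Metrics like these can be recomputed from raw predictions with the standard regression formulas; a small self-contained sketch (the data below is illustrative, not this model's actual outputs):

```python
import math

def regression_metrics(y_true, y_pred):
    # Standard regression metrics: MSE, MAE, RMSE, R^2
    n = len(y_true)
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(mse)
    mean_true = sum(y_true) / n
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    r2 = 1.0 - (mse * n) / ss_tot  # 1 - SS_res / SS_tot
    return {"mse": mse, "mae": mae, "rmse": rmse, "r2": r2}

# Illustrative gold formality scores vs. predictions
metrics = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
print(metrics)
```

For this model, `y_pred` would be the single regression output produced for each input sentence.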
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Harshveer/autonlp-formality_scoring_2-32597818
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Harshveer/autonlp-formality_scoring_2-32597818", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Harshveer/autonlp-formality_scoring_2-32597818", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
174 | Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two | [
"NORMAL",
"ABUSIVE"
] | ---
language: en
license: apache-2.0
datasets:
- hatexplain
---
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
## Model Details
**Model Description:**
The model classifies a text as Abusive (Hate Speech and Offensive) or Normal. It is trained on data from Gab and Twitter, with human rationales included as part of the training data to boost performance. The model also has a rationale-predictor head that can predict the rationales for a given abusive sentence.
- **Developed by:** Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/2012.10289) Accepted at AAAI 2021.
- [GitHub Repo with datasets and models](https://github.com/punyajoy/HateXplain)
## How to Get Started with the Model
**Details of usage**
Please use the **Model_Rational_Label** class inside [models.py](models.py) to load the models. The default prediction in this hosted inference API may be wrong due to the use of different class initialisations.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
### from models.py
from models import *
tokenizer = AutoTokenizer.from_pretrained("Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two")
model = Model_Rational_Label.from_pretrained("Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two")
inputs = tokenizer("He is a great guy", return_tensors="pt")
prediction_logits, _ = model(input_ids=inputs['input_ids'],attention_mask=inputs['attention_mask'])
```
## Uses
#### Direct Use
This model can be used for Text Classification
#### Downstream Use
[More information needed]
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
(and if you can generate an example of a biased prediction, also something like this):
Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For 
The model authors also note in their HateXplain paper that they
> *have not considered any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Also, in this work we have focused on the English language. It does not consider multilingual hate speech into account.*
#### Training Procedure
##### Preprocessing
The authors detail their preprocessing procedure in the [Github repository](https://github.com/hate-alert/HateXplain/tree/master/Preprocess)
## Evaluation
The model authors detail the hidden layer size and attention for the HateXplain fine-tuned models in the [associated paper](https://arxiv.org/pdf/2012.10289.pdf)
#### Results
In both their paper and the GitHub repository, the model authors provide illustrative outputs of BERT-HateXplain in comparison to BERT and other HateXplain fine-tuned models.
## Citation Information
```bibtex
@article{mathew2020hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2012.10289},
year={2020}
}
```
|
175 | Hate-speech-CNERG/bert-base-uncased-hatexplain | [
"hate speech",
"normal",
"offensive"
] | ---
language: en
license: apache-2.0
datasets:
- hatexplain
---
The model is used for classifying a text as **Hatespeech**, **Offensive**, or **Normal**. The model was trained on data from Gab and Twitter, and *human rationales* were included as part of the training data to boost performance.
The dataset and models are available here: https://github.com/punyajoy/HateXplain
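A minimal loading sketch, assuming the standard `transformers` sequence-classification API works for this checkpoint (the example sentence is arbitrary; label names come from the model's configuration):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hate-speech-CNERG/bert-base-uncased-hatexplain")
model = AutoModelForSequenceClassification.from_pretrained("Hate-speech-CNERG/bert-base-uncased-hatexplain")

inputs = tokenizer("You are a wonderful person", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Map the highest-scoring logit back to its label name
label = model.config.id2label[logits.argmax(dim=-1).item()]
print(label)
```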
**For more details about our paper**
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. "[HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection](https://arxiv.org/abs/2012.10289)". Accepted at AAAI 2021.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{mathew2020hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2012.10289},
year={2020}
}
~~~
|
176 | Hate-speech-CNERG/dehatebert-mono-arabic | [
"NON_HATE",
"HATE"
] | ---
language: ar
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Arabic language**. The mono in the name refers to the monolingual setting, where the model is trained using only Arabic-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.877609 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
177 | Hate-speech-CNERG/dehatebert-mono-english | [
"NON_HATE",
"HATE"
] | ---
language: en
license: apache-2.0
---
This model is used for detecting **hate speech** in the **English language**. The mono in the name refers to the monolingual setting, where the model is trained using only English-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.726030 for a learning rate of 2e-5. Training code can be found here https://github.com/punyajoy/DE-LIMIT
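A quick way to try the model is the `transformers` text-classification pipeline (a sketch; the input sentence is arbitrary, and label names follow the hosted model's configuration):

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/dehatebert-mono-english",
)
result = detector("I hope you have a lovely day")
print(result)  # a list of {"label": ..., "score": ...} dicts
```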
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
178 | Hate-speech-CNERG/dehatebert-mono-french | [
"NON_HATE",
"HATE"
] | ---
language: fr
license: apache-2.0
---
This model is used for detecting **hate speech** in the **French language**. The mono in the name refers to the monolingual setting, where the model is trained using only French-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.692094 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
179 | Hate-speech-CNERG/dehatebert-mono-german | [
"NON_HATE",
"HATE"
] | ---
language: de
license: apache-2.0
---
This model is used for detecting **hate speech** in the **German language**. The mono in the name refers to the monolingual setting, where the model is trained using only German-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.649794 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
180 | Hate-speech-CNERG/dehatebert-mono-indonesian | [
"NON_HATE",
"HATE"
] | ---
language: id
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Indonesian language**. The mono in the name refers to the monolingual setting, where the model is trained using only Indonesian-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.844494 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
181 | Hate-speech-CNERG/dehatebert-mono-italian | [
"NON_HATE",
"HATE"
] | ---
language: it
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Italian language**. The mono in the name refers to the monolingual setting, where the model is trained using only Italian-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.837288 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
182 | Hate-speech-CNERG/dehatebert-mono-polish | [
"NON_HATE",
"HATE"
] | ---
language: pl
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Polish language**. The mono in the name refers to the monolingual setting, where the model is trained using only Polish-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.723254 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|