Datasets: Commit 0f2f2d3 · verified · 0 Parent(s)
Duplicate from IbrahimAlAzhar/limitation-generation-dataset-bagels
Co-authored-by: Ibrahim Al Azhar <IbrahimAlAzhar@users.noreply.huggingface.co>
This view is limited to 50 files because it contains too many changes. See raw diff.
- .gitattributes +59 -0
- ACL_23_no_limitation/ACL23_10.json +17 -0
- ACL_23_no_limitation/ACL23_1018.json +5 -0
- ACL_23_no_limitation/ACL23_1068.json +5 -0
- ACL_23_no_limitation/ACL23_11.json +17 -0
- ACL_23_no_limitation/ACL23_1102.json +5 -0
- ACL_23_no_limitation/ACL23_1127.json +5 -0
- ACL_23_no_limitation/ACL23_1161.json +16 -0
- ACL_23_no_limitation/ACL23_1170.json +19 -0
- ACL_23_no_limitation/ACL23_1174.json +18 -0
- ACL_23_no_limitation/ACL23_1181.json +11 -0
- ACL_23_no_limitation/ACL23_1182.json +20 -0
- ACL_23_no_limitation/ACL23_1184.json +27 -0
- ACL_23_no_limitation/ACL23_1185.json +24 -0
- ACL_23_no_limitation/ACL23_1192.json +21 -0
- ACL_23_no_limitation/ACL23_1196.json +20 -0
- ACL_23_no_limitation/ACL23_1198.json +11 -0
- ACL_23_no_limitation/ACL23_1199.json +15 -0
- ACL_23_no_limitation/ACL23_1200.json +25 -0
- ACL_23_no_limitation/ACL23_1201.json +18 -0
- ACL_23_no_limitation/ACL23_1202.json +12 -0
- ACL_23_no_limitation/ACL23_1203.json +27 -0
- ACL_23_no_limitation/ACL23_1204.json +19 -0
- ACL_23_no_limitation/ACL23_1206.json +21 -0
- ACL_23_no_limitation/ACL23_1210.json +30 -0
- ACL_23_no_limitation/ACL23_1214.json +19 -0
- ACL_23_no_limitation/ACL23_1215.json +14 -0
- ACL_23_no_limitation/ACL23_1219.json +23 -0
- ACL_23_no_limitation/ACL23_1220.json +23 -0
- ACL_23_no_limitation/ACL23_1224.json +12 -0
- ACL_23_no_limitation/ACL23_1229.json +29 -0
- ACL_23_no_limitation/ACL23_123.json +5 -0
- ACL_23_no_limitation/ACL23_1231.json +24 -0
- ACL_23_no_limitation/ACL23_1235.json +16 -0
- ACL_23_no_limitation/ACL23_1239.json +12 -0
- ACL_23_no_limitation/ACL23_1241.json +18 -0
- ACL_23_no_limitation/ACL23_1245.json +24 -0
- ACL_23_no_limitation/ACL23_1248.json +26 -0
- ACL_23_no_limitation/ACL23_1249.json +16 -0
- ACL_23_no_limitation/ACL23_1252.json +22 -0
- ACL_23_no_limitation/ACL23_1253.json +15 -0
- ACL_23_no_limitation/ACL23_1258.json +21 -0
- ACL_23_no_limitation/ACL23_1262.json +19 -0
- ACL_23_no_limitation/ACL23_1265.json +19 -0
- ACL_23_no_limitation/ACL23_1270.json +24 -0
- ACL_23_no_limitation/ACL23_1278.json +14 -0
- ACL_23_no_limitation/ACL23_1281.json +35 -0
- ACL_23_no_limitation/ACL23_1285.json +15 -0
- ACL_23_no_limitation/ACL23_1288.json +19 -0
- ACL_23_no_limitation/ACL23_1292.json +13 -0
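Each file in the listing above is a standalone JSON record for one ACL 2023 paper. As a quick orientation before the per-file diffs, here is a minimal sketch of fetching and reading one such record with the standard `huggingface_hub` API. The repo id is taken from the duplication source named in the commit message; if you are working with the duplicated repository, substitute its own id.

```python
# Sketch: download one record added in this commit and inspect its fields.
# Assumes the file layout shown in the diffs below; repo_id is the
# duplication source, not necessarily the repo this commit lives in.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="IbrahimAlAzhar/limitation-generation-dataset-bagels",
    repo_type="dataset",
    filename="ACL_23_no_limitation/ACL23_10.json",
)
with open(path, encoding="utf-8") as f:
    record = json.load(f)

print(record["File Number"])         # "10"
print(record["Title"])               # paper title
print(record["abstractText"][:120])  # start of the abstract
```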
.gitattributes
ADDED
@@ -0,0 +1,59 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
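These are the stock Hugging Face LFS rules: binary and media formats are routed through Git LFS, while plain-text files, including every `ACL23_*.json` added in this commit, are stored as regular git blobs. A rough sanity check of that routing, as a sketch only (authoritative matching is done by git itself, e.g. via `git check-attr`; `fnmatch` only approximates gitattributes glob semantics):

```python
# Approximate check of which paths the patterns above would send to LFS.
from fnmatch import fnmatch

lfs_patterns = ["*.7z", "*.parquet", "*.safetensors", "*.png", "*.mp4"]  # subset of the list above

def is_lfs_tracked(path: str) -> bool:
    return any(fnmatch(path, pat) for pat in lfs_patterns)

print(is_lfs_tracked("model.safetensors"))                   # True
print(is_lfs_tracked("ACL_23_no_limitation/ACL23_10.json"))  # False: JSON stays a plain blob
```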
ACL_23_no_limitation/ACL23_10.json
ADDED
@@ -0,0 +1,17 @@
{
"File Number": "10",
"Title": "Domain-specific transformer models for query translation",
"abstractText": "Due to the democratization of e-commerce, many product companies are listing their goods for online shopping. For periodic buying within a domain such as Grocery, consumers are generally inclined to buy certain brands of products. Due to a large non-English speaking population in India, we observe a significant percentage of code-mix Hinglish search queries e.g., sasta atta. An intuitive approach to dealing with code-mix queries is to train an encoder-decoder model to translate the query to English to perform the search. However, the problem becomes non-trivial when the brand names themselves have Hinglish names and possibly have a literal English translation. In such queries, only the context (non-brand name) Hinglish words needs to be translated. In this paper, we propose a simple yet effective modification to the transformer training to preserve/correct Grocery brand names in the output while selectively translating the context words. To achieve this, we use an additional dataset of popular Grocery brand names. Brand names are added as tokens to the model vocabulary, and the token embeddings are randomly initialized. Further, we introduce a Brand loss in training the translation model. Brand loss is a cross entropy loss computed using a denoising auto-encoder objective with brand name data. We warmstart the training from a public pre-trained checkpoint (such as BART/T5) and further adapt it for query translation using the domain data. The proposed model is generic and can be used with English as well as code-mix Hinglish queries alleviating the need for language detection. To reduce the latency of the model for the production deployment, we use knowledge distillation and quantization. Experimental evaluation indicates that the proposed approach improves translation results by preserving/correcting English/Hinglish brand names. After positive results with A/B testing, the model is currently deployed in production.",
"1 Introduction": "Due to the democratization of e-commerce, online shopping has evolved in recent times, where most customers choose to shop online. As an effect, the majority of product companies are keen on making their products available for online shopping. When it comes to domains such as Grocery, where users have to shop periodically, they typically have a preference for buying products of certain brands. Hence, for Grocery, it was observed that a significant portion of search queries contain brand names. Due to a large non-Englishspeaking population in India, we observe a significant percentage of code-mix Hinglish search queries. A Hinglish query is where one or more Hindi words are written in English, e.g., sasta atta. Since there are no standard spellings, we observe a large variation in the Hinglish words. We also observe many queries where brand names are misspelled.\nAn intuitive approach to deal with code-mix queries is to train an encoder-decoder model to translate the query to English and use an English search API to retrieve the products (Kulkarni et al., 2022). However, the problem becomes more challenging when the brand names themselves are Hinglish words and possibly have a valid English translation. We observe that in the Grocery domain, many brand names have Hinglish names, e.g. aashirvaad, gowardhan, veer, navratna etc. In such queries, only the context (non-brand name) Hinglish words need to be translated, and brand names (though Hinglish) must not be altered in the translation. E.g. for the query,\n89\n’sasta dabur lal tel’, a literal translation would be ’cheap dabur red oil’. However, the expected translation is ’cheap dabur lal oil’ since ’dabur lal’ is a brand name. Although most of the words in the query are Hinglish, only the first and last words need to be translated. If a brand name gets altered during the translation, it will lead to non-ideal search results. In some cases, the query does not need a translation even though it contains a Hinglish brand name, e.g., veer brand oil. If an English/Hinglish brand name is misspelled, it needs to be corrected in the translation. In general, the seq2seq model should be able to handle the following scenarios.\n• the query has only English words with no spell errors: the model should output the query as it is\n• the query has only English words with spell errors in either brand names or context words: the model should only correct the spell errors\n• the query contains Hinglish words without brand names: the model should translate all Hinglish words to English\n• the query contains Hinglish words with brand names: the model should selectively translate the Hinglish words without altering brand names. It should correct the brand names if it is misspelled.\nTo ensure such behavior, one would need large manually labeled data inclusive of many brand names. In this paper, we propose a simple yet effective modification to the transformer training to preserve/correct brand names in the output while selectively translating the context words. To achieve this, we use an additional dataset of high-demand Grocery brand names provided by the product team. First, to output brand names as a whole, we add them as tokens to the model vocabulary and randomly initialize the corresponding token embeddings. Further, we introduce a brand loss for training the translation model. Brand loss is a cross entropy loss computed using a denoising auto-encoder objective with brand name data. 
We warm-start the training from a generic pre-trained checkpoint (such\nas BART/T5) and further adapt it for query translation using the domain data. Results indicate that introducing brand loss significantly improves accuracy by preserving/correcting brand names in the translation. We also verify that introducing brand information as the loss is more effective than introducing it as the training data. The model is generic and can be used with English as well as code-mix Hinglish queries, alleviating the need for language detection. Further, to reduce the latency of the model for the production use-case, we use knowledge distillation and quantization. Using a large model as the teacher, we obtain pseudo-labels for a large set of unlabeled queries. We then train a small student opennmt (Klein et al., 2017) model on this dataset. We are able to achieve more than 28x reduction in the latency with a slight drop in accuracy. Experimental results demonstrate the efficacy of the proposed approach.",
"2 Related works": "Transformers (Vaswani et al., 2017) is the current state-of-the-art model for translation. Large-scale self-supervised pre-training of encoder-decoder models followed by domainspecific fine-tuning can significantly improve the translation quality with a limited labeled set (Lewis et al., 2019) (Raffel et al., 2020).\nSearch query translation is essential for Cross-Lingual Information Retrieval (CLIR). Bhattacharya et al. (Bhattacharya et al., 2016) use word vector emebedding and clustering to find groups of words representing the same concept from different languages. These multilingual word clusters are then used to perform query translation for CLIR between English, Hindi and Bengali. Kulkarni et al. (Kulkarni and Garera, 2022) proposes an approach to perform vernacular query translation without using any parallel corpus. Authors only utilize unlabeled query corpus from two languages, a pre-trained multilingual translation model, and train it with cross-language training to translate vernacular search queries to English. For code mix query translation, multilingual and English pre-trained encoder-decoder models have been explored (Jawahar et al., 2021) (Kulkarni et al., 2022). Kumar et al. (Kumar et al., 2020) explored statistical and neural ma-\nchine translation models for generating natural language questions from a given keywordbased query.\nFew techniques have been explored to preserve some of the input tokens as it is in output. CopyNet (Gu et al., 2016) enables selective use of generate and copy mode. In the copy mode, an RNN-based model can choose sub-sequences from the input sequence to put them at appropriate places in the output sequence. While in generate mode, the model can generate new tokens. On similar lines, See et al. (See et al., 2017) proposed a hybrid pointer-generator network-based approach with an ability to copy words from input to the output while retaining the ability to produce novel words through the generator.\nIn contrast to these approaches, we enforce the model to copy brand names using an additional loss component computed on the brand name data. The model still has a default generate ability which helps in correcting misspelled brand names.",
"3 Proposed Approach": "In the following sections, we provide details of the dataset and training methods.",
"3.1 Dataset": "We use a manually tagged dataset for training the model. We have a total of ~116k manually tagged query set, which contains Hinglish as well as English queries. To make use of previously tagged queries, the dataset consists of queries from Grocery and other domains\nsuch as fashion, mobile, footwear, etc. From this, we use randomly chosen 5k samples as the validation set and ~111k for the training. We use a list of 2226 high-demand Grocery brand names to compute the brand loss. The list was provided by the product team. As the test dataset, we use 10715 manually tagged queries from the Grocery domain.",
"3.2 Training details": "For training the translation model, we make two modifications as follows. First, we add a list of high-demand brand names as tokens in the model vocabulary and randomly initialize the corresponding token embeddings. Brand names are converted to lowercase before adding to vocab. This ensures that when a brand name is outputted in the translation, it would be outputted as a single entity, avoiding incorrect brand name variations.\nWe introduce a brand-specific loss in the model training. The translation model is trained with a combination of three loss components as follows.\nL = lSupervised + lDataAug + λ lBrand (1)\nwhere λ indicates the weighting factor for the brand loss. lSupervised indicates the standard cross entropy loss with parallel corpus. lDataAug indicates the loss calculated with spell and auto-encoder data augmentations as described in section 3.3.\nFor calculating lBrand, we use cross-entropy loss with denoising autoencoder objective\nwith brand name data using simple CharDrop data augmentation. Since non-English speakers attempt to spell the words based on the phoneme sound of it, we noticed that typically the first and last character of the brand is spelled correctly while the spelling mistakes are present in the middle of the word. To emulate this, we randomly drop a character from 30-50% of the brand name words and use original brand names as the target. Following are some of the brand name training examples.\nlBrand is computed with the teacher forcing technique. We set λ to 1 for all experiments. We also experimented by increasing and decreasing the value of λ, however, it did not lead to any significant change in the accuracy.\nWe use a pre-trained BART-base model to warm-start the training and fine-tune it further on the manually tagged data. The model is fine-tuned using AdamW optimizer with a learning rate of 1e-5 and batch size of 16. The model is trained till the validation loss does not improve for three consecutive epochs. We use label smoothing (Vaswani et al., 2017) during the training, where we set the label smoothing parameter to 0.1 for all the experiments. We use beam search decoding during the inference, where the beam size is set to 3. The model has ~141M trainable parameters post adding the brand tokens.",
"3.3 Data Augmentations": "We experimented with Autoencoder and spell augmentation to compute data augmentation loss (lDataAug). For Autoencoder, we use target English text as the input and train the model to reconstruct it. Though simple, it has shown to be effective in query translation since it provides an advantage similar to a language model regularizer (Kulkarni et al., 2022). For the batch of labeled queries, we add spell augmentations to the source (Ma, 2019) and train the model with the same target. For each batch\nof queries, data augmentation is chosen randomly.",
"4 Results": "Table 3 shows the BLEU score comparison of different model settings on the test set. In the first experiment, we verify the effectiveness of additional brand loss during the training. We train the model with and without brand loss. From the BLEU score comparison, it can be seen that brand loss training provides good improvements in test accuracy. In table 1, we show the comparison of query translation results with and without brand loss. With the brand loss, the model corrects the brand names whenever it is entered wrongly (first 7 examples). It also preserves brand names better when it’s entered correctly (last 3 examples). Overall, the model provides translations better aligned with the ground truth.",
"4.1 Using brand names as data": "Intuitively, it’s possible to input the brand name information as the parallel corpus, where we can add CharDrop augmentation to the brand names, and the original brand name can be used as the target. Hence, we wanted to verify the effectiveness of introducing brand information through the loss compared to inputting it through the training data. We created additional training data from the brand names with CharDrop augmentations and appended it to the original training set. We use 50 augmentations for each brand name. Table 5 shows the BLEU score comparison result. We notice that adding brand info as a loss is more effective than adding it as training data. This could be because, with the brand as loss, the model is able to translate context words more effectively. Table 4 shows the query translation comparison result. Note that brand as loss is better at correcting misspelled brand names while providing better translations of context words.",
"4.2 Comparison with T5": "We compared the results of BART-base with T5-base and T5-small models under similar training settings, i.e., adding brand tokens to the vocab and training with brand loss. Table 6 shows the comparison result. We noticed that BART works significantly better as compared to T5. This could be because denoising training objectives such as brand loss and data augmentation are more aligned with the BART pre-training than T5. Hence, BART can provide good results with a limited labeled set, especially when brand token embeddings need to be learned from scratch.",
"4.3 Pre-training on large query": "Since the search model would be witnessing large traffic and a variety of queries, we pretrain BART-base model on a large query parallel corpus to make it suitable for production use case. We collected a large Hindi (Devanagari) unlabeled query corpus from the internal\ndatabase. Since our Hindi search model currently supports different verticals such as fashion, mobile, footwear, etc., we suspect only a small percentage of Grocery related queries in the dataset. The Hindi queries are detected using a simple script-based detection. If any of the characters in the query are from Devanagari unicode range, the query is termed Hindi. We then use an in-house Hindi to English query translation model to create a parallel corpus from the unlabeled set. Further, we use an in-house transliteration model to convert a Hindi query to a Hinglish query. This way, we obtained a ~38M Hinglish to English query parallel corpus for training. The model is trained using AdamW optimizer with a learning rate of 5e-6. We pre-trained the BART-base model on this large set and then finetuned on the manually tagged set in the same manner described in section 3.2. Table 7 shows the result of the experiment. Pre-training on the large set gives a significant boost to accuracy. To verify if brand loss based finetuning still complements the advantage provided by the pre-training, we finetuned the query pretrained model without the brand loss. It can be seen that training with brand loss boosts accuracy in addition to the pre-training.",
"5 Knowledge distillation for improved latency": "The search query translation models are userfacing and need to have low latency to support high throughput. Though the BART-base model with query pre-training and fine-tuning provided good accuracy on the test set, it was not sufficient for production deployment due to the latency constraints. We observed that the p95 latency of the BART-base model with PyTorch implementation was ~200 ms, which is not acceptable for the production use-case.\nTo reduce the latency of the model, we use knowledge distillation with open-nmt (Klein et al., 2017) framework. Open-nmt provides a Ctranslate wrapper for faster inference, making it a good choice for low latency use-cases. Our approach is to train a small open-nmt student model using Grocery BART-base model as the teacher model. Since the student model resides in another programming framework, we use a pseudo-labeling approach to transfer knowledge from the teacher to the student. To create a parallel corpus for open-nmt model training, we obtain translation labels on ~38M query set using the teacher model. We then train the open-nmt model on this large parallel corpus and the manually tagged set. We use a single layer open-nmt model with a vocab size of 18k and a hidden dimension of 384. The model has ~23M trainable parameters. For open-nmt model as well, we add the brand name tokens to the vocab. We use weight quantization during model inference. Table 8 shows the BLEU score comparison result with the open-nmt student model. The student model provides more than 28x speed up for the inference with just a 0.2 drop in the BLEU score. The reason a single layer student model could be providing comparable results to the teacher model can be two-fold. First, search queries rarely have grammar and hence may not a deeper network for translation. Second, the teacher through pseudo labeling is providing cleaner and consistent labels for the student to learn from.\nWe performed A/B testing of the open-nmt student model w.r.t. an earlier model which does not use brand loss. We observed 10 basis points (bps) improvement in search ClickThrough-Rate (CTR) and improved search con-\nversion. The model is currently deployed in production and serves a large volume of queries.",
"6 Conclusion": "In this paper, we proposed a simple yet effective approach for domain-specific query translation. For the grocery domain, it was noticed that a significant percentage of queries contained brand names due to user preferences and periodic buying. We also observed a significant percentage of code-mix Hinglish queries and queries with grammatical errors. Since some grocery brand names are themselves Hinglish words, we wanted a brand-aware query translation model. To better preserve brand names in translation, we added brand name tokens to the model vocab and introduced an additional brand loss in transformer training. The modification improved translation accuracy by depicting desired brand name preserving effect. To reduce the latency of the model for the production deployment, we used knowledge distillation with the open-nmt student. Using a large model as a teacher and with pseudo labeling, we trained a single layer open-nmt student model. We could obtain more than a 28x reduction in latency with a slight drop in accuracy. After positive results with A/B testing, the model was deployed in production."
}
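The training recipe in ACL23_10 above (Section 3.2) combines a standard translation loss with a brand-specific denoising loss, L = l_supervised + l_dataaug + λ·l_brand, where l_brand is computed on CharDrop-corrupted brand names. The sketch below reconstructs that brand-loss term only, under stated assumptions: `char_drop` is my approximation of the paper's augmentation (first and last characters kept intact), BART-base stands in for the warm-start checkpoint, and the brand list and query pair are illustrative; this is not the authors' released code.

```python
# Minimal sketch of the brand loss (eq. 1) from ACL23_10.
import random
import torch
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def char_drop(word: str, p: float = 0.4) -> str:
    """Drop one interior character with probability p, keeping the first
    and last characters intact (emulates phonetic misspellings)."""
    if len(word) > 2 and random.random() < p:
        i = random.randrange(1, len(word) - 1)
        return word[:i] + word[i + 1:]
    return word

def seq2seq_loss(src_texts, tgt_texts):
    batch = tokenizer(src_texts, return_tensors="pt", padding=True)
    labels = tokenizer(tgt_texts, return_tensors="pt", padding=True).input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    return model(**batch, labels=labels).loss

brands = ["dabur lal", "aashirvaad", "gowardhan"]  # illustrative brand list
noisy = [" ".join(char_drop(w) for w in b.split()) for b in brands]

l_supervised = seq2seq_loss(["sasta atta"], ["cheap atta"])  # parallel-corpus loss
l_brand = seq2seq_loss(noisy, brands)  # denoising auto-encoder objective on brands
loss = l_supervised + 1.0 * l_brand    # lambda = 1, as in the paper
loss.backward()
```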
ACL_23_no_limitation/ACL23_1018.json
ADDED
@@ -0,0 +1,5 @@
{
"File Number": "1018",
"Title": "Improving Syntactic Probing Correctness and Robustness with Control Tasks",
"abstractText": "Syntactic probing methods have been used to examine whether and how pre-trained language models (PLMs) encode syntactic relations. However, the probing methods are usually biased by the PLMs’ memorization of common word co-occurrences, even if they do not form syntactic relations. This paper presents a random-word-substitution and random-labelmatching control task to reduce these biases and improve the robustness of syntactic probing methods. Our control tasks are also shown to notably improve the consistency of probing results between different probing methods and make the methods more robust with respect to the text attributes of the probing instances. Our control tasks make syntactic probing methods better at reconstructing syntactic relations and more generalizable to unseen text domains. Our experiments show that our proposed control tasks are effective on different PLMs, probing methods, and syntactic relations."
}
ACL_23_no_limitation/ACL23_1068.json
ADDED
@@ -0,0 +1,5 @@
{
"File Number": "1068",
"Title": "",
"abstractText": ""
}
ACL_23_no_limitation/ACL23_11.json
ADDED
@@ -0,0 +1,17 @@
{
"File Number": "11",
"Title": "Label efficient semi-supervised conversational intent classification",
"abstractText": "To provide a convenient shopping experience and to answer user queries at scale, conversational platforms are essential for e-commerce. The user queries can be prepurchase questions, such as product specifications and delivery time related, or postpurchase queries, such as exchange and return. A chatbot should be able to understand and answer a variety of such queries to help users with relevant information. One of the important modules in the chatbot is automated intent identification, i.e., understanding the user’s intention from the query text. Due to non-English speaking users interacting with the chatbot, we often get a significant percentage of code mix queries and queries with grammatical errors, which makes the problem more challenging. This paper proposes a simple yet competent Semi-Supervised Learning (SSL) approach for label-efficient intent classification. We use a small labeled corpus and relatively larger unlabeled query data to train a transformer model. For training the model with labeled data, we explore supervised MixUp data augmentation. To train with unlabeled data, we explore label consistency with dropout noise. We experiment with different pre-trained transformer architectures, such as BERT and sentence-BERT. Experimental results demonstrate that the proposed approach significantly improves over the supervised baseline, even with a limited labeled set. A variant of the model is currently deployed in production.",
"1 Introduction": "An automated conversational chatbot is essential to provide a seamless shopping experience and answer product-related questions at scale. An effective chatbot can assist and answer pre-purchase queries such as product\nspecifications, offers, discounts, delivery time, and stock availability, as well as post-purchase queries such as exchange and return. Due to users from diverse backgrounds interacting with the chatbot and minimizing a human agent transfer, a chatbot should be able to understand and handle a variety of user queries.\nOne of the important ML components in the chatbot is automated intent identification, i.e., understanding the user’s intention from the query text. Post the correct intent identification, an appropriate dialog-flow can be initiated. An incorrect intent prediction negatively affects the dialog-flow and, hence the overall user experience. Further, due to nonEnglish speakers interacting with the chatbot, we observe a significant percentage of codemix Hinglish queries ( 30%) and queries with grammatical errors, making intent detection even more challenging. Training a supervised intent classification model under such a scenario would require a large amount of manually tagged data. However, due to internetscale operations, we have unlabeled query data available in a relatively large volume.\nThis paper proposes a simple yet competent Semi-Supervised Learning (SSL) approach for label-efficient intent classification. SSL has been proven effective in leveraging unlabeled data when only a small labeled set is available. Specifically, we train a transformer BERT model on a small labeled corpus along with a larger unlabeled query data. Starting with limited labeled queries, we explore supervised as well as unsupervised data augmentation techniques. For the supervised data augmentation, we explore MixUp (Zhang and Vaidya, 2021) and simple label preserving NLP augmentations (Ma, 2019). For training with unlabeled data, typically, SSL algorithms rely on an extra smoothness constraint which enforces the\n96\nmodel to make consistent predictions on an unlabeled sample and its slightly perturbed version. Moreover, it is observed that the type of noise/perturbation plays an important role and a trivial noise may not provide desired improvements (Xie et al., 2020). Recently, a simple noise such as dropout has shown promising results for contrastive learning (Gao et al., 2021). We explore label consistency loss with dropout noise to train the BERT model with unlabeled data. The model is trained with the linear combination of supervised and unsupervised loss components. One of the challenges with a limited labeled set is how to halt the training when the validation set is not available; otherwise, it may result in over-fitting. In our experiments, we perform the model updates till the training loss is converged. Interestingly, training with dropout label consistency loss is less prone to over-fitting even with no validation set. We also noticed that the choice of label consistency loss has a prominent effect on the accuracy. For warm starting the training, we experiment with pre-trained BERT and sentence-BERT architectures. Experimental results demonstrate that, over the supervised baseline, the intent classification accuracy can be boosted significantly with the proposed semi-supervised approach.",
"2 Related works": "SSL approaches have been extensively studied in the literature. Instead of providing an extensive list of references, we only cite a few relevant prior works in this section. An extensive survey can be found in (Yang et al., 2021).\nUnsupervised Data Augmentation (UDA) (Xie et al., 2020) has shown promising results for learning with unlabeled data along with a small labeled corpus. The idea is to enforce label consistency between two augmentations of the unlabeled sample. The authors also point out that the type of augmentation used significantly affects the accuracy of the model, and a trivial augmentation (such as adding Gaussian noise) may not lead to desired improvements. Recently, a contrastive learning approach that uses dropout noise has been shown to work well for self-supervised learning with textual data (Gao et al., 2021). Since dropout is inher-\nently present in pre-trained transformer models, this provides a simple yet efficient method for data augmentation. Interpolation Consistency Training (ICT) (Verma et al., 2022) is a computationally efficient approach to train the model with SSL. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. For classification problems, ICT moves the decision boundary to low density regions of the data distribution.\nFor the supervised classification, MixUp has been found to be an effective data augmentation technique (Jindal et al., 2020). MixUp is performed in the representation space for the text classification with transformers and is known to provide better regularization, and model calibration (Sun et al., 2020).",
"3 Proposed approach": "In this section, we describe details of the dataset, loss functions experimented with, and model training.",
"3.1 Dataset": "Our intent classification dataset consists of queries from the pre-defined set of 28 intents. The queries consist of pre-purchase as well as post-purchase user questions. For each intent, we have 250 manually labeled samples; hence, the train set comprises 7k labeled examples. As the test set, we use a manually tagged dataset of 7569 samples. Table 1 shows examples of the queries from the test set and corresponding ground truth intents. Note that the test set consists of code-mix Hinglish queries and queries with grammatical errors. For the unlabeled data, we use a query corpus of size ~925k obtained from the internal database. For all the queries (labeled and unlabeled), we convert them to lowercase and remove punctuation (if any). We do not apply any further pre-processing.",
"3.2 Loss functions experimented": "We experiment with the following loss functions and their linear combination to train the model.\n3.2.1 Supervised cross-entropy loss (ls) For a small set of labeled data, we use the standard supervised cross entropy loss for the\n2\ntraining. We use label smoothing while training where the smoothing parameter is set to 0.1. This loss function is included in all the experiments.\n3.2.2 Supervised Grammar loss (lsg)\nFor the batch of labeled data, we add grammar augmentations to the input queries, such as spell errors and word swaps, to create additional train data (Ma, 2019). We use cross entropy loss and label smoothing for this.\n3.2.3 Supervised MixUp loss (lsm)\nThe idea behind supervised MixUp is to create an additional labeled train set through linear interpolating of the features and corresponding one-hot labels. For the transformer models, MixUp is performed on the feature representations of the queries in the following manner.\nx̃ = λ xi + (1 − λ) xj ỹ = λ yi + (1 − λ) yj\n(1)\nHere, λ ∼ U(0, 1). xi and xj indicates the features from last hidden layer. We use cross entropy loss for this.\n3.2.4 Unsupervised Dropout loss (lud) We use dropout noise for enforcing prediction label consistency to train the transformer model on unlabeled data. We sample a batch of queries from the unlabeled query corpus and make two independent forward passes through the transformer to obtain two label predictions. The label consistency loss is then calculated to minimize the distance measure D between these predictions.\nlud = Eu∼U(x) D(pθ(y1|u), pθ(y2|u)) (2) Here, y1 and y2 indicate predicted labels for an unlabeled batch u. For D, we experimented with Cross Entropy (CE) and Mean-SquareError (MSE) loss. For text classification, UDA uses round-trip back-translation as the data augmentation (Xie et al., 2020). They keep one copy of the network weights fixed while updating another copy. For the dropout, label predictions are calculated with the current network parameters, and the same is updated during training.",
"3.3 Training details": "For the pre-trained BERT model, we use bertbase-uncased while for the pre-trained sentence-\n3\nBERT model, we use paraphrase-mpnet-v2. Both bert-base-uncased and paraphrase-mpnet-v2 are 12 layers models with ~109M trainable parameters. For the BERT model, we use a feature corresponding to the [CLS] token from the last hidden layer (without tanh activation) as the query representation. For the sentence-BERT model, we use a mean-pooled representation of the token embeddings from the last hidden layer. The mean pooling uses an attention mask to avoid averaging representations from the padding tokens.\nFor the supervised losses (ls, lsg, lsm), we use a batch size of 32, while for unsupervised loss (lud), we use a batch size of 96. We use AdamW optimizer with a constant learning rate of 1e-5. One major challenge with limited labeled sets is to halt the training without the validation set. In our experiments, we stop the training when the absolute difference in the train loss from the consecutive epochs remains below the threshold (ϵ) for a certain number of epochs (patience). In all our experiments, we use ϵ of 0.1 and patience of 5.\nThe models are trained under three different settings.\n• Only with labeled loss, LS = ls\n• With labeled loss (LS) and supervised data augmentation loss, LSD = lsg + lsm\n• With labeled loss (LS), supervised data augmentation loss (LSD) and unsupervised dropout label consistency loss LUD = lud. We use log probabilities along with MSE loss for LUD and a weight factor α of 10 (to match the scales).\nFigure 1 shows the comparison results for BERT and sentence-BERT models for varying number of labeled samples. We make a few observations from these results. Sentence-BERT works better than BERT, especially with a low number of labeled samples. Our findings align with the recent work demonstrating the effectiveness of Sentence-BERT for few shot learning (Tunstall et al., 2022). Supervised data augmentations (grammer + mixup) provide only a slight advantage over purely supervised baseline (Figure 1 (b)). We suspect it is happening due to over-fitting because of a small labeled corpus and lack of validation set to stop the training. We validate this hypothesis with an additional experiment, using some validation data to halt the training. Results are provided in the ablation study section 5.1. Unsupervised label consistency with dropout noise and MSE loss provides a significant advantage over the supervised baseline. Interestingly, even though the models are updated till the train loss is converged, training with this loss provides better regularization and is less prone to over-fitting. We also observe that the choice of unsupervised loss has a prominent effect on the accuracy. Section 5.3 in the ablation study shows the comparison results with different loss functions for lud.\nSince Hinglish constitutes a significant percentage (30%) of queries, we specifically compared the performance of BERT and sentenceBERT models for Hinglish query classification. First, we detect Hinglish queries from the test set using an approach proposed in (Kulkarni et al., 2022) and calculate F1-score on these queries with the semi-supervised approach.\n4\nFigure 2 demonstrates the result. We observe that sentence-BERT inherently provides better accuracies for Hinglish queries.\nWe also compare the Expected Calibration Error (ECE) on the test set for the BERT and sentence-BERT models. 
For this, we use the prediction result for the model trained on all the labeled samples. Table 2 shows the result. sentence-BERT achieves better calibration as compared to the BERT model.",
"4 Comparison with Unsupervised MixUp approach": "We compare the dropout label consistency approach with another SSL method: Unsupervised MixUp. Verma et al. (Verma et al., 2022) proposed a MixUp approach for training with unlabeled data. Feature MixUp is performed on the transformer representations for the two batches of unlabeled samples. For labels, MixUp on model predictions for the same unlabeled batches is used. We randomly sample two batches (u1, u2) from unlabeled queries and calculate their feature representation (x1, x2). The Unsupervised MixUp loss (lum) is then calculated as follows.\nlum = Eu1,u2∼U(x) D( fθ(Mixλ(x1, x2)), Mixλ( fθ′(x1), fθ′(x2))) (3)\nAs suggested in (Xie et al., 2020), for calculating the second term in the equation, we use a fixed copy (θ′) of the network, and the update is applied to the current copy of the weights (θ). At the end of each epoch, a fixed copy is replaced with the current weights. The model is trained with supervised losses and the Unsupervised MixUp loss. We use MSE loss and α of 10. Figure 3 indicates the comparison result. Despite being simple, dropout label consistency performs better than Unsupervised MixUp. This could be because, at the start of the training, the predictions from the models may not be accurate. Hence, the updates to the model with Unsupervised MixUp loss are computed against noisy labels. On the contrary, the dropout consistency loss only enforces the smoothing constraint on the label predictions.",
"5 Ablation study": "In this section, we report ablation study results with different experimental settings.",
"5.1 Comparison of with and without validation loss monitoring": "Since supervised MixUp provided only a slight improvement over the purely supervised baseline with sentence-BERT, we suspect that it is happening because of over-fitting since we do not have validation loss based stopping criteria during training. To confirm this, we conducted an additional experiment using a validation set (of size 8318) and halted the training when validation loss did not improve for five consecutive epochs. Figure 4 shows the F1-score comparison with and without validation monitoring. The plot indicates\n5\nthat the supervised MixUp, when trained with a low number of labeled samples and without validation monitoring, is prone to over-fitting. Hence, it alone might not lead to good improvements for the limited labeled scenario.",
"5.2 Choice of label consistency loss": "We observed that the choice of loss used for dropout label consistency has a prominent effect on the model accuracy. Figure 5 shows the comparison of CE and MSE loss. For CE loss, we use α of 1, while for the MSE loss, α is set to 10 (to match the scales). It can be seen that the MSE loss consistently outperforms the CE loss.",
"5.3 Effect of varying dropout probability": "To understand whether model dropout probability affects the accuracy, we performed an experiment where we trained a sentenceBERT model with varied dropout probability.\nSentence-BERT has a default dropout probability of 0.1. In this experiment, we set the dropout value to a lower (0.05) and a higher (0.2) value and trained the model with supervised and dropout label consistency losses. Figure 6 shows the resulting plot. We observe that increasing or decreasing the dropout probability does not significantly affect the model accuracy.",
"6 Conclusion": "This paper proposes a simple yet competent semi-supervised learning approach for label-efficient conversational intent classification. We trained different transformer models with labeled as well as unlabeled data. We explored supervised MixUp data augmentation for training with labeled samples, while for training with unlabeled samples, we experimented with label consistency loss with dropout. The results demonstrated that classification accuracy could be improved significantly over the supervised baseline with the proposed semi-supervised approach. Specifically, sentence-BERT was observed to perform better with a small number of labeled samples and even with code-mix Hinglish queries. Even without validation loss monitoring, it was noticed that training with dropout label consistency is less prone to over-fitting. Through the ablation study, we studied the effect of the choice of label consistency loss and dropout probability on the accuracy. Experimental results demonstrated the efficacy of the proposed approach. A variant of the model is currently deployed in production.\n6"
}
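The core unsupervised signal in ACL23_11 above is equation (2): two forward passes over the same unlabeled batch differ only through dropout noise, and the model is penalized for inconsistent predictions. A minimal sketch, assuming a BERT [CLS] classifier head as in the paper's BERT variant; the 28-intent head, the MSE-on-log-probabilities choice, and the α = 10 weight follow the text, while the model names and example queries are illustrative.

```python
# Minimal sketch of the dropout label-consistency loss (eq. 2) from ACL23_11.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

NUM_INTENTS = 28
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
classifier = torch.nn.Linear(encoder.config.hidden_size, NUM_INTENTS)

def predict_log_probs(texts):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    cls = encoder(**batch).last_hidden_state[:, 0]  # [CLS] representation
    return F.log_softmax(classifier(cls), dim=-1)

encoder.train()  # keep dropout active so the two passes differ stochastically
unlabeled = ["order kab aayega", "want to return my shoes"]
log_p1 = predict_log_probs(unlabeled)
log_p2 = predict_log_probs(unlabeled)
l_ud = F.mse_loss(log_p1, log_p2)  # distance D between the two predictions
loss = 10.0 * l_ud                 # alpha = 10; added to the supervised losses in full training
loss.backward()
```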
ACL_23_no_limitation/ACL23_1102.json
ADDED
@@ -0,0 +1,5 @@
{
"File Number": "1102",
"Title": "Summarizing, Simplifying, and Synthesizing Medical Evidence Using GPT-3 (with Varying Success)",
"abstractText": "Large language models, particularly GPT3, are able to produce high quality summaries of general domain news articles in fewand zero-shot settings. However, it is unclear if such models are similarly capable in more specialized, high-stakes domains such as biomedicine. In this paper, we enlist domain experts (individuals with medical training) to evaluate summaries of biomedical articles generated by GPT-3, given zero supervision. We consider both singleand multi-document settings. In the former, GPT-3 is tasked with generating regular and plain-language summaries of articles describing randomized controlled trials; in the latter, we assess the degree to which GPT-3 is able to synthesize evidence reported across a collection of articles. We design an annotation scheme for evaluating model outputs, with an emphasis on assessing the factual accuracy of generated summaries. We find that while GPT-3 is able to summarize and simplify single biomedical articles faithfully, it struggles to provide accurate aggregations of findings over multiple documents. We release all data and annotations used in this work.1"
}
ACL_23_no_limitation/ACL23_1127.json
ADDED
@@ -0,0 +1,5 @@
{
"File Number": "1127",
"Title": "Are Sample-Efficient NLP Models More Robust?",
"abstractText": "Recent results in image classification and extractive question answering have observed that pre-trained models trained on less in-distribution data have better out-ofdistribution performance. However, it is unclear how broadly these trends hold. We conduct a large empirical study across three tasks, three broadly-applicable modeling interventions (increasing model size, using a different adaptation method, and pre-training on more data), and 14 diverse datasets to investigate the relationship between sample efficiency (amount of data needed to reach a given ID accuracy) and robustness (how models fare on OOD evaluation). We find that higher sample efficiency is only correlated with better average OOD robustness on some modeling interventions and tasks, but not others. On individual datasets, models with lower sample efficiency can even be more robust. These results suggest that general-purpose methods for improving sample efficiency are unlikely to yield universal OOD robustness improvements, since such improvements are highly datasetand task-dependent. Even in an era of large, multi-purpose pre-trained models, task-specific decisions may often be necessary for OOD generalization."
}
ACL_23_no_limitation/ACL23_1161.json
ADDED
@@ -0,0 +1,16 @@
{
"File Number": "1161",
"Title": "Can LMs Store and Retrieve 1-to-N Relational Knowledge?",
"abstractText": "It has been suggested that pretrained language models can be viewed as knowledge bases. One of the prerequisites for using language models as knowledge bases is how accurately they can store and retrieve world knowledge. It is already revealed that language models can store much 1-to-1 relational knowledge, such as “country and its capital,” with high memorization accuracy. On the other hand, world knowledge includes not only 1-to-1 but also 1to-N relational knowledge, such as “parent and children.” However, it is not clear how accurately language models can handle 1-to-N relational knowledge. To investigate language models’ abilities toward 1-to-N relational knowledge, we start by designing the problem settings. Specifically, we organize the character of 1-to-N relational knowledge and define two essential skills: (i) memorizing multiple objects individually and (ii) retrieving multiple stored objects without excesses or deficiencies at once. We inspect LMs’ ability to handle 1-to-N relational knowledge on the controlled synthesized data. As a result, we report that it is possible to memorize multiple objects with high accuracy, but generalizing the retrieval ability (expressly, enumeration) is challenging.",
"1 Introduction": "As a result of their pretraining on large amounts of text, language models (LMs) store certain world knowledge facts, such as “Paris is the capital of France”, in their parameters and can retrieve that knowledge when given a suitable prompt. Since the ability to store and retrieve knowledge is also a key functionality of knowledge bases (KBs; Weikum et al., 2021), prior work has proposed to view language models as knowledge bases (Petroni et al., 2019). Quantitative evaluation of world knowledge in LMs has focused on 1-to-1 relational knowledge involving two entities, such as a country and its capital (Petroni et al., 2019; Heinzerling and Inui, 2021; Safavi and Koutra, 2021; Razniewski et al.,\n2021). However, the question if and how well LMs can handle 1-to-N relations, such as relations between parents and their children, is underexplored so far.\nHere, we conduct a study to assess the capability of LMs to store and retrieve 1-to-N relations in a manner similar to knowledge bases. We consider a setting in which the model first is trained to memorize individual relation instances, such as “Tom has a child named Emma”, “Bob has a child named Ava”, “Tom has a child named Lucas”, and “Tom has a child named Olivia”. During inference the model then has to retrieve 1-to-N relation, e.g., “Tom has children named Emma, Lucas, Olivia” (Figure 1).\nTo investigate the possibility of viewing LMs as KBs more precisely, it is necessary to clarify the basic abilities of LMs, such as how accurately they can store 1-to-N relational knowledge and how flexibly they can retrieve multiple entities they have stored.\n130\nOur study represents the first comprehensive investigation of 1-to-N relational knowledge. Our contributions are summarized as follows: (1) We identified the capabilities necessary for LMs to handle 1-to-N relational knowledge, taking into account its unique properties. Specifically, LMs must be able to accurately memorize any object appearing discretely and enumerate multiple objects without over- or under-recall based on memory. (§ 3) (2) Based on the identified capabilities, we formulated two training schemes: element-valued supervision for “memorization” and set-valued supervision for “enumerating.” (§ 4) (3) We conducted a quantitative evaluation of LMs’ “memorization” abilities from both subject-oriented and object-oriented perspectives and categorized the errors encountered during “enumerating.” Our results suggest that LMs are able to store 1-to-N relational knowledge with reasonable accuracy, but generalizing the ability to enumerate proves to be challenging. (§ 6)",
"2 Related Work": "Factual knowledge probing Petroni et al. (2019) investigated how much knowledge LMs had acquired from large corpora by having models such as pretrained BERT (Devlin et al., 2019) solve problems in the “fill-in-the-blank” format. They also pointed out three critical advantages of treating LMs as KBs: “LMs require no schema engineering, do not need human annotations, and support an open set of queries.”\nJiang et al. (2020) and Brown et al. (2020) also worked on creating optimal prompts for extracting correct answers from pretrained LMs. These investigations aim to extract knowledge that LMs have acquired implicitly during pretraining. On the other hand, we are interested in the degree to which knowledge can be handled accurately when LMs explicitly learn it. Thus, investigating what and how well pretrained LMs acquire 1-to-N relational knowledge from corpora is beyond our scope.\nStoring 1-to-1 relational knowledge Heinzerling and Inui (2021) established two basic requirements for treating LMs as KBs: “(i) the ability to store a lot of facts involving a large number of entities and (ii) the ability to query stored facts.” Based on these requirements, they elaborately examined how much and how accurately LMs can store 1-to1 relational knowledge by comparing various entity representations. However, the behavior of LMs concerning 1-to-N relational knowledge remains\nunclear.\nSet handling This study explores handling multiple objects, which can be achieved by handling a set of objects. Previous works such as Deep Sets (Zaheer et al., 2017) and Set Transformer (Lee et al., 2019) are representative ones that address set handling in neural networks or transformers (Vaswani et al., 2017).\nBoth focus on sets as inputs, being permutationinvariant and treating sets of arbitrary size. While this study focuses on sets as outputs rather than inputs, the properties such as permutation-invariant are considered to be essential aspects in common.",
"3 Designing an approach to 1-to-N relational knowledge": "In this section, we describe the unique properties of 1-to-N relational knowledge and what capabilities of LMs are needed to handle 1-to-N relational knowledge.\nTo begin with, we define three significant unique factors that make 1-to-N relational knowledge challenging to deal with: First, when the subject or relation under consideration changes, the number of objects associated with it changes. For example, consider answering the question, “{Subject} has children named <mask>.” The difficulty is that the number of correct objects changes depending on the input. Second, considering existing corpora, multiple objects are likely to occur discretely. For example, Barack Obama has two children, Malia and Sasha, but only Malia may appear in some specific contexts, and only Sasha may appear in other contexts.. Finally, third, when we assume a situation where an LM is used practically as a KB, it is necessary to output these discretely appearing objects together to avoid generating an inadequate response to the input query.\nTherefore, given the above properties, the two essential LMs’ competencies considered necessary to manage 1-to-N relational knowledge are as follows. (i) “the ability to accurately memorize any objects appearing discretely.” (ii) “the ability to retrieve multiple objects without over- or underrecall based on memory.” In order to consider an end-to-end approach to 1-to-N relational knowledge, this study tackles it as a generative task using the sequence-to-sequence model (Sutskever et al., 2014), which allows for flexible responses based on input.",
"4.1 Terminology": "In this work, we make use of the following terms:\nRelation triple: A triple consisting of a subject and an object entity, as well as a predicate that describes the relation that holds between the subject and the object, e.g., (Tom, hasChild, Emma).\n1-to-N relation: A set of relation triples with the same subject and predicate, but different objects, e.g., (Tom, hasChild, Emma) and (Tom, hasChild, Lucas).\nIndividual relation instance: A relation triple expressed in text, for example “Tom has a child named Emma.”\nElement: Viewing a 1-to-N relation as a set, we refer to individual relation instances as elements of that set, e.g., “Tom has a child named Emma.” is an\nelement of the 1-to-N relation that holds between Tom and his children.\nElement-valued supervision: One of the two supervised training schemes we employ. A model is trained on elements, i.e., individual relation instances, of 1-to-N relations. Concretely, the model is given a relation instance with the object masked out, e.g., “Tom has a child named <mask>.” and has to predict the masked out object, e.g., “Emma”. The goal of this training scheme is to have the model memorize individual objects based on their corresponding subjects.\nSet-valued supervision: In the second of our supervised training schemes the model is trained to predict the set of all objects for a given subject and predicate, e.g., given “Tom has children named <mask>.”, the model has to generate the text “Emma, Lucas, Olivia”.",
|
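To make the two supervision schemes concrete, here is a minimal Python sketch of how the (input, target) training pairs could be built; the template strings and the build_pairs helper are illustrative assumptions, not the authors' exact implementation.

def build_pairs(subject, objects):
    # Element-valued supervision: one masked instance per object.
    element_pairs = [(f"{subject} has a child named <mask>.", obj)
                     for obj in objects]
    # Set-valued supervision: one pair whose target enumerates all objects.
    set_pair = (f"{subject} has children named <mask>.", ", ".join(objects))
    return element_pairs, set_pair

elem, full = build_pairs("Tom", ["Emma", "Lucas", "Olivia"])
print(elem[0])  # ('Tom has a child named <mask>.', 'Emma')
print(full)     # ('Tom has children named <mask>.', 'Emma, Lucas, Olivia')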
| 9 |
+
"4.2 Handling of 1-to-N Relational Knowledge": "We investigate the behavior of LMs for 1-to-N relational knowledge when explicitly trained. Specifically, we use the sequence-to-sequence model to generate variable-length responses to inputs.\nAs described in § 3, the two abilities necessary for LMs to handle 1-to-N relational knowledge are (i)memorizing multiple discretely appearing objects and (ii)enumerating memorized objects without excess or deficiency. In this section, we conduct two experiments, each corresponding to the essential abilities.\n(i) Memorization The first experiment is aimed at “memorization” through element-valued supervision. Here, 1-to-N relational knowledge is decomposed into a one-to-one form, and we train LMs to memorize multiple objects individually. In the learning process, one object is output in response to an input for a particular subject, and then all objects will be memorized in this fashion. Therefore, the state in which the LMs memorize all N objects can also be paraphrased as the state in which the LMs can output all N objects.\nTherefore, the evaluation of whether LMs memorized multiple objects is checked by generating multiple sequences using beam-search. Specifically, N sequences are generated for a subject using the same query as the training data. By checking how many correct objects are included in the sequences, we evaluate how many objects the LMs memorized.\n(ii) Enumeration The second experiment attempts to acquire “the ability to enumerate memorized objects.” Here, training by set-valued supervision is performed in conjunction with memorization by element-valued supervision. The reason for using the two supervisory methods together is the premise that to enumerate multiple objects, it is necessary to memorize them in the first place. Although it is possible to perform element-valued\nsupervision and then shift to set-valued supervision, catastrophic forgetting of memorized objects may occur during the training of set-valued supervision. Indeed, we have confirmed that catastrophic forgetting of memorized objects occurs during set-valued supervision, so in this paper, the two supervisory methods are used together. For some subjects in the training data, LMs explicitly learn the behavior of enumerating the objects in response to queries that explicitly ask for multiple objects. We then test whether set-valued supervision allows LMs to enumerate objects for other subjects as well, i.e., whether they can generalize the ability to enumerate.",
|
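A hedged sketch of the memorization check described above, using the HuggingFace generate API: N beam-search hypotheses are produced for one subject and the gold objects recovered among them are counted. The model choice and decoding hyperparameters here are placeholders, not the paper's exact settings.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

def recalled_objects(subject, gold_objects):
    n = len(gold_objects)
    inputs = tokenizer(f"{subject} has a child named <mask>.",
                       return_tensors="pt")
    # Generate n sequences with beam search (num_return_sequences <= num_beams).
    outputs = model.generate(**inputs, num_beams=max(n, 2),
                             num_return_sequences=n, max_length=16)
    preds = [tokenizer.decode(o, skip_special_tokens=True).strip()
             for o in outputs]
    # Count how many gold objects appear in any of the n hypotheses.
    return sum(any(g in p for p in preds) for g in gold_objects)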
| 10 |
+
"5.1 Synthetic Data": "In the following experiments, we uniquely prepared the 1-to-N dataset to measure how well LMs can accurately store plenty of facts. Specifically, we randomly obtained canonical names of parents and their two to four children from Wikidata (Vrandečić and Krötzsch, 2014). We also randomly obtained the canonical names of directors and their two to four representative films from IMDb Datasets1. Therefore, by preparing 1-to-2, 1-to-3, and 1-to-4 relational knowledge, we will observe how LMs performance changes as the number of objects increases. We only collected data that meets the following conditions.\n• To ensure that all entities are distinguishable, there is no data with the same canonical name across both subjects and objects.\n• Only entities consisting of four or fewer words separated by spaces or hyphens are used to adjust for storing difficulty due to word length.\nWe only consider memorizing and enumerating entities which appear in the training data.\n1https://www.imdb.com/interfaces/\nParent-child: objs covered ratio\nDirector-titles: objs covered ratio\nDirector-titles: sbs w/ perfect memorization",
|
| 11 |
+
"5.2 Models and Training settings": "We used the pretrained BART-base (Lewis et al., 2020) and T5-base (Raffel et al., 2019) as the sequence-to-sequence model in the experiments. The training in the two experiments described below (§ 6.1 and § 6.2) was continued until the models strongly overfit the training data. Precisely, we continued training until the accuracy of the training data no longer improved by more than 30 epochs.\nThe accuracy was calculated as follows: for element-valued supervision, the accuracy was determined by whether the model could generate the correct object for each subject in the input. If the model generated one of the correct N objects for each subject, it was considered correct; otherwise, incorrect. For set-valued supervision, the accuracy was determined by whether the model generated a set of multiple correct objects with no omissions or additions. If the model generated a complete set of correct objects, it was considered correct; otherwise, incorrect.\nAs detailed training settings, the learning rate was started at 5e-5 in common with BART and T5, and it was reduced by half if the accuracy did not\nimprove by more than three epochs. The batch size was varied according to the model and training data size/domain. AdamW (Loshchilov and Hutter, 2019) was commonly used as the optimizer. In addition, a different template was used for each model so that the input sentence templates were similar to the pretraining settings for each (BART uses <mask> token in pretraining, but T5 does not.) The templates used are listed in Table 1.",
|
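One way to realize the described schedule in PyTorch, halving the learning rate when training accuracy stalls for three epochs; whether the authors used this exact scheduler is an assumption, and the model and accuracy value below are stand-ins.

import torch

model = torch.nn.Linear(4, 2)  # stand-in for BART/T5
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
# mode="max": a higher training accuracy counts as an improvement.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=3)

for epoch in range(100):
    train_acc = 0.0  # replace with the real training accuracy
    scheduler.step(train_acc)  # halves lr after 3 stagnant epochs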
| 12 |
+
"6.1 Element-valued supervision": "In the first experiment, we investigated the ability to memorize multiple objects using element-valued supervision. Here, we tested whether the LMs could correctly store N objects associated with a single subject. Specifically, as shown in Figure 2, the learning process of having one object generated for each input sentence, such as “{Subject} has a child named <mask>.” or “{Subject} directed a film titled <mask>.” was performed for all objects. Thus, the learning setup is such that there are as many target sentences as objects for each input sentence.\nWe then checked the degree to which LMs trained with element-valued supervision could recall multiple objects through the generation of N sequences using beam search. To be precise, N was for the number of objects associated with the input subject, and we analyzed the count of correct objects within those sequences.\nIn this experiment, we also tested whether the LMs’ memorization accuracy changed when the training data size, i.e., the number of entities, was varied. Here, we evaluated this memorization accuracy from two perspectives.\nObject-oriented memorization accuracy The first perspective is object-oriented memorization accuracy, shown in Figure 3, which evaluates the degree of recall of objects in the training data. Figure 3a and 3b correspond to the parent-children and director-titles datasets, respectively. The solid blue line corresponds to T5, and the dashed yellow line to BART, with darker colors corresponding to 1toN relational knowledge with more objects. The results show that T5 has better memorization accuracy than BART, although no significant differences by data domain were observed. Also, the larger N, i.e., the greater the number of objects associated with one subject, the more likely N entities could not be memorized.\nSubject-oriented memorization accuracy The second perspective, subject-oriented memorization accuracy, evaluated how many subjects were memorized with all related N objects. Specifically, in generating multiple objects by beam search, we show how many subjects existed for which all N objects were generated.\nThe results are shown in Figure 4, where 4a and 4b correspond to the parent-children and director-title datasets, respectively, as in Figure 3. The results confirmed that, overall, T5 has higher memorization accuracy. Looking at performance\nby the number of objects, it is clear that, in common with the two data domains and two models, the greater the number of objects, the more difficult it was to remember all of them in conjunction with the subject.\nInterestingly, both memorization accuracies in the two perspectives show roughly independent behavior concerning data size. One possible reason for the higher overall memory accuracy of T5 is that the parameter size of the T5-base is about 1.5 times larger than that of BART-base. This may contribute to higher memory accuracy. The fact that 100% memorization accuracy was not achieved for either data size may suggest that memorizing 1-to-N relational knowledge is not easy for LMs. Examples of LMs’ predictions are shown in Table 3.",
|
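The two accuracy views can be rendered as follows; this is our reading of the metrics, assuming predictions maps each subject to its beam-search outputs and gold maps it to the N gold objects.

def memorization_scores(predictions, gold):
    total_objs = recalled = perfect_subjects = 0
    for subj, objs in gold.items():
        hits = sum(o in predictions[subj] for o in objs)
        total_objs += len(objs)
        recalled += hits
        perfect_subjects += (hits == len(objs))
    object_oriented = recalled / total_objs           # cf. Figure 3
    subject_oriented = perfect_subjects / len(gold)   # cf. Figure 4
    return object_oriented, subject_oriented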
| 13 |
+
"6.2 Element-valued and Set-valued supervision": "In this subsequent experiment, the model was trained with element-valued and set-valued supervision to acquire the ability to enumerate all associated objects. More expressly, compared to the first experiment, we additionally employed set-valued supervision, which involved using “{Subject} has children named <mask>.” as the input sentence and “{Object1}, {Object2}, ...” as the corresponding target sentence, as an example. This approach aimed to generalize the model’s ability to enumerate all accurately memorized objects in response to queries requesting multiple objects.\nWe conducted both element-valued and setvalued supervision during training. Specifically, we trained LMs using element-valued supervision on all subjects to memorize all associated objects. We fixed the training data size at 3000 subjects for each. Simultaneously, we randomly selected 20% of the subjects, i.e, 600 subjects, as a test set for set-valued supervision. For the remaining 80% of\nthe subjects, we varied the proportion of subjects for which set-valued supervision was applied (i.e., 30%, 60%, or 90%) to examine whether the generalization ability would change depending on the number of instances that the LMs learned how to enumerate their corresponding objects.\nThe goal was to investigate how well the model could generalize to subjects in the test set when using set-valued supervision and to determine the impact of varying the proportion of subjects with set-valued supervision on model performance.\nThe results (Table 2) show that the enumerating accuracy is highest when the supervision ratio is 90% for all, indicating that it is important to have many training instances to generalize the enumerating capability.\nAlthough there are differences in the enumerating accuracy scores across data domains and models, we found a tendency for the enumeration performance to decrease significantly as the number of target words increases.\nError analysis Quantitative error distributions are shown in Table 4, and specific examples of incorrect answers are shown in Table 5. Table 4 shows that for small numbers of objects (e.g., 1- to-2), BART tended to generate incorrect objects (labeled “Incorrect”), while T5 often duplicated the same object (labeled “Duplication”), highlighting a noticeable difference between the two models. As the number of objects increased (e.g., 1-to-3, 1-\nto-4), both models were more likely to produce wrong answers due to missing objects (labeled “Missing”). The distribution of errors across different datasets was generally similar, but both models were more prone to missing objects in the parentchildren dataset, suggesting that the type of entity names might have an impact on the error patterns.",
|
| 14 |
+
"7 Conclusion": "We addressed handling 1-to-N relational knowledge by a generative approach using the sequenceto-sequence model. Since little work has been done on 1-to-N relational knowledge in previous studies, we started by organizing the properties of 1-to-N relational knowledge and setting up the capabilities considered necessary for LMs based on these properties.\nSpecifically, we defined two essential capabilities: “memory of discretely appearing multiple objects” and “enumeration of objects based on memory.” Then, we developed training schemes based on these perspectives. We used element-valued supervision and beam search for the former to memorize and evaluate multiple objects. We found that nearly 90% of the objects could be memorized, although we observed a tendency for memory omissions to occur as the number of objects increased. However, we also confirmed that it is challenging to achieve 100% perfect memory.\nFor the latter, we attempted to generalize “enu-\nmeration ability” by set-valued supervision in conjunction with memorization by element-valued supervision. The results showed that learning more data improved the generalization performance for acquiring enumeration ability. However, we also observed the LM’s behavior, which aligns with human intuition: the more objects increase, the more difficult it becomes to enumerate all of them correctly. Notably, the generalization performance for 1-to-2 relational knowledge was only about 50% for the test set, and for 1-to-4 relational knowledge, only about 10% generalization performance at most.\nFor our next steps, we are considering the following approach. The training setup of the current element-valued supervision is characterized by multiple target sentences for one input sentence, which is incompatible with the model’s learning algorithm. Therefore, we would like to test a memorizing method using ordinal numerals such as first and second to distinguish each template for N objects. We would also like to investigate this memorization method’s effect on the generalization performance of enumeration.\nAs for enumeration, which has been difficult to generalize, we would like to examine effective means of improving performance for a small number of objects. Specifically, we are considering\nadjusting the hyperparameters for text generation and verifying whether errors in enumerating will be reduced. After that, we would like to explore learning methods to enumerate N objects without needing hyperparameters adjustment in stages.\nIntroducing our 1-to-N problem setting into the LMs-as-KBs paradigm opens up many more intriguing challenges. While we investigated this setting under a controlled condition with a uniform frequency of object appearance, the frequency of each of the N objects in a corpus is likely to vary in reality. Furthermore, there may be multiple paraphrases expressing the same relation.\nFor example, in our study, we only considered the phrase “{Subject} has a child named {Object}.” but there are other phrases such as “{Subject}’s child is {Object}.” or “{Object} is a daughter of {Subject}.” As a primary avenue for future research, we will explore whether LMs can handle 1-to-N relational knowledge effectively under these more complex conditions.",
|
| 15 |
+
"Acknowledgements": "This work was supported by JSPS KAKENHI Grant Number 21K17814 and JST CREST Grant Number JPMJCR20D2, Japan."
|
| 16 |
+
}
|
ACL_23_no_limitation/ACL23_1170.json
ADDED
|
@@ -0,0 +1,19 @@
|
|
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1170",
|
| 3 |
+
"Title": "Is a Knowledge-based Response Engaging?: An Analysis on Knowledge-Grounded Dialogue with Information Source Annotation",
|
| 4 |
+
"abstractText": "Currently, most knowledge-grounded dialogue response generation models focus on reflecting given external knowledge. However, even when conveying external knowledge, humans integrate their own knowledge, experiences, and opinions with external knowledge to make their utterances engaging. In this study, we analyze such human behavior by annotating the utterances in an existing knowledge-grounded dialogue corpus. Each entity in the corpus is annotated with its information source, either derived from external knowledge (database-derived) or the speaker’s own knowledge, experiences, and opinions (speaker-derived). Our analysis shows that the presence of speaker-derived information in the utterance improves dialogue engagingness. We also confirm that responses generated by an existing model, which is trained to reflect the given knowledge, cannot include speakerderived information in responses as often as humans do.",
|
| 5 |
+
"1 Introduction": "More and more dialogue research has utilized external knowledge to enable dialogue systems to generate rich and informative responses (Ghazvininejad et al., 2018; Zhou et al., 2018; Moghe et al., 2018; Dinan et al., 2019; Zhao et al., 2020). The major focus of such research is in how to select appropriate external knowledge and reflect it accurately in the response (Kim et al., 2020; Zhan et al., 2021; Rashkin et al., 2021; Li et al., 2022).\nHowever, as shown in Figure 11, a good speaker not only informs the dialogue partner of external knowledge but also incorporates his or her own knowledge, experiences, and opinions effectively, which makes the dialogue more engaging. The extent to which models specializing in reflecting\n1Examples of dialogues presented in this paper are originally in Japanese and were translated by the authors.\ngiven external knowledge can achieve such an engaging behavior has not yet been explored quantitatively.\nIn this study, we first analyze how humans incorporate speaker-derived information by annotating the utterances in an existing knowledge-grounded dialogue corpus. Each entity in the utterances is annotated with its information source, either derived from external knowledge (database-derived) or the speaker’s own knowledge, experiences, and opinions (speaker-derived). The analysis of the annotated dataset showed that engaging utterances contained more speaker-derived information.\nIn addition, we train a BART-based response generation model in a standard way, i.e., by minimizing perplexity, and investigate the extent to which it incorporates speaker-derived information. The result showed that the response generation model did not incorporate speaker-derived information into their utterances as often as humans do. This result implies that minimizing perplexity is insufficient to increase engagingness in knowledgegrounded response generation and suggests room for improvement in the training framework.\n237",
|
| 6 |
+
"2 Information Source Annotation": "This section describes the annotation scheme for information sources and the annotation results.",
|
| 7 |
+
"2.1 Scheme": "We annotate Japanese Movie Recommendation Dialogue (JMRD) (Kodama et al., 2022) with information sources2. JMRD is a human-to-human knowledge-grounded dialogue corpus in Japanese. A recommender recommends a movie to a seeker. Each utterance of the recommender is associated with movie information as external knowledge. Each piece of knowledge consists of a knowledge type (e.g., title) and the corresponding knowledge contents (e.g., “Marvel’s The Avengers”).\nIn this study, we extract entities from the recommender’s utterances and annotate them with their information source. Entities are nouns, verbs, and adjectives and are extracted together with their modifiers to make it easier to grasp their meanings. Entities are extracted using Juman++ (Tolmachev et al., 2020), a widely-used Japanese morphological analyzer. Annotators classify the extracted entities into the following information source types: Database-derived: The entity is based on the external knowledge used in that utterance. Speaker-derived: The entity is based on the knowledge, experiences, and opinions that the recommender originally has about the recommended movie. Other: The entity does not fall under the above two types (e.g., greetings).\nAn annotation example is shown below.\n(1) Utterance: The action scenes(database) are spectacular(speaker)!\nUsed knowledge: Genre, Action\nWe recruited professional annotators, who are native Japanese speakers, to annotate these information source types. One annotator was assigned to each dialogue. After the annotation, another annotator double-checked the contents.",
|
| 8 |
+
"2.2 Result": "Table 1 shows the annotation statistics. While JMRD is a knowledge-grounded dialogue corpus and thus inherently contains many database-derived entities, it also contains about 60,000 speakerderived entities. This result verifies that humans\n2Examples of dialogue and knowledge in JMRD can be found in Appendix A.1.\nincorporate their own knowledge, experiences, and opinions into their utterances, even in dialogues to convey external knowledge.",
|
| 9 |
+
"3 Analysis of Human Utterances": "We analyze human utterances at the dialogue level and utterance level.",
|
| 10 |
+
"3.1 Dialogue-level Analysis": "4,328 dialogues in JMRD have post-task questionnaires on 5-point Likert scale (5 is the best.) We regard the rating of the question to the seekers (i.e., Did you enjoy the dialogue?) as dialogue engagingness and analyze the relationship between this and the ratio of each information source label.\nFigure 2 shows that dialogues with high engagingness scores tend to have more speaker-derived entities (or less database-derived) than those with low engagingness scores. When constructing JMRD, recommenders were given a certain amount of external knowledge and asked to use that knowledge to respond. However, recommenders highly rated by their dialogue partners incorporated not only the given external knowledge but also speakerderived information to some extent in their dialogues.",
|
| 11 |
+
"3.2 Utterance-level Analysis": "We conduct the utterance-level evaluation via crowdsourcing. We randomly extract 500 responses along with their contexts (= 4 previous utterances) from the test set. For each utterance, workers rate utterance engagingness (i.e., Would you like to talk to the person who made this response?) on a 5-point Likert scale, with 5 being the best. Three workers evaluate each utterance, and the scores are averaged.\nThe average score for utterances with speakerderived entities was 3.31, while those without speaker-derived entities was 3.07. Student’s t-test with p = 0.05 revealed a statistically significant difference between these scores.\nFurthermore, Figure 3 shows the relationship between utterance engagingness and the ratio of each information source label. This figure shows that utterances with high scores tend to have more speaker-derived entities. This trend is consistent with that of the dialogue engagingness.\nDoes subjective knowledge contribute to engagingness? The knowledge type used in JMRD can be divided into subjective knowledge (review) and objective knowledge (title, etc.). Reviews are the opinions of individuals who have watched movies and have similar characteristics to speaker-derived information. We then examine whether there is a difference in engagingness between utterances using subjective and objective knowledge. The average engagingness scores were 3.32 and 3.163, respectively, and Student’s t-test with p = 0.05 revealed no statistically significant difference. The\n3We exclude utterances referring to both of subjective and objective knowledge from this result.\nabove analysis demonstrates that information obtained from the speaker’s own experience is an important factor in utterance engagingness.",
|
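The significance test described above amounts to a two-sample t-test; a minimal sketch with SciPy, where the score lists are placeholders for the per-utterance engagingness ratings.

from scipy import stats

with_speaker = [3.7, 3.3, 3.0]     # utterances containing speaker-derived entities
without_speaker = [3.0, 3.3, 2.7]  # utterances without them

t, p = stats.ttest_ind(with_speaker, without_speaker)
print(f"t = {t:.2f}, p = {p:.3f}")  # significant at the 0.05 level if p < 0.05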
| 12 |
+
"4 Analysis of System Utterances": "We investigate the distribution of information source labels in the responses of the model trained on the knowledge-grounded dialogue dataset. First, we train a Response Generator (§4.1) with the dialogue contexts and external knowledge as input and responses as output. Next, an Information Source Classifier (§4.2) is trained with responses and external knowledge as input and information source labels as output. Then, the Information Source Classifier infers the information source labels for the system responses generated by the Response Generator. Finally, we analyze the distribution of inferred information source labels.",
|
| 13 |
+
"4.1 Response Generator": "We use a BARTlarge (Lewis et al., 2020) model as a backbone.4 The input to the model is formed as follows:\n[CLS]ut−4[SEP ]ut−3[SEP ]ut−2[SEP ]\nut−1[SEP ][CLSK ]kt1[SEP ]kc1[SEP ]...\n[CLSK ]kt M [SEP ]kcM [SEP ], (1)\nwhere t is the dialogue turn, ut is the t-th response, and kti and kci (1 <= i <= M) are the knowledge type and knowledge content associated with the target response, respectively (M is the maximum number of knowledge associated with ut.) [CLSK ] is a special token. We feed the gold knowledge into the model to focus on how knowledge is reflected in the responses. The model learns to minimize perplexity in generating ut.\nWe evaluated the quality of response generation with the SacreBLEU (Post, 2018). BLEU-1/2/3/4 scored high, 81.1/73.5/71.0/69.9. This result is reasonable because the gold knowledge was given.",
|
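Illustrative construction of the flattened input in Eq. (1); the literal token strings and the build_input helper are assumptions about the implementation, not taken from the paper.

def build_input(context, knowledge):
    # context: the four previous utterances u_{t-4} .. u_{t-1}
    # knowledge: list of (knowledge_type, knowledge_content) pairs
    s = "[CLS]" + "[SEP]".join(context) + "[SEP]"
    for ktype, kcontent in knowledge:
        s += f"[CLS_K]{ktype}[SEP]{kcontent}[SEP]"
    return s

print(build_input(
    ["Hello!", "Can you recommend a movie?", "How about an action film?", "Sure."],
    [("title", "Marvel's The Avengers"), ("genre", "Action")]))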
| 14 |
+
"4.2 Information Source Classifier": "We fine-tune a RoBERTalarge (Liu et al., 2019) model.5 The Information Source Classifier performs a sequence labeling task to estimate BIO6\n4https://nlp.ist.i.kyoto-u.ac. jp/?BART%E6%97%A5%E6%9C%AC%E8%AA% 9EPretrained%E3%83%A2%E3%83%87%E3%83%AB\n5https://huggingface.co/nlp-waseda/ roberta-large-japanese-seq512\n6B, I and O stand for Begin, Inside and Outside, respectively.\nlabels of the information source. The input to the model is formed as follows:\n[CLS]ut[SEP ][CLSK ]kt 1[SEP ]kc1[SEP ]...\n[CLSK ]kt M [SEP ]kcM [SEP ] (2)\nTable 3 shows precision, recall, and F1 scores for each label and micro average scores across all labels. The micro average F1 score was 90.50, which is accurate enough for the further analysis.",
|
| 15 |
+
"4.3 Analysis for Inferred Labels": "The information source labels for system responses are inferred using the classifier trained in Section 4.2. Table 4 shows distributions of information source labels for human and system responses. For a fair comparison, the human responses are also given labels inferred by the classifier (denoted as Human (pred)), although they have gold labels (denoted as Human (gold)). Human (gold) and Human (pred) have similar distributions, indicating that the accuracy of the classifier is sufficiently high. For System (pred), the percentage of database-derived labels increased significantly (66.75%→85.48%) and that\nof speaker-derived information decreased significantly (27.49%→10.66%). This result shows that the response generation model, trained in a standard way, was not able to use speaker-derived information as often as humans do.\nTable 2 shows an example of human and system responses along with the engagingness scores. The system was able to reflect given knowledge in the response appropriately but did not incorporate additional speaker-derived information, such as the information two voice actors also work as singers.\nFor further analysis, we investigated the average ratios of speaker-derived information by knowledge type used. Table 5 shows the result. Significant drops were observed for reviews (31.42%→6.32%) and plots (13.68%→2.32%). This is probably because reviews and plots are relatively long and informative external knowledge, so the system judged there was no need to incorporate additional speaker-derived information.\nCombined with our observation that speakerderived information improves engagingness, the current model is likely to have lower engagingness due to its inability to effectively incorporate speaker-derived information. Such an ability is hardly learned by simply optimizing a model to reduce the perplexity of response generation, suggesting the need for a novel learning framework.",
|
| 16 |
+
"5 Conclusion": "We analyzed the distribution of speaker-derived information in human and system responses in the knowledge-grounded dialogue. The analysis showed that the use of speaker-derived information, as well as external knowledge, made responses more engaging. We also confirmed that the response generation model trained in a standard way generated less speaker-derived information than humans.\nIt is difficult to make good use of speaker-derived information by simply minimizing the perplexity of the model because a wide variety of speakerderived information appears in each dialogue. We hope our published annotated corpus becomes a good launch pad for tackling this issue.",
|
| 17 |
+
"Acknowledgements": "We would like to thank anonymous reviewers for their insightful comments. This work was supported by NII CRIS collaborative research program operated by NII CRIS and LINE Corporation. This work was also supported by JST, CREST Grant Number JPMJCR20D2, Japan and JSPS KAKENHI Grant Number JP22J15317.",
|
| 18 |
+
"A Appendices": "A.1 Example of JMRD Table 6 and 7 show examples of the dialogue and knowledge in JMRD.\nA.2 Implementation Details A.2.1 Response Generator Dialogue contexts, knowledge (knowledge types and contents), and target responses are truncated to the maximum input length of 256, 256, and 128, respectively. The model is trained for up to 50 epochs with a batch size of 512 and 0.5 gradient clipping. We apply early stopping if no improvement of the loss for the development set is observed for three consecutive epochs. We use AdamW optimizer (Loshchilov and Hutter, 2019) with β1 = 0.9, β2 = 0.999, = 1e − 8 and an initial learning rate = 1e − 5. We use an inverse square root learning rate scheduler with the first 1,000 steps allocated for warmup. During decoding, we use the beam search with a beam size of 3.\nA.2.2 Information Source Classifier Target responses and knowledge (knowledge types and contents) are truncated to the maximum input length of 128 and 384, respectively. The model is trained for up to 20 epochs with a batch size of 64 and 0.5 gradient clipping. We apply early stopping if no improvement of the f1 score for the development set is observed for three consecutive epochs. We use AdamW optimizer (Loshchilov and Hutter, 2019) with β1 = 0.9, β2 = 0.999, = 1e−8 and an initial learning rate = 1e−5. We use an inverse square root learning rate scheduler with the first 1,000 steps allocated for warmup."
|
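A common form of the inverse square root schedule with 1,000 warmup steps, expressed with LambdaLR; the exact formula the authors used is an assumption, and the model below is a stand-in.

import torch

model = torch.nn.Linear(4, 2)  # stand-in
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5,
                              betas=(0.9, 0.999), eps=1e-8)
warmup = 1000

def lr_lambda(step):
    step = max(step, 1)
    if step < warmup:
        return step / warmup          # linear warmup
    return (warmup / step) ** 0.5     # inverse square root decay

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)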
| 19 |
+
}
|
ACL_23_no_limitation/ACL23_1174.json
ADDED
|
@@ -0,0 +1,18 @@
|
|
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1174",
|
| 3 |
+
"Title": "Distractor Generation for Fill-in-the-Blank Exercises by Question Type",
|
| 4 |
+
"abstractText": "This study addresses the automatic generation of distractors for English fill-in-the-blank exercises in the entrance examinations for Japanese universities. While previous studies applied the same method to all questions, actual entrance examinations have multiple question types that reflect the purpose of the questions. Therefore, we define three types of questions (grammar, function word, and context) and propose a method to generate distractors according to the characteristics of each question type. Experimental results on 500 actual questions show the effectiveness of the proposed method for both automatic and manual evaluation.",
|
| 5 |
+
"1 Introduction": "Fill-in-the-blank questions, also known as cloze tests (Taylor, 1953), are one way to assess learners’ English proficiency and are widely used in examinations such as TOEIC1 and in school education. As shown in Figure 1, the question format generally consists of a four-choice option with one correct answer and three distractors. These require substantial costs because they are manually created by question writers with extensive language teaching experience. This study automatically generates distractors to reduce workload.\nMost of the previous studies on the automatic generation of cloze tests (Mitkov and Ha, 2003; Sumita et al., 2005; Zesch and Melamud, 2014; Jiang and Lee, 2017; Susanti et al., 2018; Panda et al., 2022) have generated words that are semantically similar to the correct words as distractors. Other methods have been proposed, such as those based on co-occurrence with words in the carrier sentence (Liu et al., 2005; Hill and Simha, 2016), considering the whole context (Yeung et al., 2019), and considering the learner’s error tendencies (Sakaguchi et al., 2013). However, these previous studies apply the same method to all questions, which\n1https://www.ets.org/toeic.html",
|
| 6 |
+
"It was certainly _ crowded than I thought it would be. (a) less (b) little (c) least (d) fewer": "((a) is correct )\nleads to bias in the characteristics of the generated distractors. Actual entrance examinations have multiple question types reflecting the purpose of the questions, such as grammatical knowledge and idiomatic expressions. Existing methods have difficulty in flexibly changing the characteristics of distractors for each question type.\nIn this study, we first manually classify English fill-in-the-blank questions in the entrance examinations for Japanese universities2 by an expert. Next, we propose a method for automatic distractor generation according to the characteristics of each question type. Experimental results on 500 actual questions show the effectiveness of the proposed method for both automatic and manual evaluation.",
|
| 7 |
+
"2 Related Work": "Previous studies have generated distractors in the following three steps: (1) candidate generation, (2) reranking, and (3) filtering.\nJiang and Lee (2017) utilized cosine similarity with word embeddings (Mikolov et al., 2013) to identify candidate words that are semantically similar to the correct word. These candidate words were ranked by similarity and filtered by word 3- gram. That is, if a 3-gram containing a candidate word appears in Wikipedia, that candidate is excluded. It filters out expressions that are actually used in a large-scale corpus to exclude appropriate examples from the distractor candidates.\nYeung et al. (2019) reranked the candidates generated from word embeddings by the mask-filling\n2https://jcshop.jp/SHOP/18149/list. html\n276\nprobability with BERT (Devlin et al., 2019). They also utilize BERT for filtering, eliminating candidates with too high and too low probabilities.\nPanda et al. (2022) proposed candidate generation based on round-trip machine translation. That is, the carrier sentence was first translated into a pivot language and back-translated into English. Then, word alignment was used to obtain a candidate for the correct word and its corresponding word. These candidates were reranked using word embeddings and filtered by WordNet (Miller, 1995). Specifically, synonyms of the correct word in WordNet and words with a different part of speech from the correct word were excluded from the candidates.\nThese existing methods have been evaluated in different ways on different datasets, making it difficult to compare their performance. We have comprehensively evaluated them and propose further improvements on top of their combinations.",
|
| 8 |
+
"3 Definition of Question Types": "An experienced English teacher specializing in English education has categorized the question types for English fill-in-the-blank questions. The analysis covers 500 randomly selected questions from the entrance examinations for Japanese universities in the five-year period from 2017 to 2021. As shown in Table 1, the following three question types were defined:\n• Grammar: Questions that mainly use the conjugated form of the same word as choices.\n• Function word: Questions that are choices from a prescribed list of function words.\n• Context: Questions with choices determined by context or idiomatic expressions.\nTable 2 shows the number of occurrences for each question type. Approximately half of the questions were on context, 40% were on function word, and 10% were on grammar. In the next section, we\npropose how to generate distractors according to the characteristics of each question type.",
|
| 9 |
+
"4 Generating Distractors": "Following previous studies (Jiang and Lee, 2017; Yeung et al., 2019; Panda et al., 2022), we also generate distractors through three steps. For candidate generation and reranking, we selected combinations of the existing methods described in Section 2 that maximize performance on the validation dataset3 for each question type. For filtering, we propose methods according to the characteristics of each question type, which are described below.",
|
| 10 |
+
"4.1 Filtering for Questions on Grammar": "For questions on grammar, the conjugated forms of the correct word should be obtained as candidates. Therefore, we apply POS filtering. That is, we exclude candidates that have the same part of speech or the same conjugation as the correct word.\nFurthermore, to avoid unreliable distractors that could be the correct answer, we exclude candidates with a high mask-filling probability by BERT (Devlin et al., 2019). Unlike Yeung et al. (2019), called BERT (static), which used two fixed thresholds to select the top θH to θL, our filter, called BERT (dynamic), dynamically changes the thresholds. Specifically, we exclude candidates that have a higher probability than the correct word. The example of the first sentence in Table 1 shows that “thinks” is eliminated as a candidate for the same\n3For the validation dataset, 500 questions were randomly selected in addition to the evaluation dataset annotated in Section 3. These questions were automatically annotated with question types by BERT (Devlin et al., 2019). The accuracy of BERT was 84.8% in the 10-fold cross-validation.\npart of speech, and “watches” is eliminated as a high probability candidate.",
|
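A hedged sketch of the BERT (dynamic) filter: any candidate whose mask-filling probability exceeds that of the correct answer is dropped. Treating every candidate as a single BERT token is a simplifying assumption of ours.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def dynamic_filter(sentence, answer, candidates):
    # sentence must contain a single [MASK] at the blank position.
    inputs = tok(sentence, return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        probs = mlm(**inputs).logits[0, mask_pos].softmax(-1)
    threshold = probs[tok.convert_tokens_to_ids(answer)]
    # Keep only candidates no more probable than the correct word.
    return [c for c in candidates
            if probs[tok.convert_tokens_to_ids(c)] <= threshold]

print(dynamic_filter(
    "It was certainly [MASK] crowded than I thought it would be.",
    "less", ["little", "least", "fewer", "more"]))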
| 11 |
+
"4.2 Filtering for Questions on Function Word": "For questions on function words, only function words such as prepositions and conjunctions are basically used as choices. Therefore, we utilize the list of function words4 for entrance examinations for Japanese universities to exclude candidates not included in this list. The example of the second sentence in Table 1 shows that “time” and “taken” are eliminated.",
|
| 12 |
+
"4.3 Filtering for Questions on Context": "Since the questions on context are designed to test knowledge of collocations or idioms, candidates should be obtained for words that often co-occur with surrounding words in the carrier sentence. However, as with questions on grammar, to avoid unreliable distractors, candidates with a high maskfilling probability by BERT are excluded. The example of the third sentence in Table 1 shows that “comfy” and “cosy” are eliminated.",
|
| 13 |
+
"5 Experiments": "We evaluate the method of distractor generation on the 500 questions constructed in Section 3.",
|
| 14 |
+
"5.1 Setting": "Implementation Details For candidate generation, we implemented methods based on word embeddings (Jiang and Lee, 2017) and round-trip machine translation (Panda et al., 2022). We utilized\n4https://ja.wikibooks.org/wiki/大学受験 英語_英単語/機能語・機能型単語一覧\nfastText (Bojanowski et al., 2017) as word embeddings and Transformer (Vaswani et al., 2017), trained on English-German language pairs5 (Ng et al., 2019; Ott et al., 2019) according to the previous study (Panda et al., 2022), as machine translators. For word alignment, we used Hungarian matching (Kuhn, 1955) based on word embeddings (Song and Roth, 2015).\nFor reranking, we implemented methods based on word embeddings (Jiang and Lee, 2017) and BERT (Yeung et al., 2019). We utilized BERTbase-uncased (Devlin et al., 2019) via HuggingFace Transformers (Wolf et al., 2020). Note that the candidate words are restricted to the intersection of the vocabulary of fastText and BERT.\nFor filtering, NLTK (Bird and Loper, 2004) was used for pos tagging. We used 166 function words.4\nComparative Methods We compared the proposed method with three existing methods described in Section 2: methods based on word embeddings (Jiang and Lee, 2017), masked language models (Yeung et al., 2019), and round-trip machine translations (Panda et al., 2022). For word 3-gram filtering, we used preprocessed English Wikipedia (Guo et al., 2020). For BERT (static) filtering, we used thresholds of θH = 11 and θL = 39 following Yeung et al. (2019).\nAutomatic Evaluation To evaluate whether the generated distractors are matched with the actual entrance examinations, an automatic evaluation is performed. We generated 100 words of candidates for each method and compared the top\n5As a pivot language, we also tried Japanese, the native language of the examinees, but German performed better.\nk ∈ {3, 5, 10, 20} words, after reranking and filtering, to the three gold distractors. Note that if there are fewer than k candidates, the remainder were randomly selected from the vocabulary. We employed the F1-score as the evaluation metric.\nManual Evaluation To assess the correlation of examinee performance between the generated questions and the actual entrance examinations, a manual evaluation is performed. First, distractors are generated for each of the 60 randomly selected questions in each of the proposed and two comparative methods (Jiang and Lee, 2017; Panda et al., 2022). Next, ten university students, who are native Japanese speakers, took 100 English fill-in-theblank questions from the actual entrance examinations, as well as these 180 generated questions. Note that these questions are sampled evenly by question type, with no duplication. Finally, we calculated the correlation of accuracy between the generated and actual questions.",
|
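Our per-question rendering of the automatic metric: F1 between the top-k candidates and the three gold distractors. How scores are aggregated over questions is not spelled out here, and the example lists are illustrative.

def f1_at_k(candidates, gold, k):
    top_k = set(candidates[:k])
    overlap = len(top_k & set(gold))
    if overlap == 0:
        return 0.0
    precision = overlap / k
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(f1_at_k(["little", "least", "fewer", "more"],
              ["little", "least", "fewer"], k=3))  # -> 1.0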
| 15 |
+
"5.2 Results": "Automatic Evaluation Table 3 shows the results of the automatic evaluation. The top three rows show the performance of the comparison method and the bottom row shows the performance of the proposed method for each question type. The proposed method achieved the best performance in 9 out of 12 settings and the second best performance in the remaining 3 settings. This implies the effectiveness of filtering according to the characteristics of question types. The improvement in performance was particularly noticeable for questions on function words, with greater improvement as the number of candidates k increased.\nManual Evaluation Table 5 shows the results of the manual evaluation. The proposed method has the highest correlation with the performance of the actual entrance examinations for all correlation coefficients. This means that the proposed method is most effective in identifying the English proficiency of examinees.\nOutput Examples Table 4 shows examples of generated distractors. In questions on grammar, existing methods without consideration of question types generate candidates that are semantically close to the correct word, but the proposed method correctly generates conjugated forms of the correct word. In questions on function words, the existing methods include candidates other than function words, but the proposed method generates only function words, correctly ranking the gold distractors higher. In questions on context, as shown in Table 3, the proposed method is not much different from the existing method until the top five, but may be followed by good candidates even after that.",
|
| 16 |
+
"6 Conclusion": "To reduce the cost of creating English fill-inthe-blank questions in entrance examinations for Japanese universities, this study addressed automatic distractor generation. First, we identified\nthree question types and constructed a fill-in-theblank corpus annotated by an expert with those question types. Next, we proposed methods to generate distractors that take into account the characteristics of each question type, focusing on candidate filtering. Experimental results based on automatic and manual evaluations demonstrate the effectiveness of the proposed method. Specifically, our method is able to generate candidates that match the gold distractors better than existing methods and has the highest correlation with the examinees’ English proficiency as assessed in actual entrance examinations. For future work, we plan to expand the corpus size by estimating question types, to generate distractors by supervised learning.",
|
| 17 |
+
"Acknowledgements": "We thank anonymous reviewers for valuable comments and suggestions. This work was supported by JSPS KAKENHI Grant Number JP21H03564 and JP22H00677."
|
| 18 |
+
}
|
ACL_23_no_limitation/ACL23_1181.json
ADDED
|
@@ -0,0 +1,11 @@
|
|
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1181",
|
| 3 |
+
"Title": "Use of NLP in the Context of Belief states of Ethnic Minorities in Latin America",
|
| 4 |
+
"abstractText": "The major goal of our study is to test methods in NLP in the domain of health care education related to Covid-19 of vulnerable groups such as indigenous people from Latin America. In order to achieve this goal, we asked participants in a survey questionnaire to provide answers about health related topics. We used these answers to measure the health education status of our participants. In this paper, we summarize the results from our NLP-application on the participants’ answers. In the first experiment, we use embeddings-based tools to measure the semantic similarity between participants’ answers and \"expert\" or \"reference\" answers. In the second experiment, we use synonym-based methods to classify answers under topics. We compare the results from both experiments with human annotations. Our results show that the tested NLP-methods reach a significantly lower accuracy score than human annotations in both experiments. We explain this difference by the assumption that human annotators are much better in pragmatic inferencing necessary to classify the semantic similarity and topic classification of answers.",
|
| 5 |
+
"1 Introduction": "Indigenous people belong to the particularly vulnerable groups in the COVID-19 era and are disproportionally affected by epidemics and other crises, as acknowledged by the United Nations (United Nations and Affairs, 2020). Beyond the general problems related to the socio-economic marginalization and the concomitant inaccessibility of health-care services (in particular in rural regions and remote communities), a major threat for indigenous people arises through miscommunication, either due to the sparsity of information material in indigenous languages or due to cultural differences hindering the interpretation/application of the recommended health measures(García et al., 2020) (Afifi et al., 2020). Dissemination of reliable COVID-19- re-\nlated information, adapted to cultural and linguistic background of indigenous peoples, is a major priority in epidemic crisis; (García et al., 2020) (Afifi et al., 2020) (UN, 13 April 2020). Several initiatives of the European Union (EU) and World Health Organization (WHO) address the problems in communication of health related information (Baccolini, 2021). These initiatives target communication of key health-related terms and concepts underlying them such as understanding of medical instructions. In the recent covid pandemic, it was documented that misconceptions about preventive measures against the spread of covid had a strong impact on the severity of the pandemic (UN, 13 April 2020). In order to reduce health-illiteracy and avoid unnecessary spread of infectious diseases, it is necessary to observe people’s understandings of infectious diseases and their treatments. For instance, some individuals have the perception that antibiotics are a “cure-all” drug and might take antibiotics to cure diseases caused by viruses, which is an improper use of antibiotics and can lead to severe damaging effects(Calderón-Parra J, 2021).\nGiven the urgency of measuring the accuracy of health-related concepts and uses, it is necessary to develop NLP tools that can ease and speed up the process related to health education measurement. The key outcome of our research project is testing NLP methodology targeting measurement of health education related to the COVID-19 pandemics.",
|
| 6 |
+
"2 State-of-the-art": "Accuracy measurement of medical terms uses like antibiotics is currently missing due to two main reasons: a) missing data sources and methodologies that enable researchers to identify, characterize and measure actual uses of health related topics and concepts and b) missing statistical (in)accuracy measures of actual information status related to infectious diseases. It is thus not surprising that the initiative the Social Media Mining 4\n1\nHealth (#SMM4H) is addressing these problems in its agenda(Klein, 2021) (Magge et al., 2021). This initiative uses social media data as a data source for solving health-related tasks and problems such as finding disease mentions and symptoms(Klein, 2021) (Magge et al., 2021) (Weissenbacher et al., 2019). However, this rich data source does not have demographic information necessary for the statistics on social variation in the health literacy study. In addition, social media does not represent all social groups including indigenous population that often has low internet access or uses other tools for communication. As a consequence, data from indigenous communities related to Covid pandemics is very rare (Ojha et al., 2021). In order to address these problems, we used a traditional methodology in social sciences in order to access the information about the health education status, namely the survey methodology. We asked health-related questions such as questions about virus propagation and treatment to our participants. In order to be able to measure the accuracy of health-related concepts and uses of our participants’, it is necessary to compare their information status with \"expert\" knowledge or uses. In recent years, big progress has been made in semantic comparison of linguistic units such as words and sentences due to recent developments in neural language models such as BERT(Devlin et al., 2019) (Giulianelli et al., 2020). BERT is a language model trained on a large amount of natural language data to predict words that have been masked out as shown in Table 1 for the word coach (Devlin et al., 2019).\nBERT has been used to find out which word vectors are responsible for lexical meaning variation such as coach used as ‘trainer’ and ‘vehicle’. A word vector is essentially a mathematical representation of the meaning of a word based on learning or memorizing the frequency at which a word appears in a particular linguistic context. The differences or similarities of word vectors have been used to predict semantic (dis)similarity of words (Giulianelli et al., 2020) and sentences (Reimers and Gurevych, 2020). However, previous approaches mainly focus on meaning differences in Big Data sources such as social media and very few of them address meaning differences in survey questionnaires of ethnic minorities. It is thus not known yet how well these models work in the low resource scenario given the specific topic domain and the specific format\nof answers. This paper presents results from testing vector-based approaches in the measurement of answer similarity in the low resource domain.",
|
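A minimal fill-mask illustration of the kind of probing shown in Table 1; the example sentence is ours, not taken from the paper.

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("The [MASK] told the players to run faster."):
    print(pred["token_str"], round(pred["score"], 3))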
| 7 |
+
"3 Methodology": "We carried out a survey study with our cooperation partners from Latin America (Marleen Haboud, Claudia Crespo, Fernando Ortega Pérez), in which indigenous groups speaking Quechua or Kichwa from Peru and Ecuador (around 150 people from each country) answered questions about Covid-19 (10 yes-no questions and 10 open-ended questions). Our task was to measure the accuracy of key concepts related to health. We tested how well the information status of indigenous groups matches the information and suggestions from reliable sources such as the World Health Organization (WHO), henceforth our Reference Corpus. For instance, according to the WHO, the virus COVID-19 is distributed through contact, hence the suggestion to keep social distancing. We asked our participants about how the virus COVID-19 is distributed in order to see how well their answer matches the information from WHO. The answers were collected in rural areas via free interviews by a local person knowing indigenous communities. The method of free interviews was particularly important in order to include individuals who are less accustomed to performing highly controlled tasks such as older and/or illiterate participants. Due to lack of time and resources we did not transcribe the interviews. Instead, the local interviewer summarized the answers to the questions in a digital form in Spanish. Consequently, the answers in this survey study do not directly reflect the information state of indigenous minorities.",
|
| 8 |
+
"4 Experiments and Results": "We ran two experiments. The data and the code for both experiments can be found on GitHub1. In our first experiment, we tested the SBERT Model for measuring the semantic similarity between the participants’ answers and the \"expected\" answers from the reference corpus via cosine similarity (see Sentence Transformers based on Reimers and Gurevych, 2020). The following examples demonstrate some results of cosine similarity from the chosen method:\n1https://github.com/mahmuduzzamanDE/ ACLAmericaNLP\nQuestion : 8. When should a mask be used? Reference text : Especially in closed public places, but it is also useful in outdoor public places.\" Answers by participants: \"[’Whenever we are in contact with another person.’] # participant 1 \"Similarity: tensor([[0.1775]])\", # similarity between reference text and participant 1\n\"[’All the time when leaving home.’] # participant 2 \"Similarity: tensor([[0.0477]])\", # similarity between reference text and participant 2\n\"[’Especially in closed public places, but it is also useful in outdoor public places’]\", \"Similarity: tensor([[0.9961]])\", match between reference text and reference text\n\"[’When we are in public places where social distancing cannot be maintained.’]\", participant 3 \"Similarity: tensor([[0.2265]])\", # similarity between reference text and participant 3\nIn order to evaluate the validity of the similarity measure by SBERT, we asked human annotators to annotate participants’ answers from 0-5 as not similar (0) or similar (5). The annotators were four students of linguistics and one expert in medical anthropology. We divided the human ratings into three categories: similar (4-5), dissimilar (0-2), ambiguous (3) and selected the answers with high inter-speaker agreement. We translated the human ratings into correspondent cosine similarity scores: similar (>0.6), dissimilar (<0.4), ambiguous (> 0.4 and < 0.6). Our results show that the semantic similarity measured by cosine similarity using SBERT is significantly lower (mean 0.2) than the semantic similarity acquired by human annotation (mean 0.7). Our second experiment had the goal to find a computational method to classify a topic of an answer to an open-ended question. Here is an example. Survey question: Why do you not want to be vaccinated? Topics: a) afraid of side effects, b) my own decision, c).... An automatic classification of answers under the correspondent topics can ease the process of survey data analysis and provide a uniform way of measuring answers to open-ended questions. We asked human annotators to create\ntopics for the interview questions and then to annotate answers according to these topics, e.g. “I can get thrombosis” was classified by human annotators as a) afraid of side effects. We tested automatic methods to classify answers under suggested topics. The underlying idea was to look for key words in the answers that semantically correspond to suggested topics. For this aim, we performed a synonym-based similarity task without stemming (Task 1) and with stemming (Task 2). In the first task, if the topic was a synonym of one of the tokens in the given answer, the classification was TRUE. In the second task, if the topic stem was a synonym of the token stem in the given answer, the classification was TRUE. The latter case ignores morphological variation of words and focuses only on the lexical stem. We preprocessed the given answers by tokenization, removing stop words and case lowering. 
The synonyms were taken from the NLTK wordnet.\nprint(set(synonyms)) {’impinging’,’contact’,’reach’, ’get_through’, ’intergroup_communication’ ,’contact_lens’, ...}\nWe used a Stemmer from NLTK, to stem the synonym words:\nprint(Stem) {contact|saliv|aglomer|tos|segur| mascarill|distanci|comun|familiar| friccion|intim|relacion|roc| tocamient|....}\nTable 2 demonstrates which answers the synonym-based approach by stemming correctly identified and which answers the system did not correctly identify.\nOur results in Table 3 show that stemming gives slightly better results than the absence of stemming, namely a correct classification of additional 10 answers. However, despite this light improvement, the accuracy is still very low, or more precisely, the system could not make a link between a given\nanswer and a topic in around 50 % of the cases.",
|
| 9 |
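A runnable sketch of the stemming variant (Task 2): an answer matches a topic if the stem of any WordNet synonym of the topic equals the stem of any answer token. The English stemmer and the matches_topic helper are our assumptions for illustration; the study's answers are in Spanish.

import nltk
from nltk.corpus import wordnet
from nltk.stem import SnowballStemmer

nltk.download("wordnet", quiet=True)
stem = SnowballStemmer("english").stem

def matches_topic(answer_tokens, topic):
    # Collect single-word WordNet synonyms of the topic, plus the topic itself.
    synonyms = {lemma.name() for syn in wordnet.synsets(topic)
                for lemma in syn.lemmas()} | {topic}
    syn_stems = {stem(s) for s in synonyms if "_" not in s}
    return any(stem(tok) in syn_stems for tok in answer_tokens)

print(matches_topic(["infected", "through", "touching"], "contact"))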
+
"5 Discussion": "The computational approaches we tested have shown much lower accuracy compared to human annotations. The biggest problem we have identified is the lack of pragmatic inferencing humans are good at, but automatic models we tested are not. For instance, people answered to the question about how the virus distributes by saying “through crowd”. Due to a pragmatic inference human annotators can evaluate this answer as similar to the answer given by the reference corpus. “A crowd” implies pragmatically that social distancing cannot be obtained adequately and this can promote virus infection. However, none of our automatic models was able to predict a high similarity between the reference answer \"through contact\" and the participant’s answer “through crowd”. Another example illustrating problems with pragmatic inferences is the annotation of vaccination side effects. While human annotators had no difficulties to classify “thrombosis” as a possible vaccination side-effect, our automatic methods were not able to do it. To sum up, one of the biggest challenges in our tasks was the lack of Natural Language Understanding and Inferencing (NLI and NLU) by the computational models we tested. Using NLI and NLU in the context of low resource is reserved for future research. In the near future, we will test models trained on health-related topics, fragmented answers that represent the majority of our answers and models trained on NLI-and NLUdatasets (Kochkina et al., 2023).\nFuture Work\nThere are several issues of our methodology that need to be addressed in future research. The absence of good resources for indigenous languages has forced us to work with local translators who digitized the answers the way they perceived them. In future we will use transcribed oral data for our experiments.\nAnother issue is the use of few human annotations that have provided us the human similarity score necessary to evaluate computational models. Even though the inter-speaker agreement was comparatively high in our study due to very explicit training and discussion of annotation guidelines, we suspect that the inter-speaker agreement will show a much higher variation in the perception of semantic similarity if the annotation guidelines are missing as is often the case in crowd-sourced human annotations. The trade-off between expensive human annotators with long training for annotation and cheap crowd-sourced human annotations without any training is an issue that needs to be addressed in the future research.\nEthics Statement\nScientific work carried out in our project complies with the ACL Ethics Policy and with the ethic guidelines from the German Research Foundation (DFG). We have informed our participants about the goals of our project and they signed an agreement with us. In addition, the data acquisition by interviewing indigenous people was approved by Ethic committees at the universities of our cooperation partners.",
|
| 10 |
+
"Acknowledgements": "We acknowledge the funding support from the German Research Foundation (DFG) (Grant number: 468416293)."
|
| 11 |
+
}
|
ACL_23_no_limitation/ACL23_1182.json
ADDED
|
@@ -0,0 +1,20 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1182",
|
| 3 |
+
"Title": "Neural Machine Translation through Active Learning on low-resource languages: The case of Spanish to Mapudungun",
|
| 4 |
+
"abstractText": "Active learning is an algorithmic approach that strategically selects a subset of examples for labeling, with the goal of reducing workload and required resources. Previous research has applied active learning to Neural Machine Translation (NMT) for high-resource or wellrepresented languages, achieving significant reductions in manual labor. In this study, we explore the application of active learning for NMT in the context of Mapudungun, a lowresource language spoken by the Mapuche community in South America. Mapudungun was chosen due to the limited number of fluent speakers and the pressing need to provide access to content predominantly available in widely represented languages. We assess both model-dependent and model-agnostic active learning strategies for NMT between Spanish and Mapudungun in both directions, demonstrating that we can achieve over 40% reduction in manual translation workload in both cases.",
|
| 5 |
+
"1 Introduction": "Over the course of history, South America has been home to numerous indigenous cultures and languages (Campbell et al., 2012), reflecting the region’s rich linguistic diversity and heritage. Unfortunately, the dominance of the Spanish language in this region has threatened many indigenous languages, often leading to their decline or even extinction. This has resulted in an immeasurable cultural and historical loss for humanity, as language diversity vanishes (Ostler, 1999). Among the last remaining native languages is Mapudungun, spoken in Chile and Argentina by nearly 1.8 million people (Mapuches), but only 10% of them handle the language correctly and barely another 10% understand it. In the same spirit, the Conadi Indigenous Languages Program1 predicts that this\n1https://www.conadi.gob.cl/noticias/conadi-lanzoaplicaciones-y-realizara-cursos-online-de-mapuzungun-paraque-miles-de-indigenas-aprend\nlanguage will become extinct in a few generations, mainly due to the lack of individuals that can speak this language. Despite this, there are still groups within Chile that only speak Mapudungun, leaving them sometimes excluded from the rest of society. Furthermore, the social tension over the past few years has raised native indigenous people to the forefront of discussion, attracting high interest in the community to find ways to include them in society as equals. Unfortunately, the availability of human translators fluent in those languages is minimal, and no automated translators exist today supporting those languages. In this work, we present an active learning setting to improve the efficiency and efficacy of machine translation for low-resource languages, in this case, Mapudungun. In other words, we aim to reduce the effort made by human translators given that the quantity of people fluent in Mapudungun is scarce. Given this, the task of translating and reviewing large amounts of text is unattainable. One of the main tasks of active learning is choosing the appropriate data points (texts) to be translated by human translators to train a neural machine translation (NMT) model with as few examples as possible. To evaluate our approach, we utilized an open-source corpus from the AVENUE project (Levin et al., 2000) and supplemented it by scraping the web for Spanish-Mapudungun sentence pairs. We assembled a dataset of approximately 30,000 pairs, creating a comprehensive corpus for our research. We simulate an offline active learning setting to measure the amount of work that can be reduced by using different active learning strategies. The main contributions of this paper are: (1) Proposing active learning training strategies to reduce low-resource language speaker translators workload by more than 40%, (2) Finetuning a Mapudungun NMT model capable of obtaining competitive results and (3) Sharing our code for research reproducibility2.\n2https://github.com/OpenCENIA/al4mt\n6",
|
| 6 |
+
"Active learning": "Active learning is an effective machine learning training approach where the algorithm actively selects informative data to learn from, resulting in improved performance with fewer labeled instances (Settles, 2009). While initially applied to text classification, information retrieval, classification, and regression tasks (Tong and Koller, 2001; Zhang and Chen, 2002; Carvallo et al., 2020; Carvallo and Parra, 2019; Houlsby et al., 2011), active learning has recently been extended to tasks such as Named Entity Recognition, Text Summarization, and Machine Translation (Shen et al., 2017; Zhang and Fung, 2012; Zhao et al., 2020; Zhang et al., 2018). This study investigates unexplored potential of active learning in machine translation for untranslated examples in Mapudungun, a low-resource language.\nMachine translation for low-resource languages Efforts to overcome resource scarcity in lowresource language translation have proposed pretraining strategies for data generation and performance improvement. Methods include crosslingual language model pretraining on highresource languages data, then finetuning on lowresource languages (Zheng et al., 2021), multilingual sequence-to-sequence pretraining (Song et al., 2019; Xue et al., 2020; Liu et al., 2020), dictionary and monolingual data augmentation (Reid et al., 2021), and back-translation data augmentation (Sugiyama and Yoshinaga, 2019). However, these strategies lack human-in-the-loop components and don’t guarantee human approval of the model’s iterative translations under active learning.",
|
| 7 |
+
"Data selection in NMT": "The data selection problem in NMT has received attention from several authors. Some propose weighted sampling methods to improve performance and accelerate training (Van Der Wees et al., 2017; Wang et al., 2018a), while others focus on filtering noisy data (Wang et al., 2018b; Pham et al., 2018) or selecting domain-specific data for back-translation (Fadaee and Monz, 2018; Poncelas et al., 2023; Dou et al., 2020). Furthermore, Wang et al proposed a method to select relevant sentences from other languages to enhance lowresource NMT performance (Wang and Neubig, 2019). As in using data augmentation the task of\nselecting data for training a NMT model do not include a user in the feedback loop.",
|
| 8 |
+
"3 Methodology": "In this section we describe in detail the active learning framework proposed for NMT on low-resource languages and the type of active learning strategies depending if there is or not a machine learning model involved in the selection of examples for being labeled. In Figure 1, we show the active learning setting used in this work. In the first step, we initialize an NMT model, then given a monolingual corpus in Spanish and an active learning strategy, it chooses examples for being translated by an oracle to Mapudungun. After obtaining the translated sentences, we fine-tune the NMT model, update its parameters, and then use this updated version to select new sentences for labeling. We use four active learning strategies to select sentences for an oracle’s translation: entropy sampling, margin sampling, confidence sampling, and decay logarithm frequency. The strategies chosen are pertinent to both Spanish to Mapudungun and Mapudungun to Spanish translations in low-resource scenarios. They address key issues such as uncertainty, data diversity, and model reliance, thus optimizing translation models and aiding language preservation. The strategy’s reliance on the model varies; model-agnostic strategies don’t need it for selecting sentences, while model-related ones use its certainty level. The number of active learning iterations and oracle translation requests is userdetermined at the start of training.",
|
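As a sketch, the loop described above and in Figure 1 can be written generically; every name here is an illustrative placeholder rather than the released code:

```python
def active_learning(model, unlabeled, strategy, oracle, finetune,
                    iterations=10, batch_size=100):
    """Offline active learning loop as in Figure 1 (illustrative sketch).
    `strategy` scores a sentence given the model (lowest scores are
    selected first), `oracle` returns the gold Mapudungun translation,
    and `finetune` updates the NMT model on all labeled pairs so far."""
    labeled = []
    for _ in range(iterations):
        ranked = sorted(unlabeled, key=lambda s: strategy(model, s))
        batch, unlabeled = ranked[:batch_size], ranked[batch_size:]
        labeled += [(s, oracle(s)) for s in batch]   # oracle labels the batch
        model = finetune(model, labeled)             # update the NMT model
    return model
```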
| 9 |
+
"3.1 Model-related strategies": "These strategies use the model to choose the examples for being labeled and rely on the model’s confidence level in untranslated examples.",
|
| 10 |
+
"Entropy sampling": "In this strategy we consider entropy as a measure of uncertainty, where the higher entropy indicates higher uncertainty and more chaos. Therefore this strategy consists in sampling examples with higher average entropy given by equation 1.\n1\nm\nm∑\ni=1\nentropy(Pθ(.|x, ŷ<i) (1)",
|
| 11 |
+
"Minimum margin sampling": "This strategy calculates the average probability gap between the model’s most confident word (y∗i,1)\nand the second most confident word (y∗i,2). If the margin is small, the model cannot identify the best translation from an inferior one, so we sample sentences with a lower margin as shown in the equation 2.\n1\nm\nm∑\ni=1\n[Pθ(y ∗ i,1|x, ŷ<i)− Pθ(y∗i,2|x, ŷ<i)] (2)",
|
| 12 |
+
"Least Confidence sampling": "This strategy estimates the model uncertainty by averaging the predicted probability of each word the translator generates. We sample those sentences with a lower level of confidence to force the model to learn harder sentences, as shown in equation 3.\n1\nm\nm∑\ni=1\n[1− Pθ(ŷi|x, y<i)] (3)",
|
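All three model-related scores in Equations 1-3 can be computed from the per-step next-token distributions collected while decoding. A sketch with NumPy, where `probs` has shape (m, vocabulary) and each row is one distribution:

```python
import numpy as np

def entropy_score(probs):
    """Eq. 1: mean entropy of the m distributions (higher = more uncertain)."""
    return float((-(probs * np.log(probs + 1e-12)).sum(axis=1)).mean())

def margin_score(probs):
    """Eq. 2: mean gap between the two most probable tokens
    (lower = more uncertain)."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return float((top2[:, 1] - top2[:, 0]).mean())

def least_confidence_score(probs):
    """Eq. 3: mean (1 - probability of the decoded token), assuming
    greedy decoding so the chosen token is the row maximum."""
    return float((1.0 - probs.max(axis=1)).mean())
```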
| 13 |
+
"3.2 Model-agnostic strategy": "In this case, we use the decay logarithm frequency strategy (Zhao et al., 2020) that does not require a NMT model to choose examples for being labeled by an oracle. The intuition behind this strategy is to choose sentences different from the ones that have already been translated in terms of linguistic features.",
|
| 14 |
+
"Decay logarithm frequency": "We define two sets of sentences: U that are untranslated and L translated sentences on the current active learning iteration. In the first step, we define the logarithm frequency of a word w in U , namely F (w|U) shown in equations 4 and 5.\nG(w|U) = log(C(w|U) + 1) (4)\nF (w|U) = G(w|U)∑ w′∈U G(w′|U) (5)\nWhere C(w|.) measures the frequency of a word w in a given sentence set that can be U or L. Then we add a decay factor that favors the diversity of words and includes two hiper-parameters (λ1 and\nλ2) that allow giving more or less importance to words from the labeled (L) or the unlabeled sets (U ). Also, we normalize by dividing the obtained score over the sentence length (K).\nfy(s) =\nK∑ i=1 F (si|U)× e−λ1C(si|L)\nK (6)\nEquation 6 if used as threshold to obtain Û(s) that is the set of all sentences that have a higher lf score than s. In this way, we tend to discard repetitive sentences and filter out insignificant function words. The obtention of the final delfy score is shown in equations 7 and 8.\ndelfy(s) =\nK∑ i=1 F (si|U)×Decay(si)\nK (7)\nDecay(si) = e −λ1C(si|L) × e−λ2C(si|Û(s)) (8)",
|
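A direct (quadratic, purely illustrative) reading of Equations 4-8 could look like the sketch below; the data representation is our assumption, not the released implementation:

```python
import math
from collections import Counter

def delfy_scores(unlabeled, labeled, lam1=1.0, lam2=1.0):
    """Sketch of decay logarithm frequency (delfy) scoring, Eqs. 4-8;
    `unlabeled` and `labeled` are lists of token lists."""
    c_U = Counter(w for s in unlabeled for w in s)
    c_L = Counter(w for s in labeled for w in s)
    g = {w: math.log(c + 1) for w, c in c_U.items()}            # Eq. 4
    total = sum(g.values())
    f = {w: gv / total for w, gv in g.items()}                  # Eq. 5
    # lf score with decay over already-translated words (Eq. 6).
    lf = {tuple(s): sum(f[w] * math.exp(-lam1 * c_L[w]) for w in s) / len(s)
          for s in unlabeled}
    scores = []
    for s in unlabeled:
        # Û(s): word counts over sentences whose lf score exceeds that of s.
        u_hat = Counter(w for t in unlabeled
                        if lf[tuple(t)] > lf[tuple(s)] for w in t)
        scores.append(sum(f[w]
                          * math.exp(-lam1 * c_L[w])            # Eq. 8 ...
                          * math.exp(-lam2 * u_hat[w])
                          for w in s) / len(s))                 # Eq. 7
    return scores
```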
| 15 |
+
"4.1 Dataset, preprocessing and NMT model": "The dataset consists of 29,829 Spanish to Mapudungun sentence pairs considering only sentences length higher than five words, with 50,840 unique words in Spanish, 67,757 unique words in Mapudungun, and a vocabulary size of 118,597. We do not remove stopwords, lemmatization, or low-case texts, since we aim to capture both languages’ peculiarities, including punctuation and idioms. We used a MarianMT (Junczys-Dowmunt et al., 2018) translation model based on a transformer architecture consisting of 12 encoder layers, 16 encoder attention heads, 12 decoder layers, and 16 attention heads. For training on active learning, we use a learning rate of 0.0002 and a weight decay of 0.01. We train the necessary epochs in each active training round until the validation perplexity remains the same. λ1 and λ2 in the delfy are set to 1.0 each. For training on active learning, we\nfinetune a MarianMT translator from Spanish to Deutsch. Despite the apparent oddity of linking an Indo-European language, Deutsch with Mapudungun, our approach harnesses shared agglutinative traits to enhance translation.",
|
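As an illustration, initializing from a public Spanish-to-German MarianMT checkpoint through the transformers library could look like the following; the paper does not name the exact pretrained weights, so the checkpoint below is an assumption:

```python
from transformers import MarianMTModel, MarianTokenizer

# Assumed checkpoint: the public OPUS Spanish->German MarianMT model.
name = "Helsinki-NLP/opus-mt-es-de"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

# Smoke test before fine-tuning on Spanish-Mapudungun pairs:
batch = tokenizer(["¿Cómo estás?"], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch),
                             skip_special_tokens=True))
```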
| 16 |
+
"4.2 Active Learning for NMT": "Concerning the active learning setting, we run ten iterations using the 10% of the train set. For evaluating active learning strategies, we used the SacreBLEU3 library and evaluated the model’s outputs with BLEU (Papineni et al., 2002). As we run an offline experiment, we assume the oracle is continuously right, extracting the correct translation each time and adding those examples to the train set. In our offline experiment, we used existing labeled training data to eliminate the need for human annotators. Our goal was to assess which strategy efficiently utilizes a smaller data proportion, reducing manual translation effort while preserving model performance. This approach enables optimization of active learning strategies without added annotation costs.",
|
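Scoring a set of model outputs against gold references with the SacreBLEU library is a short exercise; the strings below are placeholders:

```python
import sacrebleu

hyps = ["the model translation"]          # system outputs (placeholders)
refs = [["the reference translation"]]    # one stream of gold references
bleu = sacrebleu.corpus_bleu(hyps, refs)
print(bleu.score)
```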
| 17 |
+
"4.3 Results": "The results of this study suggest that for Spanish to Mapudungun translation, the most effective active learning strategy is Delfy, which achieved a BLEU score of 65.45 when trained on 60% of the corpus. Margin and entropy sampling were also effective strategies, achieving BLEU scores of 62.92 and 62.72, respectively. For Mapudungun to Spanish translation, margin sampling was the most effective active learning strategy, achieving a BLEU\n3https://github.com/mjpost/sacrebleu\nscore of 59.378. Both settings showed benefits of training on active learning, with a reduction in the workload of approximately 40%. However, there is space for improvement in further reducing workload, as other studies on high-resource or well-represented languages have reduced over 80% (Zhao et al., 2020) of manual translation work. This work demonstrated significant progress in translating a low-resource language such as Mapudungun, with both active learning strategies outperforming the baseline strategy of random sampling.",
|
| 18 |
+
"5 Conclusion": "In conclusion, this study revealed that Delfy was the most effective active learning strategy for Spanish to Mapudungun translation, while margin sampling outperformed in Mapudungun to Spanish. In both cases, training with active learning strategies reduced workload by over 40%. Our comparative analysis, driven by the diverse approaches of the chosen strategies, identifies the most efficient methods for low-resource translation tasks. This research is crucial for languages particularly Mapudungun, as it fosters information access and reduces language barriers for indigenous communities. Future work will focus on designing active learning strategies specifically for low-resource languages.",
|
| 19 |
+
"Acknowledgements": "National Center for Artificial Intelligence CENIA FB210017, Basal ANID."
|
| 20 |
+
}
|
ACL_23_no_limitation/ACL23_1184.json
ADDED
|
@@ -0,0 +1,27 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1184",
|
| 3 |
+
"Title": "Codex to corpus: Exploring annotation and processing for an open and extensible machine-readable edition of the Florentine Codex",
|
| 4 |
+
"abstractText": "This paper describes an ongoing effort to create, from the original hand-written text, a machine-readable, linguistically-annotated, and easily-searchable corpus of the Nahuatl portion of the Florentine Codex, a 16th century Mesoamerican manuscript written in Nahuatl and Spanish. The Codex consists of 12 books and over 300,000 tokens. We describe the process of annotating 3 of these books, the steps of text preprocessing undertaken, our approach to efficient manual processing and annotation, and some of the challenges faced along the way. We also report on a set of experiments evaluating our ability to automate the text processing tasks to aid in the remaining annotation effort, and find the results promising despite the relatively low volume of training data. Finally, we briefly present a real use case from the humanities that would benefit from the searchable, linguistically annotated corpus we describe.",
|
| 5 |
+
"1 Introduction": "The Nahuatl language, an agglutinating and polysynthetic member of the Uto-Aztecan family spoken throughout Mexico by about 1.5 million people today, has a rich literary tradition (Gingerich, 1975; León-Portilla, 1985). With a strong preconquest oral tradition and a hieroglyphic writing system, Nahuatl speakers quickly adopted the Latin alphabet for writing their language after its introduction almost immediately after the Spanish invasion. As a result, the volume of the colonialera Nahuatl literary canon is unrivalled in Latin America (Olko and Sullivan, 2013). These texts are invaluable resources to scholars interested in the history, culture, and language of colonial and pre-invasion Nahua communities.\nPerhaps the most notable Nahuatl text of the early colonial period, the Historia General de las Cosas de Nueva España “General History of the Things of New Spain” (Florentine Codex, FC) is an encyclopaedic work in Nahuatl and Spanish\ncompiled by Indigenous scholars from the Colegio de Santa Cruz de Tlatelolco and Franciscan friar Bernardino de Sahagún.\nThe FC is undoubtedly one of the most valuable manuscripts of the early modern period. However, it was forgotten for centuries until Angelo Maria Bandini described it in 1793. He named it “Codice Fiorentino” after the Biblioteca Medicea Laurenziana in Florence, where it is still kept. But only at the beginning of the 20th century did Francisco del Paso y Troncoso bring it to a wider audience (Martínez, 1982). Charles Dibble and Arthur Anderson published a translation of the books into English throughout the second half of the 20th century. The original manuscript became available in the World Digital Library only ten years ago, thanks to the Library of Congress.\nThe impetus for the present project was the need of the third author, a humanities scholar, to search the text of the FC for specific linguistic constructions and terminology. This proposition is complicated by a number of factors:\nFirst, there are few fully digitised versions of the FC, and those that do exist are under copyright, constraining the ability of a scholar to reproduce, annotate, and/or re-release any part of the text that results from a given research endeavour.\nSecond, the FC, having multiple authors and being written in the early years of Nahuatl alphabetic writing, contains numerous orthographic inconsistencies throughout the 12 books, with many words written in multiple distinct ways and decisions about word tokenisation not being standardised. Furthermore, due to constraints on column width in the original manuscript, words are frequently split by line breaks with no indication of whether the following line continues the word from the end of the previous one. Keyword searching this text is a seemingly-futile process involving determining all possible spellings for a given word and all possible tokenisations of a single syntactic\n19\nword into multiple orthographic words. Finally, Nahuatl is a morphologically complex language with large amounts of inflection and derivation, making querying the surface/inflected form, instead of e.g., a lemma, particularly difficult.\nThe present project attempts to address these issues by creating an open-source, retokenised, and normalised corpus of the FC with queryable linguistic annotations following the Universal Dependencies framework (Nivre et al., 2020a). In the following sections, we describe the corpus, each component involved in its creation, and an investigation into automating the processing. 
We conclude by outlining a road map for the project’s completion and a vision of future applications.",
|
| 6 |
+
"2 Related work": "The FC has been the subject of a great deal of research in the humanities by scholars interested in the cultural beliefs and practices of the Nahua people during the early colonial period (Sullivan et al., 1966; Gingerich, 1988; Sigal, 2007; McDonough, 2020; Olivier, 2021). It has also served\nas a foundational component for work studying so-called “Classical Nahuatl,” or Nahuatl spoken during the period (Launey, 1986; Lockhart, 1992, 2001). Both Olko et al. (2015) and Olko (2018) leverage corpus-based approaches using a multitude of historical Nahuatl documents, but it is unclear how much linguistic information was available in the corpus, and to our knowledge, this corpus has not been released to the public.\nGutierrez-Vasques et al. (2016) released Axolotl, a large, Spanish-Nahuatl parallel corpus with a focus on machine translation. It includes Nahuatl from multiple variants and time periods, including the early colonial period, but does not include text from the FC. Furthermore, the text in Axolotl is unprocessed and unannotated.\nOther corpora that include Nahuatl texts include the Johns Hopkins University Bible Corpus (McCarthy et al., 2020), a parallel multilingual corpus that includes numerous contemporary Nahuatl variants. This corpus has been used to produce morphosyntactically-annotated resources for a large number of languages (Nicolai and Yarowsky, 2019; Nicolai et al., 2020).\nThe first open morphosyntactically-annotated corpus of Nahuatl was recently released by Pugh et al. (2022) and includes 10,000 tokens of the Western Sierra Puebla variety. Following this work, we also select UD as our annotation schema.\nMarc Eisinger was the first to publish a computerised version of the FC, which is not freely available (Eisinger, 1977). The Universidad Autónoma de México (UNAM) hosts a website, Temoa, containing a large volume of digitised colonial-era Nahuatl texts, with minimal processing (at the very least, tokenisation problems in the FC appear to be corrected (Universidad Nacional Autónoma de México, 2023). However, the copyright and rights to use for annotation and re-release are retained by UNAM,1 making it not possible to create derivative works, such as the annotated corpus described in this paper. Furthermore, the original text (before fixing tokenisation) is not available.\nRelated to the computational processing of colonial Mexican texts, The “Digging into colonial Mexico” project (Murrieta-Flores et al., 2022) involves the creation of a number of processed and machine-readable resources based on colonial Mexican documents, mostlywritten in colonial-era Mexican Spanish. As for colonial texts written in Mexican languages, the Ticha project (Broadwell et al., 2020), a collaboration between members of Zapotec-speaking communities and academics from universities in the United States of America, offers an “online digital text explorer” for colonial Zapotec texts and includes morphological analyses and translations.",
|
| 7 |
+
"3 Corpus": "Our corpus comes from a typed transcription upholding the original layout, published in the openaccess repository Zenodo2 to allow the semantic and computational study of the text from the primary source (de Sahagún, 2022). In Figure 1 we present a folio from the manuscript where the text in Spanish (left) and Nahuatl (right) is seen in two columns, and an example of the transcription output in our corpus.",
|
| 8 |
+
"3.1 Orthography": "There is a great deal of orthographic variation in the FC, in both the Nahuatl and Spanish sections, with multiple characters used inconsistently\n1https://temoa.iib.unam.mx/creditos 2https://zenodo.org/\nthroughout. For example, the letter [v] can represent either /w/, e.g. veue /wewe/ ‘big’ (norm. huehue), or a long /o:/, e.g. vmpa /o:mpa/ ‘there’ (norm. ompa). [j] is used both for the vowel /i/ e.g., jnpilhoan /inpilwa:n/ ‘their (pl) children’ (norm. inpilhuan) and the glide /j/, e.g. jollochicaoac /jol:otSika:wak/ ‘brave’ (norm. yollochicahuac). The letter [i] is also observed in both of these contexts.\nThere are also instances where a single sound, e.g. /S/ can be represented by multiple letters, in this case [x] or [s]. For example, the word axcan /a:Ska:n/ ‘now, today’ can appear as ascan or axcan. But [s] can also be the voiceless alveolar sibilant /s/ in loan words from Spanish visorrej /bisorei/ ‘viceroy’ (norm. visorrey).",
|
| 9 |
+
"4 Processing": "A major theme of the processing of the FC is the use of initial detailed hand-annotation in order to bootstrap automated approaches for the remaining text. Crucially, the resulting corpus should be usable for academic research and, as such, must maintain the utmost quality. In this context, then, we consider automation a strategy to assist in human annotation, but still require manual auditing of the entirety of the annotated corpus.",
|
| 10 |
+
"4.1 Sentence segmentation": "Full stops (or in dialogue, exclamation marks, and question marks) are used as sentence boundaries throughout the corpus, with the colon symbol often used to separate clauses, making sentence segmentation fairly straightforward. There are a number of abbreviations, such as xpo. for Christ and p. for Pedro. Table 5 presents the size of each book in terms of sentences, space-separated tokens, and words. Words are only given for the three books we have processed so far.",
|
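A naive segmenter following this description (split on sentence-final punctuation, skip the listed abbreviations) can be sketched as follows; the function is illustrative, not the project's actual segmenter, and the sample text is a placeholder:

```python
import re

ABBREVIATIONS = {"xpo.", "p."}  # abbreviations named in the text

def segment(text):
    """Split on . ! ? unless the boundary closes a known abbreviation."""
    sentences, start = [], 0
    for match in re.finditer(r"[.!?]", text):
        end = match.end()
        words = text[start:end].split()
        if words and words[-1].lower() in ABBREVIATIONS:
            continue  # e.g. "xpo." is not a sentence boundary
        sentences.append(text[start:end].strip())
        start = end
    if text[start:].strip():
        sentences.append(text[start:].strip())
    return sentences

print(segment("He spoke to p. Pedro. Then he left."))
# -> ['He spoke to p. Pedro.', 'Then he left.']
```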
| 11 |
+
"4.2 Retokenisation": "There are a number of tokenisation inconsistencies in the original manuscript, resulting from (1) physical constraints, namely the author running out of room on one line and splitting a word across a line boundary (see Figure 1), (2) inconsistent tokenisation practices by the authors, such as sometimes writing the article subordinator in and an adjacent verb together as a single orthographic word, and (3) possible mistakes introduced during the process of manually typing up the manuscript.\nOur first step in processing the codex, after obtaining text files transcribed from the original manuscript, involves “retokenisation”: altering the word boundaries in the text to align them with canonical Nahuatl words.3 An example of the input and output of this process is shown in Table 1, wherein a space is represented by the mid-dot character, ·, and newline is represented by the pilcrow character, ¶.\nAs with the rest of the processing steps, retokenisation starts as a manual process. For each identified case where retokenisation is necessary, we use the left and right contexts to write a rule for handling that case, ensuring that the contexts are large enough to avoid potential ambiguities (for instance, a minimal-context rule such as “n·c →nc” will likely produce many false positive matches). In the event that a rule produces false positives, we expand its contexts (e.g., “qujn·caoa →qujncaoa”). We use a left-to-right longest-match (LRLM) algorithm to apply the approximately 4,000 retokenisation rules.",
|
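A minimal sketch of left-to-right longest-match rule application, assuming the rules are stored as a plain mapping from contextualized source spans to their corrected forms (the project's real rule format may differ):

```python
def retokenise(text, rules):
    """Apply retokenisation rules left-to-right with longest match.
    `rules` maps a source span, with enough context to be unambiguous,
    to its corrected form."""
    patterns = sorted(rules, key=len, reverse=True)  # try longest match first
    out, i = [], 0
    while i < len(text):
        for pat in patterns:
            if text.startswith(pat, i):
                out.append(rules[pat])
                i += len(pat)
                break
        else:
            out.append(text[i])  # no rule fires: copy one character
            i += 1
    return "".join(out)

print(retokenise("iehoatl qujn caoa", {"qujn caoa": "qujncaoa"}))
# -> iehoatl qujncaoa
```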
| 12 |
+
"4.3 Normalisation": "Once the text is correctly tokenised, the next processing step is orthographic normalisation. We use the ACK (Andrews, Campbell, Karttunen) orthographic standard for the target orthography, since it is designed to reflect colonial-era Nahuatl writing (Campbell and Karttunen, 1989; Andrews, 2003; Karttunen, 1992).\nFor Spanish words we use contemporary orthography, so for example, gouernadores is normalised to gobernadores ‘governors.’\nFor proper nouns, we also use modern orthographic conventions where available. For example, tlatilulco is normalised to Tlatelolco, and motecu-\n3Following authoritative resources like Andrews (2003) and Campbell and Karttunen (1989) in identifying “canonical words”, which should include subject, object, and aspectual affixes.\ncoma is normalised to Moctezuma. The process uses a hand-curated dictionary mapping original word forms to their normalised counterparts (e.g. the normalised form yaoyotl ‘war’ is written variably as iaoiotl, iauiotl, iaviotl, iaujutl and iaujotl. Thus, our dictionary has an entry for each of these forms mapping to the normalised form). To build the dictionary, we start with a naïve finite-state transducer (FST) model designed using general patterns of colonial-era Nahuatl writing. We then post-edit the output of the FST, adding all correct word pairs to the dictionary. We update the FST weights as we add forms to the dictionary to improve its performance. After processing three books, the dictionary contains 6,515 entries.\nThe main motivation for performing the normalisation manually is to ensure a high-quality data set with which to train a model for automating the process. We discuss the evaluation of such an approach in §6.2.",
|
| 13 |
+
"4.4 Part-of-speech tagging": "The part-of-speech tags are based on the Universal Part-of-Speech categories (UPOS) defined and used in the Universal Dependencies framework (Nivre et al., 2020b).\nWe accomplish part-of-speech tagging in three steps. We use a lexicon, a morphological analyser (see §4.5) and a set of ordered, regular-expressionbased guessing rules applied to the normalised form, in sequence. We refer to this last component as ‘the guesser.’\nThe lexicon is simply a list of normalised surface forms and their part of speech. Of the 10,959 types presently annotated for part-of-speech, 1,478 (6,916 tokens) received their POS from the lexicon.\nIn the event that a given surface form is not observed in the lexicon, we next run the word through the morphological analyser. This method accounts for 13,762 of the tokens thus far annotated (1,705 types).\nFinally, any word not identified in the previous two steps is passed to the guesser. The guesser consists of 36 rules which use regular expressions to look for particular prefixes and suffixes and assign part-of-speech tags with high precision. For example, words beginning with nimitz-, a combination of the first person subject marker and second person object marker are categorised as verbs, and words ending in -tzitzin, which is the plural reverential marker, are categorised as nouns. These rules are high precision, but low recall: a total of 986 forms out of 10,959 forms (1,471 tokens) in the three processed books receive guessed analyses.\nWe randomly sampled and manually checked 200 of these guesses and found that 198 were correct. In one case the mistake was due to a mistaken normalisation (iehoatin → *yehuatin instead of yehhuantin ‘they, them’), which resulted in the word being tagged as a noun due to the -tin ‘PL’ ending (plural). The second casewas to dowith the same plural rule, which resulted in the word xixitin ‘it crumbled’ (from the verb xixintini ‘to crumble’) being tagged as a noun.",
|
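Two of the guesser's rules, re-expressed as a Python sketch; the patterns are illustrative simplifications of the actual 36 rules, and the example words are hypothetical:

```python
import re
from typing import Optional

GUESSER_RULES = [
    (re.compile(r"^nimitz"), "VERB"),   # 1sg subject + 2sg object prefix
    (re.compile(r"tzitzin$"), "NOUN"),  # plural reverential suffix
]

def guess_pos(normalised_form: str) -> Optional[str]:
    """Return the tag of the first matching rule, or None."""
    for pattern, tag in GUESSER_RULES:
        if pattern.search(normalised_form):
            return tag
    return None

print(guess_pos("nimitztlazohtla"))  # VERB
print(guess_pos("pipiltzitzin"))     # NOUN
```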
| 14 |
+
"4.5 Morphological analysis": "Morphological analysis is the task of producing, for a given surface form, a lemma and a set of morphosyntactic tags describing that form. For example, given the form tictlamacazque /ti-c-tlamaca-z-que/ ‘We will give something to him’ (or ‘We will make offerings to him’) it would produce,\n<s_pl1><i_sg3><o_nn3>maca<v><dv><fut>\nWhere <s_pl1> stands for 1st person plural subject, <i_sg3> stands for 3rd person singular secondary object, <o_nn3> stands for 3rd person inanimate indefinite object, <v> stands for verb, <dv> stands for ditransitive and <fut> stands for future. Note that there is a long distance dependency between the prefix ti-, which can be 2nd person singular or 1st person plural and the suffix -que which marks a plural subject.\nA given token can produce more than one analysis, so for example, quinchihua ‘They made them’ or ‘He made them’ produces,\n<s_pl3><o_pl3>chihua<v><tv><pres>\n<s_sg3><o_pl3>chihua<v><tv><pres>\nIn this case, because of underspecification in the orthography, the plural subject-marking suffix -h\nis not written, resulting in an ambiguous analysis. The omission of this suffix is quite common in Nahuatl texts.\nFor implementing the morphological analyser we used the Helsinki Finite-State Toolkit (HFST) (Lindén et al., 2009). The analyser was implemented over the normalised forms. Morphotactics and the lexicon were implemented using lexc, while any morphographemic constraints were implemented with twol. A given surface form, for example, omoyollochichili ‘He strove strongly’ (lit. “he waited for himself on behalf of the heart”), consists of three parts, the surface form (1), the morphotactic form (2) and the lexical form/analysis (3).\n1. omoyollochichili 2. o>mo>«yollo»chichi>lia 3. <aug><s_sg3><o_ref>«yollotl<n>»\nchichilia<v><tv><past>\nThemorphotactic form is the combination of the morphs beforemorphographemic rules are applied, it includes symbols to mark segment boundaries, such as ‘>’ for an inflectional boundary, ‘«...»’ for incorporated elements (in this case, the second object), ~ for reduplication and ‘·’ for clitic boundaries. The symbols around the incorporated element allow that part of the surface form to be extracted for use in the representation of incorporation (see §5.1).",
|
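For readers who want to query such an analyser themselves, a compiled HFST transducer can be fed words through the hfst-lookup command-line tool; the analyser file name below is hypothetical, not a file shipped by the project:

```python
import subprocess

def analyse(word, analyser="fc-nahuatl.analyser.hfst"):
    """Pass one word to hfst-lookup and return its raw analyses
    (the analyser path is a placeholder)."""
    result = subprocess.run(["hfst-lookup", analyser],
                            input=word + "\n",
                            capture_output=True, text=True, check=True)
    return result.stdout

print(analyse("omoyollochichili"))
```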
| 15 |
+
"5 Representations": "In this section we discuss a number of features of Nahuatl that require special attention in the Universal Dependencies framework.",
|
| 16 |
+
"5.1 Incorporation": "Incorporation is the process by which a verb can incorporate, that is, be syntactically incorporated with one or more of its arguments or adjuncts. Incorporation has been understudied in the field of natural language processing, and there are few articles that describe annotation projects for languages exhibiting this feature.\nIn this project, we follow the proposal laid out by Tyers and Mishchenkova (2020) in which incorporated items are exposed in the enhanced dependency graph annotated with the relation of the slot that they fulfill in the argument structure.\nTable 2 demonstrates this with the verb moyollochichili, where the verb chichilia ‘enbitter’ takes the incorporated object yollo- ‘heart.’",
|
| 17 |
+
"5.2 Relational nouns": "Relational nouns are nouns which express spatial and temporal relations when used with other noun phrases. These may be used as independent words in a possessive structure (1) or compounded to other words (2).\n1. inepantla in ilhuicatl ‘in the midst of the heavens’ (lit. its-midst the heaven)\n2. ilhuicayollotitech ‘in the heart of the heavens’ (lit. heavens-heart-on)\nThe first case is straightforward, each noun is analysed as a separate word, with the relational noun receiving a lexical feature NounType=Relat in addition to the necessary possessive morphology.\nIn the second, we take advantage of the multitoken word encoding in the CoNLL-U format and analyse the compound as consisting of two parts, the head and the compounded relative noun.",
|
| 18 |
+
"5.3 Lemmas": "We also include the lemmas, or the stems, for each word. Lemmas ignore any of the inflectional morphology on the surface form of the word. Lemmatisation is performed first by looking up a surface\nform in the lexicon and, if the word is not in the lexicon, by the morphological analyser.",
|
| 19 |
+
"6 Automated processing": "We experiment with the existing processed FC data to see to what extent we might be able to automate the retokenisation and normalisation steps. Following previous work showing that historical text normalisation can be modelled effectively as a character-based machine translation problem (Bollmann, 2019), we train an encoder-decoder Seq2Seq model with Attention on character sequences for both tasks. While a natural inclination would be to train both retokenisation and spelling normalisation jointly, we are interested in storing each intermediate step for potential future research, and so train a separate model for each task.\nFor the orthography normalisation model, we treat each word as a training instance, and map the unnormalised word (e.g. qujchioa) to its corresponding normalised form (e.g. quichihua).\nFor the retokenisation model, training on each word would not work since the phenomenon we are modelling spans word boundaries. Instead, we split the text on unambiguous punctuation (‘.,:;?!’), creating numerous subsequences from each sentence.\nSince the objective is to evaluate how well we could automate the text processing for future books, we used two of the three already-complete books (Books 1 and 8) for training, and held out\nBook 5 for evaluation. The models used a bidirectional LSTM encoder, and training was done using OpenNMT (Klein et al., 2020). We trained both for 100 epochs.\nResults of the experiments are listed in Table 3. They are generally favourable, though perhaps not quite to the point of being able to completely automate the low-level processing of the remaining books.",
|
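Character-based translation with a standard toolkit such as OpenNMT is typically set up by representing each word as a space-separated character sequence, so that every character is treated as one token; a sketch of that data preparation:

```python
def to_char_seq(word):
    """Space-separate the characters so the MT toolkit sees each
    character as one token."""
    return " ".join(word)

# One training pair per word for the normalisation model:
print(to_char_seq("qujchioa"))   # q u j c h i o a   (source side)
print(to_char_seq("quichihua"))  # q u i c h i h u a (target side)
```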
| 20 |
+
"6.1 Retokenisation": "A number of the mistakes we see from the retokenisation model involve a type of ‘hallucinations,’ where the output contains characters not in the input. This is an effect of treating this problem as one of translation with a relatively low volume of training data. To remedy this problem, we may try adding an additional auto-encoding or “copying” auxiliary task as discussed in Mager et al. (2019), wherein we add training examples that are already correctly tokenised in order to provide more examples of correct outputs.\nAlternatively, the task of retokenisation can be straightforwardly modelled as a one-to-one sequence tagging problem, where for each input character the model must assign one of three “retokenisation actions”: (1) merge, or remove a token boundary that follows the current character, (2) split, or add a token boundary after the current character, or (3) do nothing. For comparison, we also evaluate this approach, using a bidirectional LSTM also trained for 100 epochs.4 This approach has a slightly worse word error rate compared to the MTbased approach, but has a lower character error rate. The advantage to this approach is that we don’t risk transforming characters or inserting substrings during the tokenisation step.",
|
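Applying the three per-character actions to recover the retokenised string is then a simple pass over the input; a sketch, with action names of our own choosing rather than the paper's:

```python
def apply_actions(chars, actions):
    """Apply per-character retokenisation actions: 'merge' drops the
    token boundary after the character, 'split' inserts one, 'keep'
    does nothing."""
    out, drop_next_space = [], False
    for ch, act in zip(chars, actions):
        if drop_next_space and ch == " ":
            drop_next_space = False
            continue
        out.append(ch)
        if act == "merge":
            drop_next_space = True
        elif act == "split":
            out.append(" ")
    return "".join(out)

# Merge the spurious boundary after the 4th character:
print(apply_actions("iquj chioa", ["keep"] * 3 + ["merge"] + ["keep"] * 6))
# -> iqujchioa
```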
| 21 |
+
"6.2 Orthographic normalisation": "The orthographic normalisation model correctly normalises 87% of the words in the held out book. The errors suggest a similar issue seen in the retokenisation model, namely the insertion of multiple additional characters not corresponding to the input (e.g. converting input ie, to *yeyecye instead of ye). This issue, as mentioned above, would likely be alleviated with some data aug-\n4Given our limited data volume and the interest to simulate testing on an unseen book, the results we report here do not include a hyper-parameter tuning step using a heldout development set. With an additional held out book we could tune these models’ hyperparameters and improve performance.\nmentation and/or multi-task training to ensure the model sees enough examples of properly formed output strings. We plan to leverage this model as a backup in the case where we are not able to identify a normalisation via our dictionary-lookup approach. For example, by first checking if we have seen a given word in the training data and, if so, using the corresponding output from training and using the model’s prediction on unseen words only, the word error rate drops to 8.3.",
|
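The dictionary-first backoff described above reduces to a few lines; `predict` stands in for decoding the trained character-level model and is a placeholder here:

```python
def normalise(word, seen, predict):
    """Dictionary-first normalisation: reuse the mapping observed in the
    training data when available, and fall back to the seq2seq model's
    prediction (the `predict` callable) only for unseen words."""
    if word in seen:
        return seen[word]
    return predict(word)

seen = {"qujchioa": "quichihua"}
identity = lambda w: w  # stub standing in for the trained model
print(normalise("qujchioa", seen, identity))  # quichihua (from the dictionary)
print(normalise("iaujotl", seen, identity))   # falls back to the model
```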
| 22 |
+
"7 Use cases": "In this section, we provide descriptions of a few research questions that could be informed by our corpus. The use cases are based on information that is available in the corpus and is not found in other editions of the manuscript.\nThe first use case concerns the status of the tlamatinimeh ‘sages, wise men’ (lit. those who know things). It is widely claimed that there is no philosophy outsideWestern philosophy (Maffie, 2014), but this claim has been contested by scholars, starting from Ángel María Garibay and his student Miguel León-Portilla who identify the tlamatinimeh with philosophers and argue that the precontactMexicans had long philosophical traditions (León-Portilla, 1956). Analysing individual words has since this work been the basis of understanding Nahua thought. However, to date this process is difficult and error-prone as it involves carefully reading through unannotated concordances of surface forms and it is easy to miss examples that appear in forms that are unknown or unfamiliar to the researcher.\nOur corpus will be able to help by allowing scholars to extract examples that are morphologically and syntactically related. For example, by allowing queries based on lemmas (encompassing for example tlamatini ‘sage’, tlamatinimeh ‘sages’, etc.) It will also allow for searching for specific syntactic constructions, such as those where a tlamatini is the subject of a speech verb.\nThe second use case concerns concepts of time. For instance, consider Maffie (2014)’s statement about the Mexica conceiving time and space as a single unit. He argues that the Mexica did not separate time and space but had a perception of a “time-place” . This argument is based on the word cahuitl, which means ‘time’, and the intransitive verb cahui, which means “to stay or end”. However, Maffie argues that cahuitl is also related to\nthe transitive verb cahua, which means “to leave or abandon”, among other senses.\nThe morphological annotations (including lemmas) in our corpus will allow for searching on lemma (to be able to distinguish forms of cahui from forms of cahua). And the syntactic annotations will allow for the extraction of time and place obliques that are dependents of those two verbs.",
|
| 23 |
+
"8 Concluding remarks": "We have outlined the strategies and approaches involved in creating a free and open, linguisticallyannotated corpus of the FC. Having nearly completed retokenisation, orthographic normalisation, lemmatisation, part-of-speech tagging, and morphological analysis for 3 of the 12 books, we have established the key linguistic information to include in the corpus, and have engineered the foundations of the annotation process. Results of our preliminary experiments into automatic annotation suggest that some tasks, like orthographic normalisation, can largely be automated with the existing data, whereas others, e.g., retokenisation, likely still require more labelled data and/or a more powerful architecture.",
|
| 24 |
+
"8.1 Future work": "Our first priority for the future is to continue the annotation process, automating some of the text normalisation, expanding the lexica, and enhancing\nthe morphological analyser. We are optimistic that with each subsequent book, the additional amount of available annotated data will enable faster future annotation via automation. Finally, adding dependency syntax annotations will enable quantitative analysis of colonial Nahuatl syntax, a field with relatively little prior work.\nThe study described in §7 is one of many potential uses of an annotated corpus as described here. We expect that the release of this corpus with complete morphosyntactic annotations and an unambiguous free licence will promote future research from scholars in a variety of fields.\nAdditionally, the tools for automatic processing of the FC will likely be applicable to the numerous additional texts written in Nahuatl during the colonial period, contributing to the advancement of language technology development for Nahuatl.\nFinally, another important project related to the development of this corpus involves the translation of the FC into contemporary Nahuatl variants, making the rich cultural heritage of theNahuatl language more accessible to Nahuatl-speaking communities. It is our hope that the production of this corpus can aid in the translation process.",
|
| 25 |
+
"Acknowledgements": "We would like to thank Maira Cayetano Nemecio and Stephanie Berthoud Frías for their valuable contributions. We are grateful to Mitsuya Sasaki and Joe Campbell for fielding numerous questions about language use in the Florentine Codex, and to the anonymous reviewers for their helpful feedback. Finally, a special thanks to Daniel Swanson, AndrewDavis, Zack Leech, andMaria Lucero Guillen Puon, for stimulating discussions about Nahuatl and the Florentine Codex.",
|
| 26 |
+
"A Books of the Florentine Codex": "Table 5 presents some statistics about the books of the Florentine Codex."
|
| 27 |
+
}
|
ACL_23_no_limitation/ACL23_1185.json
ADDED
|
@@ -0,0 +1,24 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1185",
|
| 3 |
+
"Title": "Developing finite-state language technology for Maya",
|
| 4 |
+
"abstractText": "We describe a suite of finite-state language technologies for Maya, a Mayan language spoken in Mexico. At the core is a computational model of Maya morphology and phonology using a finite-state transducer.1 This model results in a morphological analyzer and a morphologically-informed spell-checker. All of these technologies are designed for use as both a pedagogical reading/writing aid for L2 learners and as a general language processing tool capable of supporting much of the natural variation in written Maya. We discuss the relevant features of Maya morphosyntax and orthography, and then outline the implementation details of the analyzer. To conclude, we present a longer-term vision for these tools and their use by both native speakers and learners.",
|
| 5 |
+
"1 Introduction": "Maya2 is a member of the Yucatecan branch of the Mayan language family (Figure 23). It is the second most widely-spoken indigenous language of Mexico, with around 800,000 speakers primarily in the states of Yucatan, Quintana Roo, and Campeche in southern Mexico (Collin, 2010) (See\nFigure 14), including a substantial speaker population in California (Mattiace and de Mola, 2015) and a modest population in Belize.\n1https://github.com/apertium/apertium-yua 2We follow the recommendation of the Open School or Ethnography and Anthropology and the Community Institute of Transcultural Change (see §1.1) with respect to terminology, using the term “Maya,” the autonym of the Mayaspeaking people, when referring to the language or cultural/ethnic group, instead of “Yucatec Maya,” commonly used by linguists, or “Mayan”, which should be reserved for referring to the language family or proto-Mayan (Castañeda and Dzidz Yam, 2014).\n3Figure 2 was created by user Madman2001 (https://commons.wikimedia.org/wiki/File: Mayan_Language_Tree.svg)\n4Figure 1 is based on work by user Kmusser (https://commons.wikimedia.org/wiki/File: Mexico_States_blank_map.svg)\nText-based language technologies, ubiquitous for a small number of “mainstream”, mostly colonial languages such as English or Spanish, facilitate human-computer interaction and to a large extent computer-mediated communication, and can aid in language learning (Shadiev and Yang, 2020). Furthermore, language technology for endangered languages can play a useful role in language maintenance and revitalization efforts (Reyhner, 1999; Ben Slimane, 2008; Zhang et al., 2022). Unfortunately, there is a paucity of such technology for most of the world’s languages, leaving speakers and language learners without potentially valuable resources. Consequently, monolingual speakers face additional barriers to entry in the digital domain, and speakers who are bilingual in a dominant, colonial language for which such technology exists will be more likely to use that language online and on digital devices, further contributing to language shift.\nThis paper outlines the design and implementation of a finite-state morphological analyzer for Maya. Developed in concert with Maya language educators, the analyzer is intended for use as a writing tool for authors, educators, and students\n30\n(to ensure consistent written resources via a spellchecker), and as a reading-aid that can provide students with lexical information (e.g. the root and/or grammatical features) about an unknown word in a text. We focus primarily on the grammar of Maya and the implementation of the analyzer, and present a prototype of a working spell-checker.",
|
| 6 |
+
"1.1 Motivation and OSEA-CITE": "The motivation for the present work stems from a collaboration with the Open School of Ethnography and Anthropology and the Community Institute of Transcultural Change (OSEA-CITE, henceforth OSEA), a Pisté-based organization whose stated focus is “language revitalization, sustainability, cultural ownership, heritage rights, community health and well-being, the innovation of tradition, and the interconnections between local, national, and transnational communities and social forces.” While designed with Maya speakers, learners, linguists, and language activists in mind, the technologies described below are particularly informed by and aligned with OSEA pedagogical materials (Castañeda, 2014) for use in the classroom as reading and writing tools for both learners and educators in OSEA programs.",
|
| 7 |
+
"2 Related work": "The use of finite-state transducers (FSTs) for modeling human language has a long tradition spanning multiple decades (Kornai, 1996) and proving effective in areas such as morphological analysis (Beesley and Karttunen, 2003), spell-checking\n(Pirinen et al., 2014), among others. It is particularly attractive in the low-resource case since it requires significantly less data than popular statistical approaches. Furthermore, finite-state systems can also be leveraged in order to generate data to train better statistical machine-learning models (Moeller et al., 2018).\nThe application of finite-state language technology to indigenous languages of Mesoamerica also has some precedent, with morphological analyzers developed for Nahuatl (Maxwell and Amith, 2005; Pugh and Tyers, 2021; Tona et al., 2023), Zapotec (Washington et al., 2021), Huave (Tyers and Castro, 2023), and K’iche’ (Richardson and Tyers, 2021). Nicolai et al. (2020) present the large-scale development of morphological analyzers and generators for over one thousand languages using the Johns Hopkins University Bible Corpus (McCarthy et al., 2020), including some Mayan languages.\nKuhn and Mateo-Toledo (2004) is perhaps one of the earliest published works focused on the development and application of language technology to assist in documenting a Mayan language, Q’anjob’al (spoken in Guatemala), training a maximum-entropy part-of-speech tagger. Palmer (2009) and Palmer et al. (2010) also apply techniques from machine learning and computational linguistics to the documentation of a Mayan language (Uspanteko, also spoken in Guatemala). More recently, Tyers and Henderson (2021) and Tyers and Howell (2021) developed an annotated linguistic corpus of K’iche’ and explored approaches to automated tagging and parsing. Maya is also included as one of six Mexican languages aligned with Spanish in the Parallel Corpus for Mexican Languages (Sierra Martínez et al., 2020).\nThere has also been interest and some work leveraging computational technology to annotate and analyze Classic Maya heiroglyphic writing (Prager et al., 2018; Vertan and Prager, 2022).\nParticularly relevant to motivation and aims of the present project, Gasser (2011), outlines useful applications of computational morphological analyzers for learners of morphologically-rich indigenous languages of the Americas.",
|
| 8 |
+
"3 Orthography": "The Latin alphabet has been used to write Yucatec Maya since the 16th century, but the first organized efforts to standardize the orthography took place in the mid-20th century (Brody, 2004). The colonial-\nera writing practices are described thoroughly in Shigeto (2011), and a variant of this orthographic approach is also used in Bolles and Bolles (2001). Many linguistic resources for Maya also use an orthography inspired by theAmericanist Phonetic Alphabet (Bricker et al., 1998; Blair and VermontSalas, 1965), (e.g. using P for the glottal stop). Today, the commonly (though by no means unanimously) adopted “contemporary orthography” is laid out in the publication Normas de Escritura Para la Lengua Maya (SEP & INALI, 2014).\nIn the classroom, OSEA teaches a writing system similar to the contemporary one, with a few pedagogically-motivated changes, like the explicit marking of low tone on long low vowels. Additionally, there are some differences related to the spelling of specific words. In order to offer students a consistent source for spelling questions (primarily with respect to vowel quantity and tone), OSEA uses Bricker et al. (1998) as an authoritative reference. This is not to say that alternative spellings are incorrect from OSEA’s perspective, but rather that it is valuable for students to have a thorough and consistent guide to reference when making spelling decisions5.\nSince the project presented here is intended to be used by students and teachers in the OSEA Maya language program, we follow these orthographic norms while still supporting both the colonial and contemporary orthographies. Details about this are provided in section 6.5.",
|
| 9 |
+
"4 A brief overview of Maya morphosyntax": "An important linguistic property of Maya worth mentioning at the outset is that it does not have tenses, per se. Instead, it inflects verbs for aspect to reflect whether a given action has been completed, or how long ago it began (Bricker et al., 1998). Details about this system are explored in greater depth in section 4.2.\nMaya is a split-ergative language, i.e. it follows ergative-absolutive alignment in all but the imperfective aspect, where it follows nominativeaccusative alignment.\nAs will become obvious in the discussion below, Maya has a complex derivational system. Most word classes can be derived from other word\n5It should be noted that our implementation is also flexible and can be easily-updated to be applied to other writing conventions and pedagogical environments.\nclasses, and the transitivity and voice of a verb is derived morphologically as well.",
|
| 10 |
+
"4.1 Pronouns": "Maya has three sets of pronouns: one set (the “independent pronouns”) is syntactically independent of verbs while two, called “Dependent Pronouns” are affixes or clitics on the verb.\nIndependent pronouns, as the name suggests, are independent words (e.g. not affixes or clitics). They may be used to emphasize (Example 1) or topicalize (Example 2) a verbal argument, or after prepositions to express indirect objects.\n(1) k’abéet OBLIG a S2 bin-e’ex go-S2PL te’ex PRON2PL\n‘You all (emph.) must go’.\n(2) te’ex-e PRON2PL-TOP k’abéet OBLIG a S2 bin-e’ex go-S2PL\n‘As for you all, you must go’.\nSet A pronouns (a in examples 1 and 2) which come before the verb, typically written separated from the verb, and are sometimes written as merged or contracted with a preceding aspectual auxiliary. With respect to case, Set A pronouns correspond to the A argument (as defined in Dixon andDixon (1994)) except when in the imperfective, in which case they are the subject of both transitive and intransitive verbs, except in copular clauses where a Set B pronoun is used to mark the subject. Set A pronouns are also the possessive pronouns.\nSet B pronouns are suffixes used to express the S and O arguments of the verb, i.e. the subject of an intransitive verb and the object of a transitive verb, except in the imperfective. They are also used as the subject in copular clauses.",
|
| 11 |
+
"4.2 Verbs": "Verbs are by far the most morphologically complex words in Maya. The specific components of the “verb compound” depend on the verb’s transitivity and the aspectual class of the conjugation. The aspectual auxiliaries and Set A pronouns are often written as separate orthographic words from the verb itself.\nIn the imperfective, verbs typically must be preceded by an aspectual auxiliary followed by a Set A pronoun. For example, k (habitual), táan (progressive aspect), laili’...e’ (“still doing X”), etc. Note that some of these auxiliaries, such as laili’ above,\nhave a corresponding terminal enclitic that is attached to the end of the verb (Example 3). The aspectual auxiliaries often combine with the adjacent Set A pronoun to form a contraction, e.g. táan+in →tin.\n(3) laili’ still u S3 xòok-o’ob-e’ study.APS-3PL-CONT.\n‘They (pl.) are still studying’.\nThere are three important features of verbs that determine how they are inflected: transitivity, the derivational processes undergone to achieve that transitivity (e.g. is the verb a transitive root or an intransitive/nominal/adjectival root that has become transitive via derivation), and voice (Maya has four distinct voice categories: active, passive, antipassive, and middle).\nIntransitive verb stems often take one of a set of aspectual “status” suffixes6 depending on the as-\n6Bohnemeyer (1998), Brody (2004), and others have re-\npect and/or mood: -Vl suffix in the imperfective, where V matches the vowel in the root, a null suffix in the perfective, -a’an in the present perfect, and -Vk in the subjunctive.\nTransitive verb stems in the active voice take aspectual status suffixes -ik, -ah, and -mah in the imperfective, perfective, and present perfect aspects, respectively. In the subjunctive mood, no suffix is added, unless the verb is phrase final, in which case it takes -eh.\nThe majority of root transitive verbs follow a CVC phonological template, which changes systematically to produce changes in voice: CV̀VC for\nferred to these suffixes as “status suffixes”, and they go by various other names in the literature. In the OSEA-CITE pedagogical literature, these suffixes are referred to as “primary suffixes”. We use the term “status” in this paper for the sake of consistency with previous linguistic work.\nantipassive, CV’VC for passive, and CV́VC for the middle voice. The status suffixes for these verbs are listed in Table 5. Transitive verbs can become reflexive with the addition of a suffix of the formula ‘Set A + bah’ (Example 4).\n(4) táan PROG in S1SG.A wil-ik-in-bah see-STATUS-S1SG.A-REFL\n‘I am seeing myself.’\nIntransitive roots can be transitivized with either the -t suffix or the causative -s suffix. They typically use the same status suffixes as transitive roots.\nA third class of verbs with a distinct morphological pattern is that of Positional verbs. These verbs take status suffixes -tal, -lah, -la’an, and -lak in the imperfective, perfective, present perfect, and subjunctive, respectively.\nNote that the discussion here is limited only to regular intransitive roots, regular transitive roots, and positionals. There are other verb root classes that follow slightly different inflectional patterns, but a complete description of them is beyond the scope of this paper.",
|
| 12 |
+
"4.3 Nouns and adjectives": "Nouns and adjectives have notably less morphologically-complex than Maya verbs. They inflect for number, with the suffixes -o’ob and -tak (the latter for expressing a plurality of types vs. simply plural in number). Both Nouns and Adjectives can also behave as intransitive predicates, taking a Set B pronoun as the subject (Example 5. Commonly, Nouns that are core arguments of the verb can be topicalized by placing them at the front of the sentence with the topic suffix -e. Deixis can also be expressed using nominal morphology. Gender, while not a required feature of Nouns, can be indicated with the prefixes x- and h- (x- is also used as an instrumental nominalizer on verbs). Verbs can be derived from either nouns or adjectives using -tal / -chahal for intransitives (e.g. ma’alob “good” →ma’alobtal “to improve”) and -kuns / -kins for transitives (e.g. wíinik “man” →wíinikkunsik “make someone into a man/human”).\n(5) kòolnáal-o’on farmer-S1PL ‘We are farmers’.",
|
| 13 |
+
"4.4 Phrase-level morphology": "There are a number of cases of words in Maya which require a corresponding terminal suffix at\nsome point later in the phrase. These include the negation marker ma’a, which typically requires that the end of the negated word or phrase have a -i suffix, certain aspectual auxiliaries like laili’ which has a corresponding -e at the end of the verb phrase, and numerous other cases. Deictic suffixes -a “this”, -o “that”, and -e “this right here” also correspond to a prenominal article le (See Example 6).\n(6) ti’ ADP le ART yáax first k’ìino’ day-DEM3\n‘At the beginning of that day’.",
|
| 14 |
+
"5 Data": "For development, we use a small corpus consisting primarily of pedagogical texts used in the classroom by OSEA. They include lists of sentences and a number of tsikbalo’ob (dialogues). We also include four short stories from Bolles and Bolles (2001), for which we changed the orthography to reflect the writing norms of OSEA-CITE (with permission from the author). Sentence and token counts are listed in table 4.",
|
| 15 |
+
"6 Implementation": "The morphological analyzer is developed within the Apertium project (Forcada et al., 2011; Khanna et al., 2021), and is made up of three principle components: a model of Maya morphotactics, a model of phonological processes, and an analysis disambiguation step. A sample of the type of analysis that is produced can be seen in Table 6.\nOnemajor advantage of using the Apertium platform is that a single morphological model can trivially be extended to additional applications, such as spell-checking and machine translation. Here, we describe the development of the morphological analyzer, and briefly discuss a spell-checking application prototype.",
|
| 16 |
+
"6.1 Morphotactics": "Morphotactics are defined using lexc. For verbs, we separate intransitive roots, transitive roots, and positionals. We encode lexical information about the root, e.g. whether an intransitive root takes the -Vl ending in the imperfective, in the lexicon entry. When a word undergoes derivation, we maintain the original lemma. For example, the CVC transitive root xok has in its lexicon entry the two additional voice derivations:\n! Study, read xok<v><tv>:xok TransActive; xok<v><iv><aps>:xòok TransAps; xok<v><iv><pss>:xo'ok TransPss; xok<v><iv><mv >:xóok TransMed;\nEach continuation lexicon reflects the specific set of status suffixes for the given root, aspect, and mood.\nThe lexicon entries for intransitive verbs also include lexical information, e.g. whether a given verb’s transitive derivation takes the transitivizer - t, the causative -s, or nothing.\nNoun stems are optionally preceded by the gender/agentive prefixes h- or x-, and are followed by either the nominal inflections (e.g. diminutive, plural, possessive suffixes) or by denominalizing verbal morphology (e.g. the -tal / -chahal status suffixes).\nSince the aforementioned terminal clitics can be appended to most words, each word optionally ends with them.",
|
| 17 |
+
"6.2 Phonology": "Phonological processes are modeled with twol rules (Karttunen et al., 1987). As an example, take vowel harmony, a common process in Maya. In cases where a morpheme’s vowel harmonizes with that of the previous morpheme (e.g. the -Vl suffix for many intransitive roots), we represent these vowels as archiphonemes, and define the harmony process in twol as follows:\n\"Vowel harmony\" V:Vx <=> Vx [Cns | >:0 | ']+ >:0 _ ; where Vx in UnaccVow ;\nThis component is also where we handle common contractions. For example, the intransitive verb tàal “come”, when transitivized with the causative -s, usually drops the last consonant in the root (tàal-s-ik →tàasik). There are a number of verbs for which this is the case, irrespective of which transitivizer they take. For these verbs, we represent the root with an archiphoneme (e.g. {l} as the last consonant of the root, which is surfaced as either ‘l’ or ‘Ø’).",
|
| 18 |
+
"6.3 Analysis disambiguation": "Given the complexity of Maya morphology, our model of morphotactics often produces a number of potential analyses for the same form. As a simple example, take the second-person Set A pronoun a. This is used for both singular and plural subjects/possessors, and the number of the sub-\nject is determined by the presence or absence of the second-person plural suffix on the adjacent verb/noun. Similarly, the phrasal terminal suffixes -i and -e on a verb could signify negation, agreement with one of a subset of aspectual auxiliaries, or a locative analysis.\nWe use Constraint Grammar (Karlsson et al., 2011) to disambiguate analyses using the analyses and lemmas of words in the surrounding context. For example, to disambiguate the a Set A pronoun, we use the following rules:\nREMOVE PRO + 2Sg IF (1 VN + SPl2); REMOVE PRO + 2Pl IF (1 VN - SPl2);\nAny time the Set A pronoun a is seen, it will include both plural and singular analyses. The first rule above removes the singular analysis if the following (right-adjacent) word is a verb with a second-person plural subject analysis. The second rule removes the plural analysis if the rightadjacent word is a verb without a second-person plural analysis.\nThe example above is one of a large number of Constraint Grammar rules needed to effectively narrow-down themorphological analyses using the surrounding words as context.",
|
| 19 |
+
"6.4 Spell checking": "While the ability to automatically provide a morphological analysis is both interesting and valuable in itself, our system, thanks to the infrastructure set up by the Apertium project, is also easily extensible to a number of other applications. Here, we briefly discuss how we integrated the morphological analyzer to make a spell-checker and spellingcorrector for a word processor.\nThe use of finite-state models for efficient spellchecking of morphologically-rich languages has a long history (Beesley andKarttunen, 2003; Pirinen et al., 2014). As a prototype spell-checker and corrector, we use an FSTwhich transduces incorrectlyspelled words within a fixed edit-distance to the words in our model. This FST can then be integrated with a spelling and grammar extension developed by the Voikko7 project to be used with LibreOffice Writer8, a free and open source, multiplatformword processor that is part of the LibreOf-\n7https://voikko.puimula.org/ 8https://www.libreoffice.org/discover/\nwriter/\nfice suite of software9. Figure 3 shows a screenshot of the spell-checker in action. Its current status is a working prototype, but we plan to improve it by adding common misspellings to the model and weighting it using proofread written text.",
|
| 20 |
+
"6.5 Supporting variation in written Maya: normative and descriptive models": "An important intended feature of our model is the ability to simultaneously support a normative model for pedagogical purposes, and a descriptive model for other natural language processing tasks. Specifically, the spell-checker, insofar as it is used by a language teacher to write pedagogical material or to encourage uniformity in writing practices among students, should adhere to the principles taught and followed by the educators. The morphological analyzer on the other hand, which can be used to help understand, analyze, or segment a Maya text from a number of potential sources/authors, should be flexible to common written variation in the language.\nThe Apertium platform allows for precisely this flexibility via “Direction” flags in our morphotactics file, and a spellrelax file. The “Direction” flags are simply commented annotations on a specific line in the lexc file that specify which direction that line should be included in at compile time. As an example, take the case of the nominal classifier. It is commonplace to see the number, such as hun “one”, and the following nominal classifier, e.g. p’éel for inanimate nouns, written as a single orthographic word (in this case with nasal place assimilation): hump’éel. The OSEA program teaches its students to write these as two separate words: hun p’éel. Thus, we would like for our spell-checker to identify hump’éel as “incorrectly” spelled, while still recognizing this form in the analyzer so as to cover common variation in contemporary Maya writing. We can achieve this by including the annotation Dir/LR on the entry for this variant. This is a very minor example, but is one of many, and is illustrative of the type of flexibility we want to maintain in our system.\nThe spellrelax file allows for orthographic variation in the input of the morphological analyzer, and the ability to map it to the canonical written forms used in our lexicon. We use this file to support the large amount of orthographic variation that is characteristic of Maya writing.\n9https://www.libreoffice.org/\nThe following three lines illustrate how we handle (1) the common use of [j] where the OSEA orthography uses [h], (2) the omission of tone marking on long low vowels also characteristic of the contemporary INALI orthography but dispreferred for pedagogical purposes by OSEA, and (3) the use of [dz] for [ts’] in texts using the colonial style:",
|
| 21 |
+
"7 Coverage": "On our modest-sized corpus, the morphological analyzer’s coverage is about 96% on tokens and 85% on types. Of the forms currently not covered by the analyzer, many are interjections that may be author-specific (e.g. “kikiriki”, the sound of a rooster crowing), and foreign loans (e.g. “cinco”, “greedy”). Currently, all of the missed words by our analyzer are hapax legomena.",
|
| 22 |
+
"8 Concluding remarks and future work": "We have described in detail a finite-state morphological analyzer for Maya, and demonstrated its utility outside of merely performing morphologi-\ncal analysis by using the model to build a spellchecker.\nFor the near future, our first priority is growing the corpus. We are in the process of normalizing the orthography for a number of additional texts which we will then add and use to update the analyzer lexicon. Outside of simply improving the vocabulary and coverage of the analyzer, we plan to explore the numerous ways this tool can be of use to students by incorporating it into a browser-extension that aids the user’s understanding of Maya texts read in the browser.\nWe also hope to improve the spell-checker by adding a better-informed error model that takes into consideration common spelling mistakes. Adding support for the spell-checker in other popular word processors is a longer-term goal, as this would greatly improve accessibility of the tool for teachers and students.",
|
| 23 |
+
"9 Acknowledgements": "We are grateful to David Bolles for his permission to use, modify, and release his texts. We also extend a sincere thank you to Meesum Alam, Laura Merino Hernández, Héctor Figueroa, Matthew Fort, and Alex O’neil for their careful reading and thoughtful feedback of drafts of the present paper, and to the anonymous reviewers for their helpful comments."
|
| 24 |
+
}
|
ACL_23_no_limitation/ACL23_1192.json
ADDED
|
@@ -0,0 +1,21 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1192",
|
| 3 |
+
"Title": "A finite-state morphological analyser for Highland Puebla Nahuatl",
|
| 4 |
+
"abstractText": "This paper describes the development of a free/open-source finite-state morphological transducer for Highland Puebla Nahuatl, a Uto-Aztecan language spoken in the state of Puebla in Mexico.1 The finite-state toolkit used for the work is the Helsinki Finite-State Toolkit (HFST); we use the lexc formalism for modelling the morphotactics and twol formalism for modelling morphophonological alternations. An evaluation is presented which shows that the transducer has a reasonable coverage—around 90%—on freely-available corpora of the language, and high precision— over 95%—on a manually verified test set.",
|
| 5 |
+
"1 Introduction": "This paper describes a new morphological analyser for Highland Puebla Nahuatl, an endangered language spoken in the state of Puebla in Mexico (see Figure 12). The analyser is based on finitestate technology, which means that it can be used for both the analysis and the generation of forms — a finite-state morphological transducer maps between surface forms and lexical forms (lemmas and morphosyntactic tags).\nAn analyser of this sort has a wide variety of uses, including for automating the process of corpus annotation for linguistic research as well as for creating proofing tools (such as spellcheckers) and for lemmatising for electronic dictionary lookup for language learners — in a language with heavy prefixing and suffixing morphology, determining the stem is not a simple matter.\nOur approach is based on the Helsinki FiniteState Toolkit (HFST, Lindén et al. (2011)).\n1https://github.com/apertium/apertium-azz 2Figure 1 is based on work by users TUBS (https://commons.wikimedia.org/wiki/File: Puebla_in_Mexico_(location_map_scheme).svg) and Battroid (https://commons.wikimedia.org/wiki/ File:Mexico_Puebla_Puebla_location_map.svg)",
|
| 6 |
+
"2 Prior art": "Finite state transducers (FST) for modeling morphology has a long history within the field of computational linguistics (Kornai, 1996; Beesley and Karttunen, 2003).\nWork on morphological analysers for Nahuatl languages includes an effort, inspired by literate programming, to use the code for the transducer as a descriptive grammar of a Nahuatl variety spoken in the state of Guerrero (Maxwell, 2015), and morphological analysers specifically targeting colonial-era Nahuatl, either for the exploration of colonial texts (Thouvenot, 2009), or as a means to evaluate similarity between written Nahuatl varieties (Farfan, 2019). One drawback of these projects is that they are not to our knowledge freelyavailable or easily-accessible.\nNicolai et al. (2020) describe the development of morphological analysers and generators for more than one thousand languages using the Johns Hopkins University Bible Corpus (McCarthy et al., 2020), including some variants of Nahuatl (however, not Highland Puebla Nahuatl).\nPugh et al. (2021) presents the first open-source morphological analyser for the Western Sierra Puebla Nahuatl variant group. Tona et al. (2023) expand on that system, extending it to support Huasteca Nahuatl. This latter work, however, has not been released.",
|
| 7 |
+
"3 Highland Puebla Nahuatl": "Nahuatl (or Nahuat, Nahual) is a polysynthetic, agglutinating Uto-Aztecan language continuum spoken throughout Mexico and Mesoamerica. The Mexican Government’s Instituto Nacional de Lenguas Indígenas (INALI) recognizes 30 distinct variants (INALI, 2009).\nHighland Puebla Nahuatl, (or Sierra Puebla Nahuatl, also referred to by INALI as Náhuatl del noreste central, ISO-639-3 azz) is a Nahuatl vari-\n103\nant group spoken in the Northeastern Sierra region of the state of Puebla, Mexico, mainly in the municipalities of Tetela de Ocampo, Zacapoaxtla, and Cuetzalan. According to Ethnologue’s 2007 estimate, it is spoken by an estimated 70,000 speakers.\nThis particular Nahuatl variant has been the subject of a number of descriptive works (Key, 1960; Robinson, 1970; Key and Key, 1953) and dictionaries (Key and Richie de Key, 1953; Cortez Ocotlán, 2017).",
|
| 8 |
+
"4 Data": "The source data used to develop the FST comes from three sources: (1) A dataset of transcribed recordings of interviews and conversations, mainly about plants (Amith et al.), (2) a subset of texts in the azz variant from the multi-variant parallel corpus Axolotl (Gutierrez-Vasques et al., 2016), and (3) technical publications by the Sociedad Mexicana de Física3, which consist of translations of various scientific texts. The breakdown of volume for each of these sources is presented in Table 1.",
|
| 9 |
+
"5 Orthography": "Writing practices in Nahuatl vary and are characterized by multiple competing views (de la Cruz Cruz, 2014). The most well-known and widelydisseminated orthographic standards for Nahuatl are ACK, a colonial-inspired orthography named after scholars Anderson, Campbell, and Karttunen, who popularized it in their work, the standard from the Instituto Nacional de Lenguas Indígenas (INALI) (INALI, 2018), and that used by the Secretaría de Educación Pública (SEP). In practice, Nahuatl writing contains a great deal of ortho-\n3https://site.inali.gob.mx/SMF/Libros2.0/ nhtl/index.html\ngraphic variation, often even within the writing of a single author.\nThe orthography used for building the analyser follows what was taught in the Nahuatl course for adult learners given in the municipality of Tetela de Ocampo, Puebla in the summer of 2022 (TO). This broadly follows the SEP, but with the addition of the letter h which is used before u for /w/ after vowels or at the beginning of words. For example SEP ueueyi, TO huehueyi ‘big’, SEP mochiua, TO mochihua “it is made”.\nWe maintain a separate finite-state transducer to account for orthographic and spelling variation. This includes rules for orthographic changes like ts (SEP, INALI) → tz (ACK) (e.g. tejuatsin ‘youHON’ → tehhuatzin), spelling changes, such as w$ → j$ and abbreviations that are found in the transcriptions from the spoken corpora, such as ^t’ → ^tik.",
|
| 10 |
+
"6 Methodology": "In this section, we outline some of the implementation details of the analyzer, including a description of relevant linguistic features.",
|
| 11 |
+
"6.1 Lexicon": "The lexicon consists of around 5,000 lexemes which were added in frequency order (calculated using the corpora described in §4) and with reference to the two available dictionaries (Key and Richie de Key, 1953; Cortez Ocotlán, 2017) for part-of-speech classification. The lexicon was created in the lexc formalism, which is standard in HFST.\nClosed categories (pronouns, conjunctions, etc.) were added manually based on class notes and on existing grammatical descriptions (Key, 1960; Robinson, 1970; Cortez Ocotlán, 2017).",
|
| 12 |
+
"6.2 Tagset": "The tagset is based on the tagset of the Apertium project (Forcada et al., 2011), each tag is encased in greater than ‘<’ and less than ‘>’ symbols. The tag names are mnemonic, some of them coming from other analysers in the Apertium project and being based on English, Spanish, or Catalan terms, and some are based on Nahuatl terms. We include a conversion from this Apertium-based tagset to one based on Universal Dependencies (Nivre et al., 2020).",
|
| 13 |
+
"6.3 Morphotactics": "The morphotactics of Highland Sierra Nahuatl is very similar to that of other Nahuatl varieties. It is characterised by a concatenative affixing morphology with a large number of inflectional and derivational morphemes. It also features long-distance dependencies between prefixes and suffixes.",
|
| 14 |
+
"6.3.1 Nouns": "Nouns inflect for number and possession. They also have very productive derived forms, such as the reverential -tsin (1) and less productive derivations, such as -k(o) for locative, and can appear as predicates with the addition of subject prefixes. We implement the morphotactics for inflection and for the most frequent subset of the derived forms. Nouns are therefore split into separate continuation classes for their different combinatorial possibilities.\n(1) kikouaj ki-koua-j O.SG3-buy-S.PL in in the\ntokniuantsitsin to-kni-uan-tsi~tsin POSS.PL1-person-PL-PL.HON “People buy it.” (lit. “Our brethren buy it”)\nIn (1), the noun (i)kni ‘sibling’ appears with the first person plural possessive prefix to-, the\npossessed plural marker -uan, and the reverential marker tsi~tsin, where plurality is further marked with partial reduplication of the -tsin morpheme.\nRelational nouns: There is also a subcategory of nouns, called “relational nouns,” used for expressing spatial and temporal relations, as well as other non-core semantic roles. Unlike common nouns, these nouns have obligatory possession.\n(2) In In mochiua mo-chiua O.REFL-make kuoujtaj, kuoujtaj, mountains, in in eua eua born\ntalixko, tal-ix-ko, ground-RELN-LOC, amo amo NEG itech i-tech POSS.SG3-on kuapalak. kuapalak. tree.trunks “It grows in the mountains, it comes up from the ground, it doesn’t grow in tree trunks.”\nIn (2) we see two methods in which relational nouns can be used. The first is talixkowhere the the relational noun -ixko ‘in front of / on the surface of’ is compounded with the noun tali ‘ground/earth’. This relational noun itself is composed of ix ‘face’ and ko a locative morpheme.\nThe second method is using a free-standing relational noun with a complement, itech kuapalak ‘in rotten tree trunks’, is composed of a possessive form of the relational noun -tech ‘on’ and the noun compliment kuapalak ‘tree trunk’.\nThese relational nouns can also appear separated from their complement, as in (3, where the complement of iuan ‘with’ is emol ‘beans’, but it appears to the right of the verbal complex se kikua “it is eaten”.\n(3) uan uan and iuan i-uan POSS.SG3-with se se one kikua ki-kua O.SG3-eat emol emol beans\n“... and it is eaten with beans”\nThey can also receive reverential morphology as in one of the typical ways of expressing goodbye, mohuantsin ‘with you’ (4).\n(4) mohuantsin mo-huan-tsin POSS.2SG-with-HON “with you”\nLocatives: In addition to compounding with relational nouns there is also a locative derivational suffix -k(o) which forms locative nouns from places. For example ima ‘her hand’, imako ‘in her hands’.4",
|
| 15 |
+
"6.3.2 Verbs": "Verbs inflect for number and person of subject and object(s), and for tense, aspect and mood. They also can be compounded with auxiliary verbs and can have incorporated adverbial items for both direction of movement and for manner of action. Additionally there is reverential agreement for the second person.\n(5) Xe Xe QST ma ma OPT nimitsonchiya ni-mits-on-chiya S.SG1-O.SG2-HON-wait huan huan and\ntisentakuaskej? ti-sen-ta-kua-s-kej S.PL1-TOGETHER-O.NN3-eat-FUT-S.PL “Shall I wait for you and we’ll eat together?”\nIn (5) we see examples of incorporated adverbials, tisentakuaskej “wewill eat together”, affixal agreement, ti-[...]-kej for the first person plural subject and ta- for the indefinite object and the future tense suffix -s. The verb nimitsonchiya has the on- prefix, indicating reverentiality towards the addressee.\n(6) se se one mokouilia mo-kou-ilia O.REF-buy-APP komo komo if se se one\nkikuasneki. ki-kua-s-neki O.SG3-eat-FUT-want “One goes and buys it if one wants to eat it.”\n4Although the name is the same, these locatives are unlike those found in other languages as inflection because: (1) not every word can take a locative suffix, (2) they are not selected for by argument structure, (3) the resulting meaning can be idiosyncratic. For this reason we categorise them as derivation as opposed to inflection.\n(7) se se one kiualkui ki-ual-kui O.3SG-VEN-bring\n“It is brought.” (lit. One brings it (here))",
|
| 16 |
+
"6.4 Morphophonology": "Phonological processes are implemented via twol rules. There are relatively few of these, and they include degemination (/kk/ →[k]) and nasal assimilation (/n/ →[m] // m).",
|
| 17 |
+
"7 Results": "To evaluate the analyzer, we calculate the naïve coverage for both tokens and types. The naïve coverage is reported for each data source in Table 1. Naïve coverage is the percentage of surface forms in a given corpus that receive at least one morphological analysis. Forms counted by this measure may have other analyses which are not delivered by the transducer.",
|
| 18 |
+
"7.1 Evaluation": "Since we don’t have a large, annotated dataset for evaluation, we performed a manual inspection of two random samples of data to get a sense of the the system’s precision and to understand the reasons for any missed words.\nFirst, we sampled 100 random analyses from the corpora and identified any mistakes. The precision on this sample was 95%. Next, in order to find out where the most work remains to be done with respect to coverage, we randomly sampled 100 types that are currently not recognised by the system. These words were categorised by part of speech, and in addition we marked each with one or more of the following seven error categories: (1)missing morphotactics, (2) missing orthographic normalisation, (3) missing compound word, (4) reduplication, (5) loan word / code-switching, (6) tokenisation error, and (7) missing lexicon entry.\nOver half of all unknownwords were verb forms. Of these, five were caused bymissing orthographic normalisation rules, for example t’titipitstoti is an abbreviated form of tiktitipitstoti ‘you will be blowing the fire’, and 10 were due to missing stems in the lexicon.\nAround ten percent of the sampled unknown words were caused by errors in tokenisation. The speech corpus contains false starts, for example amo nike..., amo nikmati “I don’t kn..., I don’t know”, and these do not currently receive any analysis.",
|
| 19 |
+
"8 Concluding remarks": "We have described a robust finite-state morphological analyser for Highland Puebla Nahuatl. This work contributes to the recent increased focus in language technologies for Nahuatl, and may play an important role in supporting further Nahuatl language technology in the future.\nIn future work we would like to expand the lexicon to include more stems, to increase the coverage of all of the corpora, and to obtain new corpora for testing. We intend to include support for compounding and incorporation and for weighting the transducer. We already have 10,000 tokens manually disambiguated and will use these to weight more probable analyses.",
|
| 20 |
+
"Acknowledgements": "We would like to thank Patricia Aguilar Romero, don Pedro Rivera, and Mitsuya Sasaki for their help with the work described in this manuscript. In addition we would like to thank the anonymous reviewers for their helpful comments."
|
| 21 |
+
}
|
ACL_23_no_limitation/ACL23_1196.json
ADDED
|
@@ -0,0 +1,20 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1196",
|
| 3 |
+
"Title": "Enhancing Spanish-Quechua Machine Translation with Pre-Trained Models and Diverse Data Sources: LCT-EHU at AmericasNLP Shared Task",
|
| 4 |
+
"abstractText": "We present the LCT-EHU submission to the AmericasNLP 2023 low-resource machine translation shared task. We focus on the Spanish-Quechua language pair and explore the usage of different approaches: (1) Obtain new parallel corpora from the literature and legal domains, (2) Compare a high-resource SpanishEnglish pre-trained MT model with a SpanishFinnish pre-trained model (with Finnish being chosen as a target language due to its morphological similarity to Quechua), and (3) Explore additional techniques such as copied corpus and back-translation. Overall, we show that the Spanish-Finnish pre-trained model outperforms other setups, while low-quality synthetic data reduces the performance.",
|
| 5 |
+
"1 Introduction": "The LCT-EHU team participated in the AmericasNLP 2023 low-resource machine translation shared task. The task involved machine translation from Spanish to 11 different indigenous languages. The languages in question are very much low-resource, with the number of speakers spanning from a few tens of thousands to a few million and with limited availability of parallel data. Monolingual data is not easily obtained either - Wikipedia is available only in a few of these languages, with the number of articles not being very high. Our team focused on Spanish-Quechua language pair with the approach consisting in:\n• Finding and aligning new parallel data. We obtained bilingual legal documents of the Government of Ecuador (the constitution and some laws); the novel \"The Little Prince\", and the UN Declaration of Human Rights.\n• Using pre-trained machine translation models trained on other language pairs. We experimented with Spanish-English, as a highresource language pair, and Spanish-Finnish,\nwith the linguistic intuition that using an agglutinative language on the target side would provide a closer set-up to the problem we were working on, as previously explored by Ortega and Pillaipakkamnatt (2018) and Ortega et al. (2020).\n• Synthetic and monolingual data. We experimented with a copied corpus approach and synthetic parallel corpus creation from monolingual Spanish data.\nThe official metric used in the shared task is chrF++ (Popović, 2017). In the previous edition of the AmericasNLP shared task, the chrF score of 34.6 was obtained by the REPUcs team (Moreno, 2021) for the Spanish-Quechua language pair. However, this year’s shared task takes the second-best result of 34.3 as a baseline.\nAll of the source code and newly collected data are available in the Github repository 1.",
|
| 6 |
+
"2 Related Work": "Some previous work and approaches that were important for our experiments are explained in the following sub-sections.",
|
| 7 |
+
"2.1 AmericasNLP 2021 Shared Task": "In the first edition of the AmericasNLP lowresource MT shared task, various contributions to the field of machine translation of American indigenous languages were published. The organizers provided training data collected from various sources, alongside manually translated development and test data. Two tracks were available: (1) development set used for training, and (2) development set not used for training.\nHelsinki team (Vázquez et al., 2021) won the task in the majority of language pairs in both tracks,\n1https://github.com/nouman-10/ MT-SharedTask\n156\nusing a two-phase transformer training. They also obtained additional parallel and monolingual data for Spanish-Quechua. Their Model A was a multilingual model with 11 languages, trained for 200 000 steps, which was then trained independently for each of the target indigenous languages for additional 2 500 steps. Model B was a multilingual model with Spanish as the only source language, and with 11 target languages (10 indigenous languages + English). The two-phase training was performed again. In the first phase, they trained the model with 90% of Spanish-English data, while the remaining 10% was divided between 10 indigenous languages, each taking 1% . In the second phase, the proportion of Spanish-English data is reduced to 50%, while including backtranslated data as well. Different versions of both Model A and Model B were trained, depending on whether the development data was used during training or not.",
|
| 8 |
+
"2.2 Synthetic translations and copied corpus": "The use of synthetic translation approaches is born out of a common concern in machine translation: the lack of high-quality parallel data for many language pairs. To solve this, various solutions have been proposed. One of the most common ones is known as back-translation (Sennrich et al., 2016), which involves creating a synthetic parallel corpus by translating monolingual data from the target language into the source language (or source to target, in other approaches) and using this to augment the existing parallel data for training models. Another approach (Currey et al., 2017) involves using monolingual data from the target and aligning it with itself, to mimic parallel data (this is known as copied corpus). The authors try to explain the success of this approach by stating that there might be an improved accuracy on named entities and words that are identical in both source and target texts.",
|
| 9 |
+
"3 Data": "In this section, we will describe the data used in the experiments.",
|
| 10 |
+
"3.1 Original parallel data": "The following corpora were provided by the organizers of the competition (Agić and Vulić (2019) and Tiedemann (2012):\n• JW300 (quz & quy) A collection of Jehovah’s Witnesses Texts, both in Cuzco and Ay-\nacucho Quechua.\n• MINEDU (quy): Sentences extracted from the official dictionary of the Ministry of Education (MINEDU) in Peru for Quechua Ayacucho.\n• Dict_misc (quy): Dictionary entries and samples collected by Diego Huarcaya.\nThe counts of sentences and domain information are presented in Table 1. The column Count refers to the number of sentences in this table and all subsequent ones.",
|
| 11 |
+
"3.2 Additional resources": "We also used resources that were introduced by some of the teams that participated in the 2021 competition. Details of the data introduced by the Helsinki-NLP team (Vázquez et al., 2021) are presented in Table 2.\nIn Table 3 the details of the corpora used by the REPUcs-AmericasNLP2021 (Moreno, 2021) team are shown.\nIn addition to the data collected in the previous AmericasNLP task, we found some parallel data that was used to build A Basic Language Technology Toolkit for Quechua (Rios, 2016) 2. The parallel data was used to create a multilingual treebank in the three languages of the machine translation systems, Spanish-German and Spanish-Cuzco Quechua. The majority of the corpus was SpanishGerman, with the Quechua counterpart being translated by several native speakers in Peru. There were multiple aligned documents available here but most of them needed further cleaning and alignment. The three documents that were selected are:\n• Strategy paper of the Swiss Agency for Development and Cooperation on the cooperation with Peru 3\n• 2009 Annual report of the Deutsche Welle Academy about Development and the Media 4\n• 2008 Annual report of a private foundation dedicated to education 5\nThe sentence count of the documents is also shown in Table 4.",
|
| 12 |
+
"3.3 New resources": "Apart from using the already existing resources, we have gathered, processed, and aligned publicly available documents found around the web. The summary of these resources is shown in Table 5. It is important to emphasize that, theoretically, Quechua should be regarded as a linguistic family rather than a single language, given that its various varieties exhibit limited mutual intelligibility when they are geographically distant. Within the specialized literature, the term \"Quechua\" is employed to refer to the varieties spoken in Bolivia and Peru, while the term \"Quichua\" is preferred\n2https://github.com/a-rios/squoia 3https://www.cooperacionsuiza.pe/\ncosude/ 4http://www.dw.de/ 5http://www.fundeducation.org/\nfor those spoken in Ecuador and Argentina, as indicated by Avellana (Avellana). For the sake of simplicity, when uncertainty arises regarding the specific Quechua variety being discussed, we adopt the que code as a macrolanguage identifier.\nThe documents were found in pdf format and were transformed into plain text using the pdftotext 6 tool, trying to keep the layout of the original pdf as intact as possible. Since most of the documents contained word wrapping to keep the fixed width of the document, we performed the unwrapping in such cases by joining the words at the ends of the lines which ended with the - sign. In this step, we made an effort to preserve the original document structure whenever feasible. For instance, with \"The Little Prince,\" we maintained the chapter arrangement of the novel. Similarly, when dealing with the Ecuadorian constitution and laws 7, we retained the individual article divisions.\nIn the subsequent stage, we performed sentence segmentation at the chapter level while preserving the chapter boundaries. Our team experimented with several sentence segmenters such as NLTK, spaCy, and stanza. Following careful consideration, we ultimately chose stanza based on a higher alignment score, as explained in the next paragraph. For stanza, we opted for the Spanish sentence segmentation model for both Spanish and Quechua texts.\nThe HunAlign (Varga et al., 2007) tool was utilized to align the sentences. Additionally, we used a dictionary provided by AmericasNLP organizers as an input to the tool to improve the alignments. Overall, the legal document alignments were quite accurate, whereas the alignments of \"The Little Prince\" were slightly less precise. This could be attributed to the greater freedom often allowed in translations of literary works compared to the strict and rigid translations necessary in legal contexts. Even though HunAlign gives a confidence score for each alignment, we did not perform any filtering of the aligned sentences and decided to use all obtained alignments.",
|
| 13 |
+
"3.4 Synthetic translations": "We collected three history books in Spanish. Specifically, old Chronicles of the Indies about the Incan empire and the subsequent colonial period. We hypothesized that because these books have plenty\n6https://www.xpdfreader.com/ 7https://www.asambleanacional.gob.ec/\nes/contenido/publicaciones\nof words in Quechua language, they would be from a suitable domain. The three books were turned into plain text files and their sentences were segmented in the way described in the previous section. After that, the texts were translated into Quechua with the Spanish-Finnish model we fine-tuned on the original datasets and the additional resources introduced by participating teams in the 2021 competition (train + extra). Table 6 shows the final sentence counts of these books after being processed.",
|
| 14 |
+
"3.5 Monolingual (Copied Corpus)": "Following the approach in (Currey et al., 2017), we decided to add some monolingual Quechua data and copy it as is to create a parallel corpus. We used publicly available datasets on Huggingface and segmented the sentences based on line breaks, without any post-processing. The datasets included data cc100 (Conneau et al. (2020) and Wenzek et al. (2020) which was an attempt to recreate the dataset used for training XLM-R, and data from (Zevallos et al., 2022), which is a monolingual corpus of Southern Quechua and includes the Wiki and OSCAR corpora. Table 7 shows the sentence counts of these datasets",
|
| 15 |
+
"4 Models & Results": "We experimented with 2 major model setups and 5 different kinds of dataset combinations. The two setups were based on fine-tuned machine translation models of Spanish-English and Spanish-Finnish (Tiedemann and Thottingal, 2020). On the one hand, the reason behind using a fine-tuned SpanishEnglish model was that both of them are highresource languages, and thus the model has been trained on large amounts of data. This probably means that the model has learned a good Spanish encoder, and thus could be useful for further finetuning. On the other hand, the reasoning behind choosing a Spanish-Finnish model and fine-tuning on Spanish-Quechua was the similarity between Finnish and Quechua (specifically the agglutinative morphology of both languages), and Finnish having comparably more data than Quechua. All models were trained for 20 epochs, with evaluation being done after every 1000 steps. The best model was selected based on the chrF score on the development set. Here, we will define the different combinations of datasets used for our experiments:\n• train : The original parallel-data provided in the AmericasNLP-2023 Shared Task (as mentioned in Table 1).\n• train + extra: This includes the combination of original parallel data and extra Ayacucho Quechua (Quy) data gathered from different sources.\n• train + extra + aligned: This includes the data above plus our newly gathered parallel data (as mentioned in Table 5).\n• train + extra + aligned + copied: In addition to the above data, it also includes the monolingual copied corpus, (Table 7).\n• train + extra + aligned + quz: It includes all the data above excluding copied corpus, but also includes the additional data gathered from different sources pertaining to Cuzco Quechua (Quz). The reason for removing copied corpus was that it resulted in a decrease of the chrF score in all the experiments.\n• all: It includes all the data above excluding the copied corpus, but includes the synthetic translations, as mentioned in Section 3.4.",
|
| 16 |
+
"4.1 Fine-tuned Spanish to English": "Following (Vázquez et al., 2021), where including a majority of Spanish-English parallel data while building an MT system for low-resource languages improved the performance across all the languages, we decided to use an already finetuned Spanish-English MT model and fine-tune it again on our Spanish-Quechua parallel corpus. Concretely, we used the opus-mt-es-en model available at Huggingface 8. As expected, we can see that these models perform quite close to the baseline system. Including more data seems to help as well, with the exception of copied corpus. The reason for this, we suspect, is due to the quantity of the data being higher than our total Spanish-Quechua parallel corpora (no analysis was done on the quality of the data). The best model in this case was fine-tuned on train + extra + aligned achieving a chrF score of 36.96 and\n8https://huggingface.co/Helsinki-NLP/ opus-mt-es-en\n37.71 on the development and test set respectively with the train + extra + quz performing quite similarly as well.",
|
| 17 |
+
"4.2 Fine-tuned Spanish to Finnish": "Lastly, we tried using a fine-tuned version of the Spanish-Finnish MT model. The model we used was opus-mt-es-fi, available at Huggingface 9. The reason for choosing this specific model was firstly because of the similarity between Finnish and Quechua, i.e, both being agglutinative languages, and secondly, Finnish being a relatively high-resource language as compared to Quechua. This proved to be the best model among our experiments, which we believe is due to the reasons mentioned above. We can see in Table 8 that adding aligned data from Ayacucho Quechua seems to help more than adding Cuzco Quechua parallel sources. The best model among the experiments was trained on train + extra + aligned and achieved a chrF score of 37.34 and 38.59 on the development and test set respectively.\nOne final experiment was conducted on all of the collected data meaning train + extra + aligned + quz + bcktr. The model was able to achieve a chrF score of 36.40 and 37.26 on the Spanish-Quechua development and test set respectively. All the models are available on Huggingface 10\n9https://huggingface.co/Helsinki-NLP/ opus-mt-es-fi\n10https://huggingface.co/ americasnlp-lct-ehu",
|
| 18 |
+
"5 Conclusion": "To summarize our findings, in our submission to the AmericasNLP 2023 low-resource machine translation shared task for the Spanish-Quechua language pair, we have explored fine-tuning existing models in different language pairs, combining them with different data setups. We have collected and aligned new parallel data, created synthetic translations, and made use of copied corpus approach. The highest-performing model on the development data achieved 37.70 chrF. This model was obtained by fine-tuning OPUS MT’s Spanish-Finnish model on the original training data, augmented with additional data presented by previous year’s teams, both for Ayacucho and Cuzco Quechua. In the test set, however, the highest performing model was different, obtaining a chrF score of 38.59. This model was the same as the previous one, but the data consisted of the original training data, data from previous year’s submissions (excluding Cuzco Quechua) and the novel alignments introduced in this work.",
|
| 19 |
+
"6 Acknowledgements": "The authors would like to thank Erasmus Mundus European Masters Program in Language and Communication Technologies for its support. We would also like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Hábrók high performance computing cluster."
|
| 20 |
+
}
|
ACL_23_no_limitation/ACL23_1198.json
ADDED
|
@@ -0,0 +1,11 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1198",
|
| 3 |
+
"Title": "Few-shot Spanish-Aymara Machine Translation Using English-Aymara Lexicon",
|
| 4 |
+
"abstractText": "This paper presents the experiments to train a Spanish-Aymara machine translation model for the AmericasNLP 2023 Machine Translation shared task. We included the English-Aymara GlobalVoices corpus and an English-Aymara lexicon to train the model and limit our training resources to train the model in a few-shot manner.",
|
| 5 |
+
"1 Introduction": "Aymara is a language spoken in Bolivia, Peru and Chile. It is one of the larger languages in the Americas, and has more than 2 million speakers1, yet it has received worryingly little attention from NLP researchers. The development of language technologies encourage potential work in the documentation, promotion, preservation and revitalization of the languages (Galla, 2016; Mager et al., 2018). Recent initiatives to promote research on languages of the Americas brings NLP researchers closer to the Americas languages communities and activists (Fernández et al., 2013; Coler and Homola, 2014; Hois and Ruiz, 2018; Kann et al., 2018; Zhang et al., 2020; Ortega et al., 2020). Particularly, machine translation is a useful tool that encourages more research in the languages as it bridges the communication gaps in NLP researchers’ understanding of the models’ capabilities and limitations.\nThe AmericasNLP 2021 workshop hosted the Open Machine Translation (OMT) shared task focusing on indigenous and endangered Americas languages (Mager et al., 2021). The organizers provided a seed collection of publicly available corpora and highlighted the various nuances and variability of the translations due to the geographical and linguistic diversity between the language varieties. The Spanish data for development and test sets created in the AmericasNLP 2021 shared task\n1Statistics retrieved from Catalogue of Endangered Languages (2023)\nare translated into the Aymara La Paz jilata variant, which is the same variant used in the Global Voices corpus (Tiedemann, 2012; Prokopidis et al., 2016). While Aymara is mutually intelligible across different dialects, they might differ in specific terminologies and minor grammatical preferences.\nThis paper presents our submission to the AmericasNLP 2023 machine translation shared task (Ebrahimi et al., 2023). We submitted our system that focuses only on translating from Spanish into Aymara. We fine-tuned a multilingual T5 model (Xue et al., 2021) by adding an AymaraEnglish lexicon2 to the existing Spanish-Aymara and English-Aymara Global Voices corpus and the Spanish-Aymara shared task training data (Conneau et al., 2018; Ebrahimi et al., 2022).\nOther than presenting the results of our AmericasNLP shared task submission, parts of this paper will also serve as a demonstration of how the model was modified from typical model training using HuggingFace suite of libraries (Wolf et al., 2020; Lhoest et al., 2021; McMillan-Major et al., 2021), this is especially useful for low-resource sequence-to-sequence tasks.",
|
| 6 |
+
"2 Pre-trained Tokenizer and New Languages": "While the current state of vogue in using massively multilingual pre-trained models on low-resource languages allows researchers to extend the models’ sub-word tokenizers, the models implicitly re-use the tokens from how it was previously pre-trained and simply ignore the new tokens by labelling them as [UNKNOWN]. In cases where the character set of the low-resource languages’ orthography matches the languages that the models were pre-trained on,\n2The lexicon is created from the notes of a student learning Aymara as a foreign-language, it is hosted on HuggingFace dataset hub. The original sources of the lexicon attributes to Parker (2008) Webster Aymara-English thesaurus and Peace Corps (1967) Beginning Aymara book.\n168\nit is possible that the models repurpose the subwords to learn new parameter behaviors given sufficient computes and hyperparameter tuning experiments.\nfrom transformers import AutoTokenizer from datasets import load_dataset\nlexicon_dataset = load_dataset( \"alvations/aymara-english\", on_bad_lines='skip')\ntokenizer = AutoTokenizer.from_pretrained('google/mt5-base')\n# Train a new tokenizer using the new dataset # and the old tokenizer object. new_tokenizer = tokenizer.train_new_from_iterator(\nlexicon_dataset, vocab_size=50_000) new_tokens = set(new_tokenizer.vocab).difference(tokenizer.vocab)\n# Before: 250100 print('Before:', len(tokenizer)) tokenizer.add_tokens(list(new_tokens))\n# After (adding vocab): 251152 tokenizer.add_tokens(\nlexicon_dataset['train']['Aymara'] + lexicon_dataset['train']['English'])\nprint('After (adding vocab):', len(tokenizer))\nTo preserve the learned model parameters, a researcher using the multilingual model can extend its tokenizer’s sub-word vocabulary by relearning the sub-word tokenizer from scratch, then apply it to dataset with the new language and finally extending the new sub-words to the pre-trained vocabulary. To assign new parameters in the model for these new sub-words tokens, the embedding layer of the model needs to be extended. The code snippet above demonstrates the function to extend the new language’s vocabulary to existing pre-trained mT5 model.\nThe following snippet below presents the differences of the input token indices depending on how the tokenizer was extended for a new language.\nfrom transformers import AutoTokenizer\ntokenizer_old = AutoTokenizer.from_pretrained('google/mt5-base') tokenizer_new = AutoTokenizer.from_pretrained('alvations/mt5-aym-lex')\nsent = \"1899n ahuicha yuriwayi\"\ntokenized_old_ids = tokenizer_old(sent)['input_ids'] tokenized_new_ids = tokenizer_new(sent)['input_ids']\ntokens_old = [tokenizer.decode([s]) for s in tokenized_old_ids] tokens_new = [tokenizer.decode([s]) for s in tokenized_new_ids]\nprint(tokens_old) # Ouputs: ['1899', 'n', '', 'ahu', 'icha', 'yuri', 'way', 'i', '</s>']\nprint(tokens_new) # Outputs: ['1899', 'n', '', 'ahuicha', 'yuri', 'way', 'i', '</s>']\nInstead of using the subword tokenizer, users can pre-tokenize the new language data using a linguistic motivated rule-based tokenizer and add the tokens without further splitting these tokens into subwords to the models’ vocabulary. However the tokenizer does not automatically recognize/determine spelling variants, e.g. \"ahuicha\"\n(i.e. \"grandma\" in English and \"abuela\" in Spanish) can also be spelled as \"awichajax\" in Aymara.",
|
| 7 |
+
"3 Experimental Setup": "All models fine-tuned in this paper uses the mT5 architecture using A100 GPUs with 40GB RAM. We use the all default hyperparameters of the HuggingFace’s Seq2SeqTrainingArguments except:\n• warmup_steps3 was set to 500, instead of the default 0\n• auto_find_batch_size is enabled with the default algorithm to determine batch size automatically\n• max_steps is set at 200,000. We cap the maximum number of model updates to 200K to limit the computing resources used for our experiments to approximately 24 hours per model, vis-a-vis ‘few-shot’ training.\nWe fine-tuned a zero-shot lexicon-enriched system mT5 model with Aymara-English lexicon, the Spanish-Aymara and English-Aymara Global Voices corpus and Spanish-Aymara XNLI training data split for the training data. And we use the Spanish-Aymara XNLI development data split provided by the shared task organizers to select the best performing model.\nOur official submission to the shared task is selected from the best-performing system that scored the lowest perplexity loss and highest BLEU score. Other than the best performing zeroshot lexicon-enriched system (mT5-lex), we experimented and a baseline model that only finetuned Spanish-Aymara Global Voice and XNLI\n3This hyperparameter is used to gradually increased the learning rate to make training more stable (Huang et al., 2020). The original transformer (Vaswani et al., 2017) set the warmup to 4,000.\ndataset (mT5-base) and a second baseline that adds on the English-Aymara Global Voices data to Spanish-Aymara Global Voices and XNLI dataset (mT5-zero). Table 1 summarizes the datasets used to train the corresponding mT5 models.",
|
| 8 |
+
"4 Results": "Our official submission to the shared-task scored a measly 0.12 BLEU (Papineni et al., 2002) and 9.22 ChrF score (Popović, 2015) on the AmericasNLP 2023 shared task test set. The best performing team in the shared task achieved 4.45 BLEU and 36.24 ChrF. The target Aymara text from the test set was not released publicly, hence we present the results of our model variants on the development set.\nWe note the oracle effect of selecting the best model during training based on the development set, thus the results from Table 2 might be inflated.\nAs a sanity check, we translated the lexicon used to train mt5-lex from English into Spanish using the NLLB machine translation model (Costa-jussà et al., 2022) and count the tokens from the lexicon that matches the development texts. We found that the lexicon has little matches to the tokens in the development sets, see Appendix A for more details.",
|
| 9 |
+
"5 Conclusion": "In this paper we present our participation in the AmericasNLP 2023 Spanish-Aymara machine translation shared task. We experimented with adding an English-Aymara lexicon and training\nWe share the follow resources created in our participation for future researchers to improve English/Spanish-Aymara translations.\n• English-Aymara Lexicon • mt-base model • mt-zero model • mt-lex model • Model training script",
|
| 10 |
+
"A Lexicon Matches in Development Set": "There are 81 unique words that matches the Spanish translated lexicon to the tokens in the development set. The matches sum up to a frequency of 373 out of a total number of 53,135 in the development set on the Spanish source. However when we match the target Aymara text with the lexicon and we find only 4 unique words matches that occurred 9 times in the development set. Looking at the sentences that contains the Aymara word matches to the lexicon, the Aymara sentences from the development set contains loan words either from Spanish or English,\nThe 4 unique Spanish - Aymara lexicon matches are:\n• el vuelo -> fly • mayo -> may • firme -> firm • hijo -> son\nThe sentences that contains the target side matches are:\n• The firm Uk ullartatï. • Tamax may maya temanakanw yatiñ munapxchixa. • Aka jan walt’awix may may lup’iy-\npachatamxa, ukampis samart’awim suyt’am.\n• Jichhurux awkixan nayra jakawipat arst’awaya ukatx kunawsatix Estados Unidos markar sarawayjix may may kast sarawinak utjirinakaw uñicht’ayätani • I’ll fly away uk ajlliristxa. • Aruskipt’aw Hilbert, Las mariposas son libres,\nEl mago de Oz, Tierra de juguetes y Vuelos ukanakatx purt’anirinakax uñjtawayapxaniwa.\n• Ukampirus, niyapunix may uñjiristwa, uh, V6 inas.\nWe note that the underlined loan phrases matches contributes to the matching counts in the lexicon. And when it comes to the Aymara lexicon entry ‘may’, it is a false-friend match, in both development setences that contains ‘may may’, it phrase seems to be a grammatical/syntactic construct.\nWith the above anecdote, we find that lexicon effects in machine translation might not be evident in metrics scores if the lexicon matches in the test set is low, unlike previous studies of using lexicon in high resource languages (Tan et al., 2015; Yvon and Rauf, 2020)."
|
| 11 |
+
}
|
ACL_23_no_limitation/ACL23_1199.json
ADDED
|
@@ -0,0 +1,15 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1199",
|
| 3 |
+
"Title": "PlayGround Low Resource Machine Translation System for the 2023 AmericasNLP Shared Task",
|
| 4 |
+
"abstractText": "This paper presents PlayGround’s submission to the AmericasNLP 2023 shared task on machine translation (MT) into indigenous languages. We finetuned NLLB-600M, a multilingual MT model pre-trained on Flores-200, on 10 low-resource language directions and examined the effectiveness of weight averaging and back translation. Our experiments showed that weight averaging, on average, led to a 0.0169 improvement in the ChrF++ score. Additionally, we found that back translation resulted in a 0.008 improvement in the ChrF++ score.",
|
| 5 |
+
"1 Introduction": "We participated in the AmericasNLP 2023 (Ebrahimi et al., 2023) shared task with the goal of advancing previous studies (Mager et al., 2021) on indigenous American languages. The task is to translate Spanish into 10 indigenous languages, including Ashaninka, Aymara, Bribri, Guarani, Hñähñu, Nahuatl, Quechua, Raramuri, ShipiboKonibo, and Wixarika. Additionally, there was another language, Chatino1, for which we did not participate in.\nWe started with the monolingual and bilingual data from Mager et al. (2021) and finetuned NLLB600M, a multilingual pre-trained MT model from Meta’s No Language Left Behind (NLLB) project (NLLBTeam et al., 2022) both bilingually and multilingually. On top of that, we employed weight averaging and back translation. For back translation, we additionally filtered the back translated sentence pairs to improve the data quality.\nWe demonstrate that training on model weights averaged from multiple checkpoints improves translation quality, as indicated by a 0.0169 increase in the ChrF++ score on average, without requiring additional computation resources. Additionally, we found that back translation can enhance translation\n1https://scholarworks.iu.edu/dspace/handle/ 2022/21028\nquality for low-resource languages, although it is sensitive to the quality of synthetic data. To address this, we introduced a data filtering technique to improve the quality of synthetic data. With filtered back translation, our system achieved an average improvement of 0.008 in the ChrF++ score. Furthermore, our study reveals that multilingual finetuning achieves comparable translation quality to bilingual fine-tuning for low-resource languages.\nWe selected the bilingual model with weight averaging and back translation as our final submission. The implementation of this study is available in our Git repository2.",
|
| 6 |
+
"2.1 Data": "We adopted the data preparation method described by the University of Helsinki’s submission to AmericasNLP 2021 (Vázquez et al., 2021) for our system. The details of the dataset can be found in Table 1. Our model training utilized the filtered parallel data (referred to as parallel data), which consisted of the training data provided by the organizers as well as additional data collected by the University of Helsinki (Vázquez et al., 2021). In order to generate synthetic parallel data (referred to as synthetic data), we employed monolingual data and applied back translation techniques (refer to Section 2.3). The development data was used for model selection purposes.",
|
| 7 |
+
"2.2 Pre-trained Model": "Our models are based on the NLLB-600M Seq2Seq pre-training scheme introduced by the NLLB team (NLLBTeam et al., 2022). For tokenization, we utilize the SentencePiece tokenizer (Kudo and Richardson, 2018), following the NLLB configuration. The NLLB model was initially trained on\n2https://github.com/KaieChen/ameircasnlp2023\n173\nthe Flores-200 dataset, which consists of Aymara, Guarani, Quechua, and Spanish.",
|
| 8 |
+
"2.3 Fine-tuned Models": "We fine-tune NLLB-600M using the data mentioned in Table 1. For both X-to-Spanish and Spanish-to-X directions, we fine-tune NLLB-600M using filtered parallel data in both bilingual and multilingual way. This produces 20 bilingual models and 2 multilingual models.\nWe leverage the above X-to-Spanish models to generate back translated data to enrich the training corpus. Then we further fine-tune the Spanish-to-X models with parallel dataset extended with back translated sentence pairs.\nThe final models are obtained with weight averaging since the training can be unstable with insufficient data.",
|
| 9 |
+
"2.3.1 Back Translation": "In order to make use of monolingual data in indigenous languages, we employed back translation. Specifically, we froze the decoder layers of NLLB model and performed fine-tuning of an Xto-Spanish model using parallel data. Then, we utilized this model to generate synthetic sentences.\nData filtering: Synthetic sentences may contain noise. To address this issue, we implement a data filter to select a subset of synthetic sentences that will expand the original parallel dataset (Ranathunga et al., 2023). In our task, we initially fine-tuned a Spanish-to-X model using the parallel data. Subsequently, we evaluated this model on the synthetic sentences and selected the top N samples with the lowest cross-entropy loss. The value of N\nis determined by the following:\nN = min(|Ypar|, |Ysyn|) (1)\nwhere |Ypar| represents the number of segments in the parallel dataset, and |Ysyn| represents the number of segments in the synthetic dataset.\nFinally, we combined the selected synthetic data with the parallel data and proceeded to perform additional fine-tuning of the NLLB model.",
|
| 10 |
+
"2.3.2 Weight Averaging": "Studies have shown that averaging the weights of multiple finetuned models can enhance accuracy (Wortsman et al., 2022). In our training approach, the weights of the next epoch are trained based on the average of the model weights from the previous K epochs. For inference, we compute the final model by averaging the model weights from the last K epochs. The model can be defined as follows:\nNLLB(x; Θt) = NLLB(x; 1\nK\nK∑\nk=1\nΘt−k) (2)\nwhere Θt represents the model parameters at epoch t.\nThis technique shares similarities with training different models using various hyperparameters (Wortsman et al., 2022; Xu et al., 2020). However, as we only need to train a single model, this technique can be particularly efficient for large language models. The effectiveness of this approach is further discussed in Section 3.",
|
| 11 |
+
"2.3.3 Hyperparameters": "In the fine-tuning process, we froze the encoder layers of the NLLB model, considering its prior training on a vast amount of Spanish sentences. We optimized the model using AdamW (Loshchilov and Hutter, 2017) with hyperparameters β = (0.9, 0.999), ϵ = 10−6. We employed a learning rate of 3×10−4 for a total of 10, 000 iterations. For regularization, we utilized the same dropout rate as the original NLLB model and a weight decay of 0.01. Furthermore, for weight averaging, we set the value of K to be 5.",
|
| 12 |
+
"2.4 Evaluation": "We report the results using ChrF++ (Popović, 2017), following the evaluation script3 provided by the AmericasNLP 2023 shared task. ChrF++\n3https://github.com/AmericasNLP/ americasnlp2023\ncaptures the character-level performance, making it particularly suitable for evaluating the polysynthetic properties observed in many indigenous languages (Zheng et al., 2021).",
|
| 13 |
+
"3 Results": "The results are presented in Table 2 for both the development and test datasets. Our Bi++ model demonstrates improvements in four languages: Hñähñu, Aymara, Asháninka, and Quechua, compared to the Baseline model provided by the organizer. In general, the trends in results for the development and training datasets are similar, except for Rarámuri and Bribri. This discrepancy may be attributed to the test dataset containing more unknown tokens, to which our model is sensitive.\nPrevious study (Mager et al., 2021) has primarily focused on fine-tuning bilingual machine translation models. However, the results from our Multi++ and Bi++ models demonstrate the promising potential of multilingual fine-tuning (Tang et al., 2020). On average, the ChrF++ score for Multi++ is only 0.0012 lower than that of Bi++.\nWe also compared the effectiveness of weight averaging and back translation. Weight averaging improved translations for all target languages. On average, Multi+ achieved a ChrF++ score that was 0.0169 higher than Multi. These results indicate that our simple technique can enhance low-resource machine translation without requiring additional computational resources.\nHowever, the impact of back translation varied across languages, as observed in the results for Multi+ and Multi++. On average, the implementation of back translation resulted in a 0.008 im-\nprovement in the ChrF++ metric. For Wixarika and Aymara, there was a slight drop in the ChrF++ scores after back translation. Despite performing data filtering, the quality of synthetic data largely depends on the performance of the X-to-Spanish model.\nIn summary, our fine-tuning technique has shown improvements in performance. However, with further refinements and design enhancements, there is potential for our model to achieve higher levels of performance.",
|
| 14 |
+
"4 Conclusion": "In this paper, we presented our submission to the AmericasNLP 2023 shared task. Our system utilized the NLLB-600M pre-trained model to translate Spanish into 10 indigenous languages. We also investigated the potential of multilingual translation models, which showed promising results. Additionally, we found that averaging model weights from previous epochs proved to be an efficient and effective approach. While back translation demonstrated performance improvements, further methods are necessary to address noisy data. These findings highlight the positive outcomes of our study and provide valuable insights for future advancements in low-resource machine translation techniques."
|
| 15 |
+
}
|
ACL_23_no_limitation/ACL23_1200.json
ADDED
|
@@ -0,0 +1,25 @@
| 1 |
+
{
|
| 2 |
+
"File Number": "1200",
|
| 3 |
+
"Title": "Four Approaches to Low-Resource Multilingual NMT: The Helsinki Submission to the AmericasNLP 2023 Shared Task",
|
| 4 |
+
"abstractText": "The Helsinki-NLP team participated in the AmericasNLP 2023 Shared Task with 6 submissions for all 11 language pairs arising from 4 different multilingual systems. We provide a detailed look at the work that went into collecting and preprocessing the data that led to our submissions. We explore various setups for multilingual Neural Machine Translation (NMT), namely knowledge distillation and transfer learning, multilingual NMT including a high-resource language (English), languagespecific fine-tuning, and a system with a modular architecture. Our multilingual Model B ranks first in 4 out of the 11 language pairs.",
|
| 5 |
+
"1 Introduction": "This paper presents the submission of the HelsinkiNLP team to the AmericasNLP 2023 Shared Task. The task consisted in developing Machine Translation (MT) systems for 11 indigenous languages of the Americas: Aymara (aym), Bribri (bzd), Asháninka (cni), Chatino (czn), Guarani (gn), Wixarika (hch), Nahuatl (nah), Hñähñu (oto), Quechua (Quy), Shipibo-Konibo (shp), and Rarámuri (tar). The AmericasNLP task has been running for two years: in 2021 (Mager et al., 2021) it was first introduced, and in 2022 it consisted of Speech-to-Text Translation (STT).1 This year’s task is similar to the one held in 2021, but it includes an additional language (Chatino) and the use of the development set in training is not allowed. Our 2021 submission (Vázquez et al., 2021) reached the first rank in nine out of ten languages and serves as the baseline for this year’s task.\nThe 11 target languages involved in the task vary a lot in terms of “resourcedness”. On one side of the spectrum, there are languages like Quechua and Guarani with millions of native speakers, whereas on the other end, the variety of Hñähñu\n1http://turing.iimas.unam.mx/americasnlp/st. html\nused in the development and test sets only has about 100 elder speakers.2 Many of the target languages show dialectal variation, and some have different spelling norms and conventions. Furthermore, some datasets contain instances of codeswitching with Spanish, and some of the languages are polysynthetic. All these factors make the task at hand particularly challenging.\nA large part of our effort focuses on increasing the amount of parallel data for training. Building on our work for the 2021 shared task, we employ several strategies: mining, extraction and alignment of publicly available parallel resources, backtranslation of monolingual data (Sennrich et al., 2016), and data augmentation by pivoting through English (Xia et al., 2019).\nOn the modelling side, our winning 2021 submission was based on a multilingual (one-to-many) model that was pretrained mostly on the Spanishto-English task and later fine-tuned on the lowresource indigenous languages. We keep this general approach in most of this year’s submissions, but provide some variations to this theme:\nModel A uses knowledge distillation and transfer learning instead of training from scratch. In this context, we also experiment with different data labeling schemes.\nModel B reproduces our 2021 setup with updated data.\nModel C reimplements Model B’s strategy using OpusTrainer3 and introduces a languagespecific fine-tuning step.\nModel D uses a modular architecture in a multilingual setting with language-specific decoder modules.\n2https://github.com/AmericasNLP/ americasnlp2023/blob/main/data/information_ datasets.pdf\n3https://github.com/hplt-project/OpusTrainer\n177\nOur best-performing model is Model B. The collected data and our code are publicly available on our fork of the organizers’ Git repository.4\nThe rest of the paper is organised as follows. Section 2 provides a detailed description of our data collection and preparation efforts. Section 3 describes in detail the models presented. Section 4 outlines the results and, finally, section 5 concludes our work.",
|
| 6 |
+
"2 Data collection and preparation": "Similar to our 2021 submission, we worked on finding relevant corpora from additional sources and cleaning and filtering them. We utilised the OpusFilter toolbox5 (Aulamo et al., 2020), which provides both ready-made and extensible methods for combining, cleaning, and filtering parallel and monolingual corpora. OpusFilter uses a configuration file that lists all the steps for processing the data; in order to make quick changes and extensions programmatically, we generated the configuration file with a Python script.",
|
| 7 |
+
"2.1 Data collection": "We combined the data previously collected for our 2021 participation with some new resources. An overview of the resources, including references and URLs, is given in Table 4 in the appendix.\nOrganizer-provided resources The shared task organizers provided parallel datasets for training for all 11 languages. These datasets are referred to as train in this paper. For some of the languages (e.g., Ashaninka, Wixarika and Shipibo-Konibo), the organizers pointed participants to repositories containing additional data. We refer to these resources as extra. Furthermore, the organizers provided development (dev) and test (test) sets for all 11 language pairs of the shared task (Ebrahimi et al., 2023).\nOPUS The OPUS corpus collection (Tiedemann, 2012) provides only few datasets for the relevant languages. We utilized the GNOME, MozillaI10n and Ubuntu corpora, which consist of localization files. Additionally, we made use of the Tatoeba and Wikimedia corpora, which have been recently updated on the OPUS website.6 These bitexts contain\n4https://github.com/Helsinki-NLP/ americasnlp2023-st\n5https://github.com/Helsinki-NLP/OpusFilter, version 2.6.\n6https://opus.nlpl.eu/\n384 sentence pairs for Aymara, 25233 for Guarani, 169 for Nahuatl and 1187 for Quechua parallel with Spanish.\nTo ensure collecting data only for the relevant languages, we ran language detection on the corpora. For language identification we used HeLIOTS (Jauhiainen et al., 2022), which includes language models for Guarani, Nahuatl and Quechua. We kept only pairs where both the source and the target sentences are detected to be in the correct language. For the Spanish side, we also accepted sentences identified as other Romance languages, namely Catalan, Galician, French, Portuguese, Extremaduran and Occitan. For Aymara and Nahuatl, we chose to accept sentences where the detected language is not English or Spanish, as Aymara is not included in the language model and only a small proportion of sentences were detected to be Nahuatl. The language identification filtering leaves 320 sentence pairs for Aymara, 19751 for Guarani, 153 for Nahuatl and 718 for Quechua.\nFLORES The FLORES-200 development and test sets (NLLB Team et al., 2022) cover Aymara, Guarani and Quechua. Since this is a multiparallel dataset, we paired the indigenous languages with their corresponding Spanish sentences. We concatenated the development and test sets and added them to our training data.\nBibles The JHU Bible corpus (McCarthy et al., 2020) covers all languages of the shared task with at least one Bible translation. When several Bibles were available for a given indigenous language, we scored them with a character 6-gram language model trained on the development sets and chose the Bible(s) with the lowest average cross-entropy scores. We paired them with the available Spanish Bibles using the product method in OpusFilter to randomly take at most 3 different versions of the same sentence (skipping empty and duplicate lines).7\nLegal texts, educational material and news In 2021, we collected constitutions and laws of various Latin American countries with their translations into indigenous languages. We expanded this collection by adding the Chatino–Spanish Mexican constitution. 
We also added the Universal Declaration of Human Rights (UDHR) where avail-\n7We sampled three Spanish sentences when there was a single Bible version for the the indigenous language, two for 2–3 versions, and one for more than three versions.\nable in the Universal Declaration of Human Rights Translation Project.8 Furthermore, we extracted Nahuatl and Bribri educational material as well as Guaraní parallel news items from PDF documents and websites. The document and sentence alignment was done semi-automatically using sourcespecific heuristics and the hunalign9 (Varga et al., 2005) tool. We provide a script in our repository to replicate these data gathering and alignment procedures.10\nSpanish–English data All submitted models take advantage of abundant parallel data for Spanish–English. The resources come from OPUS (Tiedemann, 2012) and include the following sources: OpenSubtitles, Europarl, GlobalVoices, News-Commentary, TED2020, Tatoeba, bible-uedin. The Spanish–English WMT-News corpus, also from OPUS, is used for validation.",
|
| 8 |
+
"2.2 Back-translations of monolingual data": "The organizers also provided some monolingual resources for some indigenous languages. We also obtained monolingual Wikipedia dumps for some languages through the Tatoeba Translation Challenge project (Tiedemann, 2020). We used the 2021 reverse Model B to translate these resources to Spanish (thereby fixing the processing for Quechua reported in the 2021 paper).",
|
| 9 |
+
"2.3 Pivot translations of English-aligned data": "Some parallel datasets provided by the organizers or available on OPUS were aligned with English. Furthermore, the No Language Left Behind (Costajussà et al., 2022) project released training data for Aymara–English and Guarani–English. We used a publicly available English-to-Spanish MT system from the OPUS-MT project11 to translate the English side to Spanish in order to constitute additional Spanish–Indigenous data.",
|
| 10 |
+
"2.4 Data normalization, cleaning and filtering": "We noticed that some of the corpora in the same language used different orthographic conventions\n8https://www.ohchr.org/en/human-rights/ universal-declaration/universal-declarationhuman-rights/about-universal-declaration-humanrights-translation-project\n9https://github.com/danielvarga/hunalign 10under data/getdata2023.py 11We used the opusTCv20210807+bt_transformer-big_ 2022-03-13 model from https://github.com/HelsinkiNLP/Tatoeba-Challenge/tree/master/models/engspa.\nand had other issues that would hinder NMT model training. We applied various data normalization and cleaning steps to improve the quality of the data, with the goal of making the training data more similar to the development data (which we expected to be similar to the test data).\nFor Bribri, Raramuri and Wixarika, we found normalization scripts or guidelines on the organizers’ Github page or sources referenced therein (cf. Ô entries in Table 4). We reimplemented them as custom OpusFilter preprocessors. For Chatino, we implemented a preprocessor that normalized the tone characters variations in the different datasets.\nThe organizer-provided training sets for Bribri, Hñähñu, Nahuatl, and Raramuri were originally tokenized. We detokenized these corpora with the Moses detokenizer supported by OpusFilter, using the English patterns. Finally, for all datasets, we applied OpusFilter’s WhitespaceNormalizer preprocessor, which replaces all sequences of whitespace characters with a single space.\nWe filtered some of the datasets using predefined filters from OpusFilter. Not all filters were applied to all languages; instead, we selected the appropriate filters based on manual observation of the data and the proportion of sentences removed by the filter. Appendix A describes the filters in detail.",
|
| 11 |
+
"2.5 Data tagging": "Since all our models are multilingual models with several target languages, we include a target language tag at the beginning of the source sentence. Furthermore, we add two more tags: variant tags and quality tags.\nVariant tags represent the different variants of a particular language and they were inferred either from the documentation of the data source or from a manual inspection focusing on the character set of the specific text. In the end, we only used variant tags for two languages: Chatino and Quechua. The <default> variant is always the variant of the development and test sets. Besides the <default> variant, for Chatino we define the <plain> variant, which does not use tones. It is important to mention that 95% of our training data for Chatino belongs to the <plain> variant. For Quechua, the development and test data is in Ayacucho Quechua (quy), whereas other data are in Cuzco Quechua or a Bolivian variety of Quechua. We define the variant labels <quz> and <quh> for the latter two.\nQuality tags refer to the origin of the data:\n<default> for relatively clean data sources, <noisy> for unreliable data sources or with noisy sentence alignment, <bt> for back-translations, and <bible> for Bibles. The statistics of the quality tags for the training corpora are provided in subsection 2.8.\nIf not specified otherwise, all tags are used during the training phase. When generating test translations, we use the language tag, followed by the default variant and quality tags.",
|
| 12 |
+
"2.6 Concatenation and deduplication": "After tagging, the different training sets were concatenated, and all exact duplicates were removed from the data using OpusFilter’s duplicate removal step. Note that because of the language variant tags, some duplicates marked as different variants may have remained.\nFor the Spanish–English data, duplicates were removed separately from the OpenSubtitles part and the rest of the data.",
|
| 13 |
+
"2.7 Data postprocessing": "We apply data postprocessing steps for two target languages: Chatino and Hñähñu.\nChatino has a tonal structure, where each word is tagged at the end with a superscript tone character (ABcEfGHIJK), for example: KyqyaA noA shtyaH renqJ 2/2022-CC qoE 4/2022-CC. Sometimes, the character J can also be found within a word. A manual inspection of the results allowed us to see that our models were not producing the superscript characters, presumably due to Unicode normalization performed during subword segmentation with SentencePiece. Therefore, we opted for substituting the characters in the character set mentioned above by their superscript counterparts if they were found at the end of a token. For J, we replaced all occurrences regardless of their position.\nRegarding Hñähñu, organizers already acknowledge that the training variant (Valle del Mezquital) is a different one from the development and test sets (Ñûhmû de Itxenco), a severely endangered variant spoken by less than 100 people. The training data did not contain any sample from the development and test set variant, having some characters in the training data that never appear in the development set. In consequence, we chose to substitute all occurrences of the character set that only appear in the training data, by their non-diacritic counterpart. For example, ë becomes e, è becomes e and ě becomes e. The full character substitution can be\nconsulted in our GitHub repository.",
|
| 14 |
+
"2.8 Data sizes": "Table 1 shows the sizes of the used datasets. train refers to the official training data and extra to all other datasets except the Bibles. The data sizes are listed separately before and after filtering, as well as after concatenation and duplicate removal (combined). There is a difference of almost two orders of magnitude between the smallest (czn) and largest (quy) combined training data sets. Including the Bibles data (bibles) evens out the situation a bit, but Quechua has still significantly more data than any of the other languages. The development sets comprise 500–1000 sentences for each of the languages.\nAs discussed in subsection 2.5, we use different quality tags for different data sources. Table 1 also shows the amount of the different tags in the combined set. In addition, <bible> was used always for bibles.\nFinally, Table 2 shows the sizes of the Spanish– English datasets before and after filtering. Model A uses different data than models B, C and D; see section 3 for details.",
|
| 15 |
+
"3 Models": "We tested four major model configurations, which we refer to as A, B, C and D. All models are multilingual neural MT (NMT) models and include the Spanish–English translation task in some form. Models B and C also include language-specific finetuning steps. All models are based on the Transformer architecture (Vaswani et al., 2017). Models A and C are trained using the MarianNMT Toolkit (Junczys-Dowmunt et al., 2018), while B and D are implemented with OpenNMT-py 2.0 (Klein et al., 2020). All models were trained on a single GPU, except Model D, which was trained on 4 GPUs.\nWe use subword SentencePiece segmentation (Kudo and Richardson, 2018) for the training data. We train a shared vocabulary for all languages with size 32k that is used in all the models. Further details of the configurations are listed in Appendix B.",
|
| 16 |
+
"3.1 Model A": "Model A is a multilingual one-to-many model based on knowledge distillation (Kim and Rush, 2016), where you distill a smaller student model from a powerful teacher; and transfer learning (Zoph et al., 2016), where you train a parent model\non a high-resource pair and then continue training a child model on the low-resource data.\nRegarding transfer learning, we train a parent model on a high-resource language pair (es–en) and then we continue training on the indigenous languages’ data. Furthermore, for the es–en parent model, we apply knowledge distillation. We distill a es–en system from the No Language Left Behind (NLLB) model12 (Costa-jussà et al., 2022) by simply training a new model on NLLB translated data from Spanish into English. The rationale behind this decision is to benefit from the advantages of a large pretrained NMT model while optimizing its size to enable effective fine-tuning.\nIn contrast to the other models, we exclusively use the OpenSubtitles dataset for Spanish–English training. This dataset consists of relatively brief sentences discussing general subjects. The motivation to use only this dataset was based on an examination of the development sets, which exhibited similar content characteristics. For development, we translate the source Spanish counterpart of the development sets provided by the organizers into English with the NLLB model with the hope that the distilled model will overfit to its teacher’s distributions.\nFor the child model, we experiment with different data labeling schemes and submit three different versions:\n• A.1: Parent model fine-tuned on indigenous data with all tags.\n• A.2: Parent model fine-tuned on indigenous data without quality tags (keeping only the language and variant tags)\n• A.3: Ensemble model of A.1 and A.2",
|
| 17 |
+
"3.2 Model B": "Model B is a multilingual one-to-many model that reproduces the Model B setup from 2021 with updated training data.\nThe training takes place in three phases. In the first phase, the model is trained on 91% of Spanish– English data and 9% of data coming from the indigenous languages. The two English sets, news and opensubs, were assigned the same weight to avoid overfitting on subtitle data. In the second phase, the proportion of Spanish–English data is\n12We use the NLLB-200’s 3.3B variant as the teacher. https:/huggingface.co/facebook/nllb-200-3.3B\nreduced to 37%, with the remainder sampled to equal amounts from the indigenous languages.\nWe train the first phase for 100k steps and pick the best intermediate savepoint according to the English validation set, which occurred after 80k steps. We initialize phase 2 with this savepoint and continue training until 200k steps. We then pick the five most promising savepoints based on the accuracy of the concatenated development sets, and select the best out of these five for each target language separately.\nStarting from these savepoints, we added a third phase with language-specific finetuning, using 40% of English data and 60% of the individual targetlanguage data. We trained these models for an additional 12k steps and selected the best intermediate savepoint. However, language-specific finetuning only increased the results for Ashaninka, Guarani and Raramuri. For the other languages, we used the best model savepoint from the second phase.",
|
| 18 |
+
"3.3 Model C": "Model C is a set of 11 different language-specific models following the same strategy as Model B, trained with OpusTrainer.13 OpusTrainer is a tool for curriculum learning, especially designed for multilingual scenarios, since it allows to specify the desired mixture of datasets from different language sources.\nSimilarly to Model B, the training takes place in three phases. We train our models with all the available data for all language pairs with the following configuration: (1) First, we train for one epoch with 90% of the es–en data and 10% of indigenous data, coming from each of the 11 indigenous languages. (2) Then, we train two epochs with a 50/50 distribution. Finally, (3) we add a language-specific fine-tuning step, where we train with a distribution of 10% of es–en data, 10% of es–indigenous and 80% of the desired language until convergence with early-stopping.\nFor inference, we ensemble the last four checkpoints with different combinations (1, 1-2, 1-2-3, 1-2-3-4) for each model. We select the best ensemble approach for each language pair based on the development set scores.",
|
| 19 |
+
"3.4 Model D": "Model D is a multilingual modular sequence-tosequence Transformer model (Vázquez et al., 2020;\n13https://github.com/hplt-project/OpusTrainer\nEscolano et al., 2021). It is trained to perform Spanish-to-many translation, as well as a denoising auto-encoding objective (Lewis et al., 2020) for each of the 11 indigenous languages as well as English. Each model consists of 12 layers: a 6- layer Spanish encoder and decoders that share s layers followed by 6− s language-specific layers. We trained distinct models with s = 1, 2, 3. Model D is set to s = 1 since it outperformed the others with respect to ChrF scores in the development set. Training details are given in Appendix B.",
|
| 20 |
+
"4 Results": "Our results are shown in Table 3 with the official automatic evaluation metric, ChrF (Popović, 2015). We also include the results of this year’s baseline and the best of the contenders for each of the target languages.\nThe baseline turned out to be quite hard to beat: for five languages (hch, nah, oto, shp, tar), the best submission was less than 2 ChrF points above the baseline. The competition among participants was also very tight this year: for the same five languages, there is less than 1 ChrF point difference between the first and second participant. Differences of less than 2 ChrF points can be observed for two additional languages (cni, gn). We believe that conducting significance testing to compare the participants’ results would be beneficial in this scenario.\nRegarding our models, Model B is our clear bestperforming system. It reached first rank on 4 out of the 11 language pairs and third rank on two other occasions. Model B consistently outperformed all our other models. Its good performance can be attributed to its pre-training phase on SpanishEnglish data including a small percentage of the indigenous data. For this model, we also focused our efforts in checkpoints’ selection. Further analysis will be required to investigate the performance differences between our models B and C, which used the same overall setup but show various minor differences in terms of toolkits, hyperparameters and curriculum definition.\nThe variants of Model A perform very similarly to each other, although removing the quality tags (A.2) leads to a significant increase for es–shp. Comparing models A and model B, our results indicate that training a multilingual model jointly from scratch is more beneficial than transfer learning approaches.\nModel C seems to be on par with models A, although it works particularly well for es–czn. With Model C, we expected that language-specific finetuning would boost results. If we compare models B and C, our results match previous research, where it is stated that low-resource translation benefits from jointly-trained multilingual models (Johnson et al., 2017).\nFinally, while Model D works well for es–shp, outperforming models C and A.2, we observe that in general it yields poor results. Nonetheless, we decided to use it anyway to test it in a real use case. Specifically for Model D, we were interested in testing the knowledge transfer capabilities of modular systems in low-resource multilingual scenarios. Indeed, these systems have demonstrated efficient transfer learning properties (Escolano Peinado, 2022). However, in this set of experiments, Model D lags behind our other nonmodular systems for all other languages, indicating that perhaps the data available to train the languagespecific modules was insufficient or that the parameter sharing strategies we chose were not optimal. In our experiments we also noticed that the modular systems ignore the variant and quality tags, which hampers their performance due to the imbalance of training resources. This can be seen in the case of es-czn, where the model is unable to learn the variant of the test set due to the unbalanced amount of that variant in the training data (only 5%).",
|
| 21 |
+
"5 Conclusions": "In this paper, we have presented our contribution to the AmericasNLP 2023 Shared Task. We have described our efforts in terms of data collection and processing. We presented our 6 submissions to the task for all language pairs. We explore various setups for multilingual NMT, including knowledge distillation, transfer learning, multilingual NMT with English, language-specific fine-tuning, and a multilingual modular system.\nOur strongest system follows the same architecture as our winning submission in 2021, which was used as the baseline for this year. There are two main differences between our current submission and the baseline:\n• Additional training data: the amount of added resources varies across the languages, and not all of our collection efforts seem to have paid off. While results improved substantially for Guarani, no significant improvements could\nbe observed for Nahuatl and Quechua. For Bribri, the model generalizes better to the test set than in 2021, but is still far behind the best contender.\n• Inclusion of variant and quality tags: the experiments with Model A suggest that variant and quality tags can help, but that our current attribution of tags was not optimal. It could be promising to base the tags on more objective criteria like character and word overlap or alignment quality.\nThese two additions have allowed us to beat our own baseline.",
|
| 22 |
+
"Acknowledgements": "This work was supported by the FoTran project, funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme under grant agreement No 771113.\nThis work was also supported by the HPLT project which has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350.",
|
| 23 |
+
"A OpusFilter settings": "The following filters were used for the training data except for back-translated data, Bibles and the OpenSubtitles data for Model A:\n• LengthFilter: Remove sentences longer than 1000 characters. Applied to Aymara, Chatino, Nahuatl, Quechua, Raramuri.\n• LengthRatioFilter: Remove sentences with character length ratio of 4 or more. Applied to Ashaninka, Aymara, Chatino, Guarani, Hñähñu, Nahuatl, Quechua, Raramuri, Wixarika.\n• CharacterScoreFilter: Remove sentences for which less than 90% characters are from the Latin alphabet. Applied to Aymara, Quechua, Raramuri.\n• TerminalPunctuationFilter: Remove sentences with dissimilar punctuation; threshold -2 (Vázquez et al., 2019). Applied to Aymara, Quechua.\n• NonZeroNumeralsFilter: Remove sentences with dissimilar numerals; threshold 0.5 (Vázquez et al., 2019). Applied to Aymara, Quechua, Raramuri, Wixarika.\nThe Bribri and Shipibo-Konibo corpora seemed clean enough that we did not apply any filters for them.\nAfter generating the Bible data, we noticed that some of the lines contained only a single ’BLANK’ string. The segments with these lines were removed afterwards.\nFrom the provided monolingual datasets, we filtered out sentences with more than 500 words.\nThe back-translated data was filtered with the following filters:\n• LengthRatioFilter with threshold 2 and word units\n• CharacterScoreFilter with Latin script and threshold 0.9 on the Spanish side and 0.7 on the other side\n• LanguageIDFilter with a threshold of 0.8 for the Spanish side only.\nThe OpenSubtitles data for Model A was filtered with the following filters:\n• LengthRatioFilter with threshold of 3 and word units.\n• CharacterScoreFilter with Latin script and threshold 0.75 on both sides.\n• AlphabetRatioFilter with a default threshold of 0.75.\n• LongWordFilter with a default maximum length of 40.\n• AverageWordLengthFilter with default values of minimum length of 2 and maximum length of 20.",
|
| 24 |
+
"B Hyperparameters": "Models A use a 6-layered Transformer with 8 heads, 512 dimensions in the embeddings and 2,048 dimensions in the feed-forward layers. The batch size is 1,000 sentence-pairs. The Adam optimizer is used with β1=0.9 and β2=0.98. The models are trained until convergence with earlystopping on development data after ChrF has stalled 10 times.\nModel B uses a 8-layered Transformer with 16 heads, 1,024 dimensions in the embeddings and 4,096 dimensions in the feed-forward layers. The batch size is 9,200 tokens in phase 1 and 4,600 tokens in phase 2, with an accumulation count of 4. The Adam optimizer is used with beta1=0.9 and β2=0.997. The Noam decay method is used with a learning rate of 2.0 and 16000 warm-up steps. Subword sampling is applied during training (20 samples, α = 0.1). As a post-processing step, we removed the <unk> tokens from the outputs of Model B.\nModel C uses a 6-layered Transformer with 8 heads, 512 dimensions in the embeddings and 2,048 dimensions in the feed-forward layers. The batch size is 1,000 sentence-pairs. The Adam optimizer is used with β1=0.9 and β2=0.98.\nModel D was trained for a total of 150K steps to minimize the negative log-likelihood of the target translation. We accumulate gradients over all translation directions before back-propagation, using AdaFactor (Shazeer and Stern, 2018) with learning rate of 3.0. We trained the model on 4 AMD MI100 GPUs for ∼48hrs. The 8-headed Transformer layers have 512 dimensions in the self attention and 2,048 in the feed forward sub-layers.\nAymara aym\nGlobalVoices (Tiedemann, 2012; Prokopidis et al., 2016)\n⋆ BOconst: https://www.kas.de/c/document_library/get_file?uuid= 8b51d469-63d2-f001-ef6f-9b561eb65ed4&groupId=288373\n⋆ FLORES-200: https://github.com/facebookresearch/flores\n⋆¹ NLLB-MD: https://github.com/facebookresearch/flores\n⋆ OPUS: Mozilla-I10n, wikimedia (Tiedemann, 2012)\n⋆ UDHR: https://searchlibrary.ohchr.org/search?ln=en&cc=UDHR+ Translation+Collection\n⋆¹ GlobalVoices (en-aym) (Tiedemann, 2012; Prokopidis et al., 2016)\n⋆ OPUS: Wikipedia (Tiedemann, 2020)\n[ ayr-x-bible-2011-v1\nBribri bzd\n(Feldman and Coto-Solano, 2020)\n⋆ MEP: https://mep.go.cr/educatico/minienciclopedias-pueblosindigenas\n⋆ IUCN: https://portals.iucn.org/library/sites/library/files/ documents/2016-071.pdf\n[ bzd-x-bible-bzd-v1\nÔ https://github.com/AmericasNLP/americasnlp2021/blob/main/ data/bribri-spanish/orthographic-conversion.csv\nAshaninka cni\nhttps://github.com/hinantin/AshaninkaMT (Ortega et al., 2020; Cushimariano Romano and Sebastián Q., 2008; Mihas, 2011)\n⋆ ShaShiYaYi (Bustamante et al., 2020): https://github.com/iapucp/ multilingual-data-peru\n[ cni-x-bible-cni-v1\nChatino czn\nhttps://scholarworks.iu.edu/dspace/handle/2022/21028\n⋆ MXconst: https://constitucionenlenguas.inali.gob.mx/\n⋆¹ CTP-ENG: https://github.com/AmericasNLP/americasnlp2023\n[ cta-x-bible-cta-v1, ctp-x-bible-ctp-v1, cya-x-bible-cya-v1\nGuarani gn\n(Chiruzzo et al., 2020)\n⋆ PYconst: http://ej.org.py/principal/constitucion-nacional-enguarani/\n⋆ News: https://spl.gov.py/es/index.php/noticias & https://www. 
spl.gov.py/gn/index.php/marandukuera\n⋆ Jojajovai: https://github.com/pln-fing-udelar/jojajovai\n⋆ FLORES-200: https://github.com/facebookresearch/flores\n⋆¹ NLLB-seed: https://github.com/facebookresearch/flores\n⋆ UDHR: https://searchlibrary.ohchr.org/search?ln=en&cc=UDHR+ Translation+Collection\n(Continues on next page)\nGuarani (cont.)\n⋆ OPUS: GNOME, Mozilla-I10n, Tatoeba, Ubuntu, wikimedia (Tiedemann, 2012)\n⋆ OPUS: Wikipedia (Tiedemann, 2020)\n[ gug-x-bible-gug-v1\nWixarika hch\nhttps://github.com/pywirrarika/wixarikacorpora (Mager et al., 2018)\n⋆ MXconst: https://constitucionenlenguas.inali.gob.mx/\n⋆ corpora.wixes, paral_own, segcorpus.wixes: https://github.com/ pywirrarika/wixarikacorpora ⋆ social.wix: https://github.com/pywirrarika/wixarikacorpora\n[ hch-x-bible-hch-v1\nÔ https://github.com/pywirrarika/wixnlp/blob/master/normwix.py (Mager Hois et al., 2016)\nNahuatl nah\nAxolotl (Gutierrez-Vasques et al., 2016)\n⋆ MXConst: https://constitucionenlenguas.inali.gob.mx/\n⋆ Educational: https://nawatl.com/category/textos/\n⋆ Dict: https://nahuatl.wired-humanities.org/\n⋆ Short stories: https://nahuatl.org.mx/cuentos-nahuatl-14ejemplares-para-descargar/\n⋆ INPImonograph: https://www.gob.mx/inpi/documentos/monografianacional-los-pueblos-indigenas-de-mexico & https://www.gob. mx/inpi/documentos/libros-en-lenguas-indigenas\n⋆ UDHR: https://searchlibrary.ohchr.org/search?ln=en&cc=UDHR+ Translation+Collection\n⋆ OPUS: Tatoeba, wikimedia (Tiedemann, 2012)\n⋆ OPUS: Wikipedia (Tiedemann, 2020)\n[ azz-x-bible-azz-v1, ncj-x-bible-ncj-v1, nhi-x-bible-nhi-v1\nHnähñu oto\nTsunkua: https://tsunkua.elotl.mx/about/\n⋆ MXConst: https://constitucionenlenguas.inali.gob.mx/\n⋆ Dictionary: http://xixona.dlsi.ua.es/~fran/ote-spa.tsv\n⋆ UDHR: https://searchlibrary.ohchr.org/search?ln=en&cc=UDHR+ Translation+Collection\n[ ote-x-bible-ote-v1\nQuechua quy\nJW300 (quy+quz) (Agić and Vulić, 2019)\n⋆ MINEDU, dict_misc: https://github.com/AmericasNLP/ americasnlp2021/tree/main/data/quechua-spanish\n⋆ PEconst: https://www.wipo.int/edocs/lexdocs/laws/qu/pe/pe035qu. pdf\n(Continues on next page)"
|
| 25 |
+
}
|
ACL_23_no_limitation/ACL23_1201.json
ADDED
|
@@ -0,0 +1,18 @@
| 1 |
+
{
|
| 2 |
+
"File Number": "1201",
|
| 3 |
+
"Title": "Sheffield’s Submission to the AmericasNLP Shared Task on Machine Translation into Indigenous Languages",
|
| 4 |
+
"abstractText": "In this paper we describe the University of Sheffield’s submission to the AmericasNLP 2023 Shared Task on Machine Translation into Indigenous Languages which comprises the translation from Spanish to eleven indigenous languages. Our approach consists of extending, training, and ensembling different variations of NLLB-200. We use data provided by the organizers and data from various other sources such as constitutions, handbooks, news articles, and backtranslations generated from monolingual data. On the dev set, our best submission outperforms the baseline by 11% average chrF across all languages, with substantial improvements particularly for Aymara, Guarani and Quechua. On the test set, we achieve the highest average chrF of all the submissions, we rank first in four of the eleven languages, and at least one of our submissions ranks in the top 3 for all languages.1",
|
| 5 |
+
"1 Introduction": "The 2023 AmericasNLP Shared Task (Ebrahimi et al., 2023) involves developing machine translation systems for translating from Spanish to eleven low resource indigenous languages: Aymara (aym), Bribri (bzd), Asháninka (cni), Chatino (czn), Guarani (gn), Wixarika (hch), Nahuatl (nah), Hñähñu (oto), Quechua (quy), Shipibo-Konibo (shp), and Rarámuri (tar). Developing machine translation systems for these languages is challenging since many of them are polysynthetic (i.e., words are composed of several morphemes) and word boundaries are not standardized; they present different orthographic variations (e.g., classical vs. modern Nahuatl variations); presence of codeswitching is common, among other difficulties of low resource settings.\n1We release code for training our models here: https: //github.com/edwardgowsmith/americasnlp-2023-she ffield\nPrevious work has explored the effectiveness of pretrained machine translation models in low resource settings (Haddow et al., 2022) showing their impact on improving translation quality and addressing data scarcity challenges. Following this approach, our submissions to the 2023 AmericasNLP shared task consist of extending and finetuning various versions of NLLB-200 (Costa-jussà et al., 2022), a state-of-the-art machine translation model specifically designed for low resource settings. NLLB-200 is trained on 202 languages across 1 220 language pairs, including three of the languages present in the AmericasNLP shared task: aym, gn, and quy.2 We further train our models on data from various sources such as constitutions and news articles, and we leverage multilingual training and ensembling to improve their performance. Models are evaluated using chrF (Popović, 2015), the official metric of the task. On the test set, we achieve the highest average chrF across all languages, and the best chrF for four of the languages.\nThe rest of the paper is organised as follows: Section 2 describes the data sources for training our models, Section 3 explains our three submissions in detail, Section 4 presents the results on the dev and test sets, Section 5 analyses the impact of different factors to the model’s performance, Section 6 looks at zero-shot capabilities, and we draw conclusions in Section 7.",
|
| 6 |
+
"2.1 Data Collection": "We collect data from a variety of data sources, including training data provided by the organisers (AmericasNLP 2023), data from prior submissions to the AmericasNLP shared task (Helsinski and REPUcs) and relevant datasets specific to the in-\n2We present inference results on the dev set for these models in Table 4.\n192\ndigenous languages included in the task (NLLB). Table 1 shows the size of the training data for each language. The total amount of training data is unevenly distributed among datasets, with Quechua (557 277), Aymara (173 620), and Guarani (33 938) having the greatest amount of training data.\nAmericasNLP 2023 Data provided by the organisers of the 2023 AmericasNLP Shared Task includes parallel datasets for training the eleven languages. Table 8 contains all datasets and references.\nHelsinski We take data from OPUS (Tiedemann, 2012) and other sources (including constitutions) provided by the University of Helsinski’s submission (Vázquez et al., 2021) to the AmericasNLP 2021 Shared Task (Mager et al., 2021). The collected data from constitutions includes translations of the Mexican constitution into Hñähñu, Nahuatl, Raramuri and Wixarika, of the Bolivian constitution into Aymara and Quechua, and of the Peruvian constitution into Quechua.\nREPUcs We use data collected for the REPUcs’ submission to the 2021 AmericasNLP shared task (Moreno, 2021). They introduce a new parallel corpus with Quechua data from three sources: (1) Duran (2010), which contains poems, stories, riddles, songs, phrases and a vocabulary for Quechua; (2) Lyrics translate (2008) which provides different lyrics of poems and songs; and (3) a Quechua handbook (Iter and Ortiz Cárdenas, 2019).\nNLLB We use two datasets introduced by Costajussà et al. (2022) as part of the training and evaluation for NLLB-200: (1) the NLLB Multi-Domain dataset, which provides 8 809 English-Aymara ex-\namples in the news, health, and unscripted chat domains and (2) the NLLB Seed dataset, which contains 6 193 English-Guarani examples consisting of professionally-translated sentences.\nBibles We also collect translations from the JHU Bible corpus (McCarthy et al., 2020), which provides translations of the bible for all languages of the Shared Task except for Chatino. However, we do not observe performance improvements from using this data in our experiments (Section 5).",
|
| 7 |
+
"2.2 Backtranslations": "We generate backtranslations using the monolingual data sourced by Vázquez et al. (2021) for seven languages. This data comes from Bustamante et al. (2020), Tiedemann (2020), Mager et al. (2018), Tiedemann (2012), and Agić and Vulić (2019). We train NLLB-200 3.3B on X-es for all 11 languages, X, in the task. We take two checkpoints of this model at different stages of training (backtrans 1 and backtrans 2). We find this data improve performance for two of the languages in the task (gn and shp, see Section 4).",
|
| 8 |
+
"2.3 Data Overlap": "We note that NLLB-200, the pretrained machine translation model we base our experiments on (see Section 3) is trained on a portion of the collected data. Specifically, Spanish-Aymara and EnglishAymara data from GlobalVoices, and SpanishQuechua data from Tatoeba, both as part of OPUS. We believe that the inclusion of this data will still be beneficial to the model, since NLLB-200 is not optimised for the languages we are interested in as part of this task.",
|
| 9 |
+
"2.4 Data Processing": "The training data provided by the organisers is tokenised for nah and oto. We detokenise it to put it in line with the rest of the training data. We replace punctuation not included in NLLB-200’s vocabulary. For oto, we find that 7% of the dev set contains characters not in the vocabulary, since these characters do not occur in the training sets, we don’t take steps to handle them. For czn, we replace all superscript tone markings at the end of words with their standard counterparts, and then replace them naively back at inference.",
|
| 10 |
+
"3 Models": "To tackle the 2023 AmericasNLP task on automatic translation of eleven low resource indigenous languages, we use NLLB-200 (Costa-jussà et al., 2022), a state-of-the-art machine translation model specifically designed for low resource settings. We experiment with different distilled versions of NLLB-200 with 600M and 1.3B parameters, and the version with 3.3B parameters. Although inference results on three languages3 show that the largest version, NLLB-3.3B, performs better than smaller versions (see Table 4), due to the large computational cost of using NLLB-3.3B we run most of our experiments with the 1.3B distilled version. Models are fine-tuned on all the training data (Train Total), i.e. all data sources in Section 2 excluding Bibles and backtranslations, unless indicated. Moreover, we look at ensembling as an approach to improve the overall performance.\n3NLLB-200 training data includes aym, gn and quy.\nSubmission 3 We train NLLB-200 1.3B distilled on the training data4 and we choose the best checkpoint based on average chrF across all languages. We submit translations for all languages using this model (NLLB-1.3B (single best)).\nSubmission 2 We take the best-performing single model per language, excluding ensembles. We find that for the majority of languages, the best single model (by dev chrF) is the same as Submission 3, so we only submit additional translations for five languages:\n• NLLB-1.3B (- NLLB Seed) - aym NLLB1.3B trained on all data (Train Total) except for NLLB Seed.\n• NLLB-1.3B (best per lang) - bzd NLLB1.3B trained on all data.\n• NLLB-1.3B (+ backtrans 1) - gn NLLB1.3B trained on all data plus backtranslations from checkpoint 1.\n• NLLB-3.3B - quy NLLB-3.3B trained on all data.\n• NLLB-1.3B (+ backtrans 2) - shp NLLB1.3B trained on all data plus backtranslations from checkpoint 2.\nSubmission 1 We experiment with various ensembles of models in attempt to improve performance further – we only find improvements over Submission 2 through ensembling for five of the\n4We exclude Bibles data and backtranslations.\nlanguages in the task. These selected ensembles are as follows:\n• Ensemble 1 - bzd The best NLLB-1.3B model for bzd and an NLLB-600M model trained on all languages.\n• Ensemble 2 - czn The best average NLLB1.3B model and an NLLB-3.3B model trained on all languages.\n• Ensemble 3 - hch The best average NLLB1.3B model and an NLLB-600M model trained on all languages.\n• Ensemble 4 - quy NLLB-3.3B trained on all languages, NLLB-3.3B trained on just the three supported languages (aym, gn, and quy), and NLLB-1.3B trained on all languages.\n• Ensemble 5 - tar NLLB-1.3B trained on all languages, NLLB-600M trained on all languages, and NLLB-1.3B trained on all languages with a label smoothing of 0.2 (rather than 0.1).",
|
| 11 |
+
"3.1 Experimental Setup": "We train the models in a multilingual fashion across all 11 language pairs present in the task, extending the embedding matrix to cover the tags for the new languages. We experiment with freezing various\nparameters, but find best results from training everything. We run our experiments on a single A100 GPU with batch sizes of 64, 16, and 2 for the 600M, 1.3B-, and 3.3B-parameter models, respectively. We run our experiments in fairseq (Ott et al., 2019). Full hyperparameters for all of our runs are provided in Table 7. To evaluate our models, following the official evaluation, we use chrF (Popović, 2015) computed using SacreBLEU (Post, 2018) with signature: nrefs:1|case:mixed|eff:yes|nc:6|nw:0|space:no|version:2.1.0.",
|
| 12 |
+
"4.1 Dev Set Results": "Table 2 presents the results of our models on the dev set. We observe that for all languages, at least one of our models outperforms the baseline (Vázquez et al., 2021), with the exception of oto where we obtain comparable performance. The greatest improvements over the baseline model are on the three NLLB supported languages: aym (41.1 compared to 32.7), gn (36.9 compared to 31.1) and quy (39.1 compared to 33.8). We note that backtranslations only lead to improved performance on gn and shp, which are the two languages with the greatest amount of available monolingual data.\nInference results NLLB-200 is trained on data from three of the languages in this shared task: quy, aym, gn. Table 4 shows the inference results for these languages on the dev set for different variations of NLLB-200 models, along with our submissions. We observe a considerable improvement from the distilled 600M to 1.3B distilled models, with the greatest improvement over the baseline model for gn. We note that the 1.3B and 3.3B models outperform the baseline model for aym and gn. For quy, the inference results are worse than the baseline, likely due to the large amount of training data available in the task. We are able to improve substantially upon the inference results for quy and aym, but much less so for gn – this may be due to much less training data being available for gn compared to the other two languages.",
|
| 13 |
+
"4.2 Test Set Results": "Results on the test set are shown in Table 3. Overall, our best submission achieves the highest average chrF across all languages from all submissions to the task (the second-best average is 29.4, compared to our 30.5). We also rank first for four of the eleven languages: aym, czn, quy, and shp. Our biggest improvement upon the second-place team is for czn, where we achieve 40.0 compared to 36.6. Submissions 1 and 2 rank in the top 3 for all languages. Surprisingly, the best chrF score was obtained on czn (40.0), the language with the least amount of training data (3 118 examples), followed by quy (39.5), and aym (36.5).",
"5 Additional Experiments": "We provide the results of additional experiments to better understand the impact of various factors to our model’s performance. The results of these experiments are shown in Table 5.\nMultilingual training We look into whether multilingual training is beneficial to the model. For this, we train a 3.3B-parameter model on the quy data only, and compare this version (NLLB-3.3B only quy) to the one trained on all languages (NLLB-3.3B all langs) at the same number of updates (480 000). We find that multilingual training greatly improves the performance on quy, suggesting the model benefits from transfer learning across the languages. We suspect the benefit of the multilingual approach is related to the fact that although the languages included in the task are from different linguistic families, they share linguistic properties (e.g., polysynthetic or agglutinative).\nRandom initialization To analyse the benefit of starting from NLLB-200, we train an equivalent model to the 1.3B parameter version with randomly-initialised parameters. We see that this model performs much worse than the equivalent NLLB-200 model. As expected, we observe the\ngreatest differences on the languages supported by NLLB-200 (aym, gn, quy).\nBibles data Similar to findings of Vázquez et al. (2021), we observe a drop in average performance through training on the Bibles data for the majority of languages except for gn and oto, where we obtain comparable performance.",
"6 Zero-shot Performance": "We investigate whether our models have any zeroshot capabilities, i.e. translating a language pair for which the model has not seen any training data. For this, we take the best-performing model for es-shp (NLLB-1.3B + backtrans 2), and evaluate it on translating quy-shp, aym-shp, and gn-shp.5 The results of these experiments are shown in Table 6. We find that our model is able to retain decent performance for these three zero-shot directions (maximum 25% drop in chrF), despite training all of the parameters of the machine translation model.",
"7 Conclusions": "In this paper we describe our submissions to the AmericasNLP 2023 Shared Task. We participated with three submissions which consist of training different versions of the NLLB-200 model on publicly available data from different sources. Models are trained in a multilingual fashion and we experiment with different ensembles of models to further improve performance. We improve upon the inference scores for NLLB-200 3.3B for its three supported languages, and our best submission achieved the highest average chrF across all languages of any submission to the task.\n5This is possible due to multiparallel dev sets across all languages.",
"Acknowledgments": "This work is supported by the Centre for Doctoral Training in Speech and Language Technologies (SLT) and their Applications funded by the UK Research and Innovation grant EP/S023062/1."
}
ACL_23_no_limitation/ACL23_1202.json
ADDED
@@ -0,0 +1,12 @@
{
"File Number": "1202",
"Title": "Enhancing Translation for Indigenous Languages: Experiments with Multilingual Models",
"abstractText": "This paper describes CIC NLP’s submission to the AmericasNLP 2023 Shared Task on machine translation systems for indigenous languages of the Americas. We present the system descriptions for three methods. We used two multilingual models, namely M2M-100 and mBART50, and one bilingual (one-to-one) — Helsinki NLP Spanish-English translation model, and experimented with different transfer learning setups. We experimented with 11 languages from America and report the setups we used as well as the results we achieved. Overall, the mBART setup was able to improve upon the baseline for three out of the eleven languages.",
"1 Introduction": "While machine translation systems have shown commendable performance in recent years, the performance is lagging for low-resource languages (Hadgu et al., 2022; Tonja et al., 2023). Since lowresource languages suffer from a lack of sufficient data (Siddhant et al., 2022; Haddow et al., 2022), most models and methods that are developed for high-resource languages do not work well in lowresource settings. Additionally, low-resource languages are linguistically diverse and have divergent properties from the mainstream languages in NLP studies (Zheng et al., 2021).\nThough low-resource languages lack sufficient data to train large models, some such languages still have a large number of native speakers (Zheng et al., 2021). While the availability of language technologies such as machine translation systems can be helpful for such linguistic communities, they could also bring harm and exposure to exploitation (Hovy and Spruit, 2016). Borrowing from human-computer interaction (HCI) studies (Schneider et al., 2018), we want to acknowledge our belief that low-resource language speakers should be empowered to create technologies that benefit their communities. Many indigenous communi-\nties have community-rooted efforts for preserving their languages and building language technologies for their communities 1 and we hope that methods from Shared Tasks like this will contribute to their efforts.\nImproving machine translation systems for lowresource languages is an active research area and different approaches (Zoph et al., 2016; Karakanta et al., 2018; Ortega et al., 2020a; Goyal et al., 2020; Tonja et al., 2022; Imankulova et al., 2017) have been to improve the performance of systems geared forward low-resource languages. We participated in the AmericasNLP 2023 Shared Task in hopes of contributing new approaches for low-resource machine translation that are likely to be helpful for community members interested in developing and adapting these technologies for their languages.\nIn recent years, large pre-trained models have been used for downstream NLP tasks, including machine translation (Brants et al., 2007) because of the higher performance in downstream tasks compared to traditional approaches (Han et al., 2021). One trend is to use these pre-trained models and fine-tune them on smaller data sets for specific tasks (Sun et al., 2019). This method has shown promising results in downstream NLP tasks for languages with low or limited resources (Tars et al., 2022; Zhao and Zhang, 2022). In our experiments, we used multilingual and bilingual models and employed different fine-tuning strategies for the eleven languages in the 2023 Shared Task (Ebrahimi et al., 2023).\nIn this paper, we describe the system setups we used and the results we obtained from our experiments. One of our systems improves upon the baseline for three languages. We also reflect on the setups we experimented with but ended up not submitting in hopes that future work could improve upon them.\n1https://papareo.nz/\n200",
"2 Languages and Datasets": "In this section, we present the languages and datasets used in our shared task submission. Table 1 provides an overview of the languages, their linguistic families, and the numbers of parallel sentences.\nAymara is an Aymaran language spoken by the Aymara people of the Bolivian Andes. It is one of only a handful of Native American languages with over one million speakers (Homola, 2012). Aymara, along with Spanish and Quechua, is an official language in Bolivia and Peru. The data for the Aymara-Spanish come from the Global Voices (Tiedemann, 2012).\nBribri The Bribri language is spoken in Southern Costa Rica. Bribri has two major orthographies: Jara2 and Constenla3 and the writing is not standardized which results in spelling variations across documents. In this case, the sentences use an intermediate representation to unify existing orthographies. The Bribri-Spanish data (Feldman, 2020) came from six different sources.\nAsháninka Asháninka is an Arawakan language spoken by the Asháninka people of Peru and Acre, Brazil4. It is primarily spoken in the Satipo Province located in the Amazon forest. The parallel data for Asháninka-Spanish come mainly from three sources (Cushimariano Romano and Sebastián Q., 2008; Ortega et al., 2020b; Mihas, 2011) and translations by Richard Castro.\n2https://www.lenguabribri.com/se-tt%C3%B6-bribri-iehablemos-en-bribri\n3https://editorial.ucr.ac.cr/index.php 4https://www.everyculture.com/wc/Norway-to-\nRussia/Ash-ninka.html\nChatino Chatino is a group of indigenous Mesoamerican languages. These languages are a branch of the Zapotecan family within the OtoManguean language family. They are natively spoken by 45,000 Chatino people (Cruz and Woodbury, 2006) whose communities are located in the southern portion of the Mexican state of Oaxaca. The parallel data for Chatino-Spanish can be accessed here5.\nGuarani Guarani is a South American language that belongs to the Tupi-Guarani family (Britton, 2005) of the Tupian languages. It is one of the official languages of Paraguay (along with Spanish), where it is spoken by the majority of the population, and where half of the rural population are monolingual speakers of the language (Mortimer, 2006).\nWixarika Wixarika is an indigenous language of Mexico that belongs to the Uto-Aztecan language family (de la Federación, 2003). It is spoken by the ethnic group widely known as the Huichol (selfdesignation Wixaritari), whose mountainous territory extends over portions of the Mexican states of Jalisco, San Luis Potosí, Nayarit, Zacatecas, and Durango, but mostly in Jalisco. United States: La Habra, California; Houston, Texas.\nNahuatl Nahuatl is a Uto-Aztecan language and was spoken by the Aztec and Toltec civilizations of Mexico6. The Nahuatl language has no standard orthography and has wide dialectical variations (Zheng et al., 2021).\nHñähñu Hñähñu, also known as Otomí, belongs to the Oto-Pamean family and lived in central Mexico for many centuries (Lastra, 2001). Otomí is a tonal language with a Subject-Verb-Object (SVO) word order (Ebrahimi et al., 2022). It is spoken in several states across Mexico.\nQuechua The Quechua-Spanish data (Agić and Vulić, 2019; Tiedemann, 2012) has three different sources: the Jehova’s Witnesses texts, the Peru Minister of Education, and dictionary entries and samples collected by Diego Huarcaya. 
The Quechua language, also known as Runasimi is spoken in Peru and is the most widely spoken pre-Columbian language family of the Americas (Ebrahimi et al., 2022).\n5https://scholarworks.iu.edu/dspace/handle/ 2022/21028\n6www.elalliance.org/languages/nahuatl\nShipibo-Konibo Shipibo-Konibo - Spanish data (Montoya et al., 2019; Galarreta et al., 2017) come from three different sources: samples from flashcards translated to Shipibo-Konibo, sentences translated from books for bilingual education, and dictionary entries.\nRarámuri Rarámuri, also known as Tarahumara is a Uto-Azetcan language spoken in Northern Mexico (Caballero, 2017). Rarámuri is a polysynthetic and agglutinative language spoken mainly in the Sierra Madre Occidental region of Mexico (Ebrahimi et al., 2022).",
"3 Models": "We experimented with two multilingual and one bilingual translation model with different transfer learning setups. We used M2M-100 and mBART50 for the multilingual experiment and the HelsinkiNLP Spanish-English model for the bilingual experiment. Figure 1 shows the models used in this experiment.",
"3.1 Bilingual models": "For the bilingual model, as shown in Figure 1a, we use a publicly available Spanish - English7 pre-trained model from Huggingface8 trained by Helsinki-NLP. The pre-trained MT models released by Helsinki-NLP are trained on OPUS, an open-source parallel corpus for covering 500 languages (Tiedemann and Thottingal, 2020; Tiedemann, 2020). This model is trained using the framework of Marian NMT (Junczys-Dowmunt et al., 2018). Each model has six self-attention layers in the encoder and decoder parts, and each layer has eight attention heads.\n7https://huggingface.co/Helsinki-NLP/opus-mt-es-en 8https://huggingface.co/\nWe used this model with the intention that the model trained with high-resource languages will improve the translation performance of lowresource indigenous languages when using a model trained with high-resource languages. We finetuned the Spanish-English model for each of the Spanish-to-Indigenous language pairs.",
"3.2 Multilingual models": "For multilingual models, we used the Many-toMany multilingual translation model that can translate directly between any pair of 100 languages (M2M100) (Fan et al., 2021) with 48M parameters and a sequence-to-sequence denoising autoencoder pre-trained on large-scale monolingual corpora in 50 languages (mBART50) (Tang et al., 2020). We fine-tuned multilingual models in two ways:\n1. We fine-tuned two multilingual models on each Spanish-Indigenous language pair for 5 epochs and evaluated their performance using the development data before training the final submission system. As shown in Figure 1b, for the final system, we only finetuned mBART50 on Spanish-indigenous data based on the development set evaluation performance.\n2. Fine-tuning multilingual models first on the Spanish - All (mixture of all indigenous language data) dataset to produce an intermediate model and then fine-tuning the intermediate model for each of the Spanish-Indigenous language pairs as shown in Figure 1c. For this experiment, we combined all language pairs’ training data to form a Spanish - all parallel corpus, and then we first fine-tuned m2m100-48 using a combined dataset for five\nepochs and saved the model, here referred to as m2m100-48inter model. We fine-tuned the m2m100-48inter model again on each Spanish-Indigenous language pair for another 5 epochs and evaluated the performance on the development set before training the final submission system.\nEvaluation We used chrF2 (Popović, 2017) evaluation metric to evaluate our MT systems.",
"4 Results": "We submitted three (two multilingual and one bilingual) systems, as shown in Table 2, namely m2m100-48inter, mBART50, and Helsinki-NLP. We included the dev set performance for all the models we trained before the final model to compare the results with the final model evaluated by using test set data. From the dev set result, it can be seen that fine-tuning the multilingual model on the Spanish-Indigenous language pair outperforms the fine-tuned result of the bilingual and m2m10048inter models. From all the models evaluated using the dev set, mBART50 outperformed the others on average.\nOur test results show comparable results when compared to the strongest baseline shared by the AmericasNLP 2023, and our model outperformed the baseline for Spanish-Bribri (es-bzd), SpanishAsháninka (es-cni), and Spanish-Quechua (es-quy) pairs. Similarly, mBART50 outperformed the other models on average on the test set.",
"5 Conclusion": "In this work, we present the system descriptions and results for our submission to the 2023 AmericasNLP Shared Task on Machine Translation into\nIndigenous Languages. We used pre-trained models and tested different fine-tuning strategies for the eleven languages provided for the shared task. We used one bilingual (Helsinki NLP EnglishSpanish model) and two multilingual (M2M-100 and mBART50) models for our experiments. In addition to fine-tuning the individual languages’ data, we concatenated the data from all eleven languages to create a Spanish-All dataset and fine-tuned the M2M-100 model before fine-tuning for the individual languages. Our mBAERT50 model beat the strong baseline in three languages."
}
ACL_23_no_limitation/ACL23_1203.json
ADDED
@@ -0,0 +1,27 @@
{
"File Number": "1203",
"Title": "Findings of the AmericasNLP 2023 Shared Task on Machine Translation into Indigenous Languages",
"abstractText": "In this work, we present the results of the AmericasNLP 2023 Shared Task on Machine Translation into Indigenous Languages. This edition of the shared task features eleven language pairs, one of which – Chatino–Spanish – uses a newly collected evaluation dataset, consisting of professionally translated text from the legal domain. Seven teams participated in the shared task, with a total of 181 submissions. Additionally, we conduct a human evaluation of the best system outputs and compare them to the best submissions from the 2021 shared task. We find that this analysis agrees with the quantitative measure we use to rank submissions, ChrF, which itself shows an improvement of 9.64 points on average across all languages, compared to the prior winning system.",
"1 Introduction": "The majority of Indigenous languages, including those native to the Americas, are under-represented in modern natural language processing (NLP), as technological advances are often concentrated on the small set of languages that have large amounts of easily available data (Joshi et al., 2020). Beyond the lack of data, linguistic factors like morphological complexity, non-standard orthographies, and language isolates make it even more challenging to adapt existing NLP methods to Indigenous languages (Mager et al., 2018; Schwartz et al., 2020).\nHowever, there are multiple benefits of developing technologies that support Indigenous languages – building NLP models for under-represented languages can bring equitable access to information and technology to speakers of these languages (Mager et al., 2018). Additionally, several Indigenous languages in the Americas are endangered, and language technologies have proven to be beneficial to Indigenous communities and linguistic researchers in the documentation, preservation, and revitalization of endangered languages (Galla,\n2016; Anastasopoulos, 2019; Zhang et al., 2022; Rijhwani, 2023). The AmericasNLP workshop seeks to highlight NLP and linguistic research on Indigenous languages spoken across the Americas, and promote the development of computational approaches which work well for these languages. The AmericasNLP Shared Task on Machine Translation into Indigenous Languages is hosted as part of the workshop to specifically focus on improvements in machine translation (MT) systems for these languages. In this work, we describe the third edition of the shared task. For this year, a new goldstandard parallel dataset for translation evaluation, between Spanish and Chatino, was developed. This dataset uses text from the legal domain, with source sentences taken from press releases of the Supreme Court of Mexico. This allows for evaluation on technical and challenging text, which are likely to be relevant to speakers of the language.\nThis work is structured as follows: in Section 2, we present a brief overview of related work on MT and Indigenous languages; in Section 3 and 4, we provide details on the shared task rules, and newly collected data; in Section 5, we summarize the submitted systems; and, in Sections 6 and 7, we provide an analysis of the main results and further\n206\nexperiments.",
"2.1 NLP for Indigenous Languages": "Low-resource languages are often referred to as ‘less studied’, ‘resource-scarce’, ‘less computerized’, ‘less privileged’, ‘less commonly taught’, or ‘low-density’ (Magueresse et al., 2020). Indigenous languages are largely included under this umbrella term, and they represent a unique challenge when dealing with NLP tasks.\nFirst, most of the Indigenous languages worldwide are generally understudied, which means that even though we can grasp some of their general grammatical features based on other previously studied languages from the same linguistic families, there are still particular traits which haven’t been described. Second, Indigenous languages are typologically different: some of them are polysynthetic, such as the languages belonging to Uto-Aztecan family (e.g. Nahuatl, Wixarika) with rich morphophonemics and a large number of inflections (Mithun, 2001). Other languages are highly ana-\nlytic with simpler morphology, but with complex tonal systems such as Chatino and Chinantec, from the Oto-Manguean family. Due to the lack of prior study, it becomes challenging to even define what constitutes a language versus a language variety among Indigenous languages.\nFinally, another major challenge is the diversification of orthographies and the scarcity of written corpora in such languages. However, in lieu of these challenges, there has been a substantial increase in NLP applications for Indigenous languages (Mohanty et al., 2023). For example, Hedderich et al. (2020) survey common methods used in low-resource scenarios, such as data augmentation, distant supervision, and cross-lingual language models. Mager et al. (2018) provide an overview of research in NLP related to the Indigenous languages of the Americas, with an accompanying, and continually-updated,repository of research works and other resources for Indigenous languages. Recently, ACL 2022 featured a theme track on Language Diversity: from Low-Resource to Endangered Languages, which highlights papers\nfocusing on Indigenous languages, and featured a keynote discussion on how to best support linguistic diversity (Muresan et al., 2022).",
"2.2 Low-Resource MT": "Low-Resource MT (LRMT) tackles the challenge of developing translation systems for language pairs with limited parallel data. Traditional neural machine translation approaches struggle in such scenarios due to data scarcity.\nMultilingual transfer learning has been successful in enhancing translation quality in LRMT by leveraging knowledge from related languages (Zoph et al., 2016; Nguyen and Chiang, 2017; Aharoni et al., 2019). By utilizing shared representations across languages, multilingual models can generalize well to unseen language pairs with limited data.\nOne effective LRMT approach using transfer learning is finetuning large multilingual language models on specific language pairs. This involves adapting pretrained models like mBART, M2M100, and NLLB-200 to target specific language pairs or domains of interest (Liu et al., 2020; Fan et al., 2020; Team et al., 2022). Refining the model’s parameters through this technique enhances translation quality for low-resource languages (Thillainathan et al., 2021; Liu et al., 2020).\nBack-translation is another effective technique\nemployed in LRMT, which generates synthetic parallel data by translating and re-translating monolingual data (Sennrich et al., 2016; Feldman and Coto-Solano, 2020; Lample et al., 2018). By incorporating this technique, LRMT systems can benefit from additional training examples, leading to improved translation performance.",
"3 Task and Evaluation": "The shared task focuses on open machine translation: outside of the development set and any prohibited datasets, teams are allowed to collect and train on an unlimited amount of external data. As translation performance for low-resource Indigenous languages is generally low, we choose this setting to allow models to achieve the best possible performance, in hopes that usable translation models become more quickly developed.\nMetrics Translation evaluation is done with ChrF (Popović, 2015), as implemented in SCAREBLEU (Post, 2018), as the target languages are morphologically rich. While teams are not required to submit a system for all languages, the final score for each submission is calculated by taking an average over all eleven languages; if there is no model output for a given language, the score is taken as 0.",
"4 Languages and Data": "For development and evaluation, the AmericasNLP 2021 shared task used multi-way parallel translations of the Spanish XNLI test set across 10 languages: Asháninka, Aymara, Bribri, Guarani, Nahuatl, Otomí, Quechua, Rarámuri, ShipiboKonibo and Wixarika (Ebrahimi et al., 2022). For this edition of the shared task, we use the same evaluation set and additionally introduce a new evaluation dataset, created from Mexican court proceedings, for Spanish–Chatino. This set was released as a surprise language near the end of the competition, along with a small amount of Spanish–Chatino and English–Chatino data for training. In this section, we describe the Chatino language, Spanish source data, and translation process. For a detailed overview of the ten other evaluation languages, we refer the reader to Ebrahimi et al. (2022) and Mager et al. (2021).",
"4.1 Chatino": "San Juan Quiahije Chatino (SJQ, ISO 639-3 ctp), spoken by about 5000 people, is an Oto-Manguean language spoken in Oaxaca, Mexico and by Chatinos who live in many cities throughout the United States, with a high concentration in the Southeastern United States in the states of North Carolina, Alabama, and Georgia. The Chatino languages are some of the most complex tonal languages in the world. SJQ has 10 tonemes and 15 morphological tonal categories. In the created corpus, tones are represented as superscripts.",
"4.2 Evaluation Dataset": "Source Data A main motivation for this dataset is to create a resource which could be more directly applicable to the real life needs of the communities involved, while at the same time limiting negative ethical implications (Mager et al., 2023). As such, we choose to use legal text as the source domain. The Mexican Constitution and the General Law of Linguistic Rights of Indigenous Peoples (Ley General De Derechos Lingüísticos de los Pueblos Indígenas1) states that the 68 Indigenous languages spoken in the country before the Spanish conquest are National Languages. This gives all people the right to perform bureaucratic and legal actions in their native language. As a first approximation of this text, we gather press releases from\n1https://www.diputados.gob.mx/ LeyesBiblio/pdf/LGDLPI.pdf\nthe Mexican Supreme Court.2 This allows us to avoid the potential harms of directly generating low-quality translations of written laws and court decisions, while still allowing for insights into the issues and challenges of translating legal terms and text. Furthermore, the text generated by the Mexican Supreme Court is public domain, allowing for free usage.\nTranslation Process To create the dataset, we crawl 10,000 instances from the Supreme Court press releases, and randomly select a subset for translation. Translations are jointly done by two professional translators, who are native San Juan Quiahije Chatino speakers. Legal terms in Spanish are translated into Chatino, in order to reduce codeswitching and borrowed words. This translation of domain-specific terms represents the most challenging aspect of the translation process, with translators investigating the context and meaning of specific words in order to create accurate translations. For more difficult cases, translators consulted with lawyers to clarify the meaning of certain texts. For all translations, both translators worked together to reach an agreement on the translated text. Examples of difficult to translate words and entities include “dismissal, approval, jurisprudence, regulations among others and Chamber of Deputies, the nation’s Supreme Court of Justice and Magistrate.”",
"5 Baseline and Submitted Systems": "In this section, we describe the 2023 baseline system and each team’s approach. We present a summary of all approaches in Table 2.",
"5.1 Baseline": "The AmericasNLP 2021 shared task used a transformer encoder–decoder model (Vaswani et al., 2017) along with hyperparameters shown to work well for low-resource settings (Guzmán et al., 2019). For this year’s edition of the shared task, we use the winning 2021 system (Vázquez et al., 2021) as the baseline, as it greatly outperformed the previous baseline and other submissions on all languages.",
"5.2 Andes": "The Andes team (Gillin and Gummibaerhausen, 2023) submitted a translation system for Spanish– Aymara. The system is based on mT5 (Xue et al.,\n2https://www.scjn.gob.mx/multimedia/ comunicados\n2021) and is further finetuned on English–Aymara data, in addition to the provided Spanish–Aymara data. The English parallel data consists of a lexicon, collected from books meant for language learning (Wexler and Programs, 1967; Parker, 2008)",
"5.3 CIC-NLP": "The CIC-NLP team (Tonja et al., 2023) submitted three different models across all languages, based on either mBART50 (Tang et al., 2021) and M2M100 (Fan et al., 2020) or a publicly released English–Spanish translation model.3 The multilingual models were first optionally finetuned on a concatenation of the es-XX training data across all languages. Language-specific models were then created by further finetuning on data for a specific target language. The English–Spanish model was only finetuned on data for a specific language pair.",
"5.4 Helsinki-NLP": "The Helsinki-NLP team (Vázquez et al., 2023) submitted six different models across all languages, following four main modeling approaches. Model B is a copy of the team’s winning multilingual one-to-many 2021 model, and Model C is a reimplementation of this approach using OpusTrainer and a language specific-finetuning step. Model A focuses on knowledge distillation and transfer learning: a parent English–Spanish model is distilled from the NLLB model, and is then further finetuned on target-language data. Model D uses language-specific decoders as part of a modular architecture: a specified number of decoder layers are\n3https:huggingface.co/Helsinki-NLP/ opus-mt-es-en\nshared across languages, while others are trained separately per language. The team also focused heavily on data collection and cleaning. In addition to the data provided by the shared task, the team collected data from OPUS (Tiedemann, 2012), the FLORES-200 (Team et al., 2022) evaluation sets, the Bible (McCarthy et al., 2020), the Universal Declaration of Human Rights, and various texts extracted from websites or PDFs of educational materials and news. MT was also used to leverage monolingual Wikipedia data as well as parallel data between the target languages and English. Texts were detokenized and whitespace normalized if necessary. Data from all sources was concatenated and deduplicated to create the final training data, and special tags denoting the quality and language variety of the source material were added to each example.",
"5.5 LCT-EHU": "The LCT-EHU team (Ahmed et al., 2023) focused on the Spanish–Quechua language pair and submitted five different models to the competition. Among their contributions, they collected new parallel corpora, experimented with high-resource bilingual systems as pretrained models, such as Spanish–English and Spanish–Finnish, and generated synthetic parallel data from monolingual texts using back-translation and the copied corpus technique (Currey et al., 2017). The best result on the test set was obtained by using a model pretrained on Spanish–Finnish and by including new parallel data from the literature and legal domains, despite originating from different variants of Quechua Ayacucho.",
"5.6 LTLAmsterdam": "The LTLAmsterdam team (Stap and Araabi, 2023) submitted four different models for all language pairs. Their approaches included a bilingual system, an off-the-shelf commercial large language model used for translation, and a finetuned multilingual model with additional adaptation. The bilingual systems were trained using transformer models with parameters specifically tailored for low-resource languages (Araabi and Monz, 2020). For the large language model, they utilized the ChatGPT API4 and followed the prompts proposed by Jiao et al. (2023). Additionally, they finetuned the M2M100 multilingual model (Fan et al., 2021), specifically choosing the 418M parameter version and training a model for each language pair. It is important to highlight that none of the target languages in the shared task were originally included in the set of languages of M2M100. Finally, they augmented the finetuned M2M100 model with a k-nearest neighbor (kNN) datastore for inference (Khandelwal et al., 2021), effectively creating a semi-parametric model that combines the parametric M2M100 model with a nearest neighbor retrieval mechanism.",
"5.7 PlayGround": "The PlayGround team (Gu et al., 2023) submitted one model for each language pair, except for Spanish–Chatino. Their approach focused on utiliz-\n4https://platform.openai.com/docs/ api-reference/chat\ning the pretrained NLBB-200 model (Team et al., 2022), which they finetuned using the available monolingual and parallel data for the shared task. They conducted a comparison between bilingual and multilingual finetuned models, incorporating back-translated data through finetuning the NLBB200 model with Spanish as the target language. Additionally, they adopted a weight-averaging approach (Wortsman et al., 2022).",
"5.8 Sheffield": "The Sheffield team (Gow-Smith and Villegas) submitted three models for all languages. Approaches were based off various versions of the NLLB-200 model (Team et al., 2022). In addition to the provided training data, the team used data from teams which participated in prior editions of the shared task (Moreno, 2021; Vázquez et al., 2021). Data from other sources, such as the Bible (McCarthy et al., 2020) and NLLB project were also considered, however the authors found that Bible data did not improve performance on the development set, and did not include it in the final systems. Backtranslation was also used to create additional parallel data. The submissions include specific preprocessing steps to prepare the data, such as detokenization and replacement of tone markings for Chatino. The team experimented with the distilled 600M, 1.3B and 3.3B versions of NLLB, and models were first finetuned on a concatenation of all available training data. The checkpoint with best average ChrF across all languages was considered as Submission 3. For Submission 2, the best check-\npoint per language was used. Submission 1 consists of ensembles of the various NLLB models. As NLLB relies on specific tags to denote the target languages, the embedding matrix was extended and new languages tags were created for the shared task languages which are unsupported.",
"6 Results": "We present the overall ranking of submissions to the shared task in Table 3 and the best score per language for each team across all submissions in Table 4.\nThe overall winner of the shared task, the Sheffield Submission 1, achieves the best performance for 7 languages: Aymara, Bribri, Asháninka, Chatino, Nahuatl, Quechua, and Shipibo-Konibo. The Helsinki Submission 6 (i.e., Model B) has the highest performance for 4 languages: Guarani, Wixarika, Otomí, and Rarámuri. Systems are much more competitive than prior competitions, achieving extremely close ChrF scores for many languages, such as Asháninka, Guarani, Wixarika, and Shipibo-Konibo. The Sheffield and Helsinki teams both collect additional data, and train models in a multilingual and multi-stage fashion. Both also mention data cleaning and preprocessing in their pipeline, and we hypothesize that this step is likely vital for good performance, due to noise, domain mismatch, and differences in variants between the training and evaluation sets. For all languages except for Aymara, all teams have at least one submission which improves (often by a large margin) over the original 2021 baseline.\nComparison with Prior Years As the evaluation set for 10 of the languages is the same as for 2021, we can analyze the performance of submitted MT systems over time. In this year’s shared task, we see improvements over the best 2021 system, the 2021 Helsinki submission (Vázquez et al., 2021), for all languages, but to varying degree. The largest improvements are for Bribri, Aymara, Guarani and Quechua. We also see small improvements for Asháninka and Wixarika. However, improvements for Nahuatl, Otomí, and Shipibo-Konibo are marginal. Overall, the improvements over Vázquez et al. (2021) are smaller in magnitude, compared to the improvements in 2021. This can be expected, however, as the baseline for this year’s shared task represents a much stronger lower bound. Of the four languages with largest improvement, three are achieved by a Sheffield submission: Aymara,\nBas elin\ne 20 21\nBes t 20\n21\nBes t 20\n23\n1.0\n0.8\n0.6\n0.4\n0.2\n0.0\nbzd\nBas elin\ne 20 21\nBes t 20\n21\nBes t 20\n23\n1.0\n0.8\n0.6\n0.4\n0.2\n0.0\nbzd\nBas elin\ne 20 21\nBes t 20\n23\n1.0\n0.8\n0.6\n0.4\n0.2\n0.0\nctp\nBas elin\ne 20 21\nBes t 20\n23\n1.0\n0.8\n0.6\n0.4\n0.2\n0.0\nctp\nBas elin\ne 20 21\nBes t 20\n21\nBes t 20\n23\nFluency\n1.0\n0.8\n0.6\n0.4\n0.2\n0.0\noto\nBas elin\ne 20 21\nBes t 20\n21\nBes t 20\n23\nMeaning\n1.0\n0.8\n0.6\n0.4\n0.2\n0.0\noto\nSystem\nPr op\nor tio\nn\n5 4\n3 2\n1\nFigure 2: Results of the qualitative human evaluation. Ratings of fluency are displayed in the left column, and meaning in the right. Results are shown as a proportion of all evaluated sentences.\nBribri, and Quechua. This may be attributed, in part, to the use of the NLLB model by the team, which supports Aymara and Quechua in its original set of pretraining languages. On average across the 10 shared languages, we see a further 9.63 improvement in ChrF over 2021 results by the best submitted systems.",
"7.1 Qualitative Analysis": "As quantitative measures of translation performance do not paint a complete picture, we also conduct a qualitative analysis of the system outputs for Bribri, Chatino, and Otomí. We randomly sample 50 parallel examples across the 2021 baseline, the 2021 winning system (Vázquez et al., 2021), and the 2023 submission with best performance for\neach language: Sheffield Submission 1 for Bribri and Chatino, and Helsinki Submission 6 for Otomí. Examples are shuffled and presented to a native speaker of each language, along with the Spanish source and gold reference. Annotations are done across two dimensions: meaning and fluency, using a categorical 1-5 scale. The guidelines given to annotators can be found in Appendix A.1.\nThe results of this analysis are shown in Figure 2. Similar to the trend of improvement in ChrF, we also see improvements in the rating of meaning and fluency across the three systems in this analysis. For Bribri, a strong majority of translations from the original 2021 baseline has a score of 1 across both dimensions. While we see some improvements from the Helsinki 2021 system, the 2023 system provides a considerable increase in translation quality; ratings of between 2-4 are now assigned to the majority of examples. For Chatino, the baseline system is stronger than for Bribri, and the improvement between the two systems is smaller when considering the proportion of examples rated as 1. For the 2023 system, we see the largest increase in quantity for ratings of 3. Otomí sees the worst performance of the three languages, with the majority of examples being rated as 1, across all three systems. Fluency does improve slightly, with an increase in the number of 2 ratings. However, examples with higher ratings are effectively non-existent. We also see a difference in improvement across fluency and meaning, with the former showing higher improvement. For all languages, even if we see an increase in the proportion of higher rated examples, the number of near-perfect (i.e., rating of 5) remains consistently small.",
"7.2 Impact of In-domain Data": "The LTLAmsterdam team (Stap and Araabi, 2023) describes systems which make use of kNN and an external data store (Khandelwal et al., 2021) during decoding. It was jointly decided in a discussion between the organizers and team that submissions which use this approach – Submissions 4,5,6,7, and 8 – fall in a grey area with respect to the competition rules and would not be included in the main results, due to the fact that development set examples were included in the data store. However, these submissions can give insights into the potential improvements one can expect if there is access to parallel examples which are in-domain with respect to an expected test set. If we consider these\nsubmissions, they achieve the best performance for three languages: Bribri, Asháninka, and Nahuatl. Improvements over the next best team submission is 0.88 ChrF on average over the three languages. As such, given that systems still struggle with producing outputs with the highest qualitative rating (§7.1), this approach may be beneficial for producing more constrained and higher-quality outputs, given that access to high-quality parallel data is available.",
"8 Conclusion": "In this paper we present the results of the AmericasNLP 2023 shared task. For this iteration, we collect a new dataset for translation evaluation between Spanish and Chatino, consisting of legal text from court press releases. Additionally, we keep the prior 10 evaluation languages used in 2021. Overall, 7 teams participated in the shared task. For all languages, multiple submissions improve over the previous best ChrF, but the magnitude varies per language. The best results were achieved by either finetuned versions of NLLB or a from-scratch transformer encoder–decoder model. To confirm the improvement in ChrF from the previous shared task, we conduct a human evaluation of system outputs, which, although it supports the quantitative improvement, highlights the fact that systems are still not able to produce translations of the highest quality. Furthermore, there is still variability in the absolute performance across languages. As such, while the results of the shared task mark a promising trend in increasing translation quality for Indigenous languages, there are still improvements which can be made in order to create usable translation systems for Indigenous languages.",
"Acknowledgments": "We would like to thank all teams that submitted systems to this year’s AmericasNLP shared task! We would also like to thank Eric Ramos Aguilar for their help with the qualitative annotation of system outputs.",
"A Annotation and Table Guidelines": "A.1 Human Evaluation Guidelines Annotators were given the following guidelines for their evaluation:\nFluency: Is the output sentence easily readable and similar to a human-produced text?\n1. Extremely bad: The output contains mainly repetitions or hallucinations [> 80%], and is largely illegible. The text is clearly not produced by a human.\n2. Bad: The output may contain repetitions or erroneous characters [> 60%], but also some correct words or phrases.\n3. Acceptable: The output does not contain a significant number of repetitions, and mainly contains correct words, however may still have grammatical errors.\n4. Sufficiently good: The output seems like a human-produced text in the target language, without repetitions or erroneous characters, but may still contain some grammatical errors.\n5. Excellent: The output seems like a human produced text in the target language, and is readable without issues.\nMeaning: How well does the translation reflect the meaning of the reference?\n1. Extremely bad: The meaning of the source sentence can not be inferred at all.\n2. Bad: A small number of words or phrases allow the reader to guess the meaning or semantic content of the sentence\n3. Acceptable: A larger number of correctly translated phrases and words allow a stronger understanding of the meaning.\n4. Sufficiently good: The general meaning of the source sentence is conveyed, while some details may be missing.\n5. Excellent: The meaning of the source sentence, along with all relevant details, is conveyed completely.\nA.2 Guidelines for System Summary Data\n• Crawl: Does the team collect additional data from websites, PDFs, documents, books, etc.\n• External Bilingual: Does the team leverage existing parallel data for language pairs not used for evaluation?\n• Opus/Religious/Wikipedia: Does the team use additional data from the respective resource?\n• Prior Year: Does the team use data collected from the 2021 or 2022 Shared Tasks?\n• Monolingual Translation: Does the team create synthetic training data by translating a monolingual dataset?\n• Pivot Translation: Does the team leverage exiting parallel data, between an unsupported language pair, through translation?\n• Cleaning/Normalization: Does the team specifically describe any cleaning or normalization steps?\n• No Additional: Does the team solely use the data provided from the competition?\nPretraining: A check is given if the team describes a submission which uses one of the pretrained systems. Encoder-Decoder represents a vanilla encoder-decoder transformer model trained from scratch.\nTrain\n• Ensemble: Does the team describe a submission which makes use of multiple models for translation?\n• Multistage: Does the team describe the training procedure as multiple stages, with variations in hyperparameters or training data?\n• Multilingual: Does the team describe the training as multilingual, or create models which are trained on multiple language pairs?"
}
ACL_23_no_limitation/ACL23_1204.json
ADDED
@@ -0,0 +1,19 @@
{
"File Number": "1204",
"Title": "LFTK: Handcrafted Features in Computational Linguistics",
"abstractText": "Past research has identified a rich set of handcrafted linguistic features that can potentially assist various tasks. However, their extensive number makes it difficult to effectively select and utilize existing handcrafted features. Coupled with the problem of inconsistent implementation across research works, there has been no categorization scheme or generallyaccepted feature names. This creates unwanted confusion. Also, most existing handcrafted feature extraction libraries are not open-source or not actively maintained. As a result, a researcher often has to build such an extraction system from the ground up. We collect and categorize more than 220 popular handcrafted features grounded on past literature. Then, we conduct a correlation analysis study on several task-specific datasets and report the potential use cases of each feature. Lastly, we devise a multilingual handcrafted linguistic feature extraction system in a systematically expandable manner. We open-source our system for public access to a rich set of preimplemented handcrafted features. Our system is coined LFTK and is the largest of its kind. Find at github.com/brucewlee/lftk.",
"1 Introduction": "Handcrafted linguistic features have long been inseparable from natural language processing (NLP) research. Even though automatically-generated features (e.g., Word2Vec, BERT embeddings) have recently been mainstream focus due to fewer manual efforts required, handcrafted features (e.g., type-token ratio) are still actively found in currently literature trend (Weiss and Meurers, 2022; Campillo-Ageitos et al., 2021; Chatzipanagiotidis et al., 2021; Kamyab et al., 2021; Qin et al., 2021; Esmaeilzadeh and Taghva, 2021). Therefore, it is evident that there is a constant demand for both\n3Core contributor\nthe identification of new handcrafted features and utilization of existing handcrafted features.\nAfter reviewing the recent research, we observed that most research on automatically-generated features tends to focus on creating deeper semantic representations of natural language. On the other hand, researchers use handcrafted features to create wider numerical representations, encompassing syntax, discourse, and others. An interesting new trend is that these handcrafted features are often used to assist auto-generated features in creating wide and deep representations for applications like English readability assessment (Lee et al., 2021) and automatic essay scoring (Uto et al., 2020).\nThe trend was observed across various tasks and languages. For example, there are Arabic speech synthesis (Amrouche et al., 2022), Burmese translation (Hlaing et al., 2022), English-French term alignment (Repar et al., 2022), German readability assessment (Blaneck et al., 2022), Italian pre-\n1\ntrained language model analysis (Miaschi et al., 2020), Korean news quality prediction (Choi et al., 2021), and Spanish hate-speech detection (GarcíaDíaz et al., 2022) systems.\nThough using handcrafted features seems to benefit multiple research fields, current feature extraction practices suffer from critical weaknesses. One is the inconsistent implementations of the same handcrafted feature across research works. For example, the exact implementation of the average words per sentence feature can be different in Lee et al. (2021) and Pitler and Nenkova (2008) even though both works deal with text readability. Also, there have been no standards for categorizing these handcrafted features, which furthers the confusion.\nIn addition, no open-source feature extraction system works multilingual, though handcrafted features are increasingly used in non-English applications. The handcrafted linguistic features can be critical resources for understudied or lowresource languages because they often lack highperformance textual encoding models like BERT. In such cases, handcrafted features can be useful in creating text embeddings for machine learning studies (Zhang et al., 2022; Kruse et al., 2021; Maamuujav et al., 2021). In this paper, we make two contributions to address the shortcomings in the current handcrafted feature extraction practices.\n1. We systematically categorize an extensive set of reported handcrafted features and create a feature extraction toolkit. The main contribution of this paper is that we collect more than 200 handcrafted features from diverse NLP research, like text readability assessment, and categorize them. We take a systematic approach for easiness in future expansion. Notably, we designed the system so that a fixed set of foundation features can build up to various derivation features. 
We then categorize the implemented features into four linguistic branches and 12 linguistic families, considering the original author’s intention. The linguistic features are also labeled with available language, depending on whether our system can extract the feature in a language-agnostic manner. LFTK (Linguistic Feature ToolKit) is built on top of another opensource library, spaCy1, to ensure high-performance parsing, multilingualism, and future reproducibility by citing a specific version. Our feature extraction software aims to cover most of the generally found handcrafted linguistic features in recent research.\n1github.com/explosion/spaCy\n2. We report basic correlation analysis on various task-specific datasets. Due to the nature of the tasks, most handcrafted features are from text readability assessment or linguistic analysis studies with educational applications in mind. The broader applications of these handcrafted features to other fields, like text simplification or machine translation corpus generation, have been only reported fairly recently (Brunato et al., 2022; Yuksel et al., 2022). Along with the feature extraction software, we report the predictive abilities of these handcrafted features on four NLP tasks by performing a baseline correlation analysis. As we do so, we identify some interesting correlations that have not been previously reported. We believe our preliminary study can serve as a basis for future in-depth studies.\nIn a way, we aim to address the recent concern about the lack of ready-to-use code artifacts for handcrafted features (Vajjala, 2022). Through this work, we hope to improve the general efficiency of identifying and implementing handcrafted features for researchers in related fields.",
"2.1 What are Handcrafted Features?": "The type of linguistic feature we are interested in is often referred to as handcrafted linguistic feature, a term found throughout NLP research (Choudhary and Arora, 2021; Chen et al., 2021; Albadi et al., 2019; Bogdanova et al., 2017). Though the term “handcrafted linguistic features” is loosely defined, there seems to be some unspoken agreement among existing works. In this work, we define a handcrafted linguistic feature as a single numerical value produced by a uniquely identifiable method on any natural language (refer to Figure 2).\nUnlike automatic or computer-generated linguistic features, these handcrafted features are often manually defined by combining the text’s features with simple mathematical operations like root or division (Lee et al., 2021). For example, the average difficulty of words (calculated with an external word difficulty-labeled database) can be considered\na handcrafted feature (Lee and Lee, 2020). Though the scope of what can be considered a single handcrafted feature is very broad, each feature always produces a single float or integer as the result of the calculation. More examples of such handcrafted features will appear as we proceed.",
"2.2 Hybridization of Handcrafted Features": "It takes a great deal of effort to make automatic or computer-generated linguistic features capture the full linguistic properties of a text, other than its semantic meaning (Gong et al., 2022; Hewitt and Manning, 2019). For example, making BERT encodings capture both semantics and syntax with high quality can be difficult (Liu et al., 2020). On the other hand, combining handcrafted features to capture wide linguistic properties, such as syntax or discourse, can be methodically simpler. Hence, handcrafted features are often infused with neural networks in the last classification layer or directly with a sentence’s semantic embedding to enhance the model’s ability in holistic understanding (Hou et al., 2022; Lee et al., 2021). Such feature hybridization techniques are found in multiple NLP tasks like readability assessment (Vajjala, 2022) and essay scoring (Ramesh and Sanampudi, 2022).",
"2.3 Handcrafted Features in Recent Studies": "Until recently, NLP tasks that require a holistic understanding of a given text have utilized machine learning models based only on handcrafted linguistic features. Such tasks include L2 learner’s text readability assessment (Lee and Lee, 2020), fake news detection (Choudhary and Arora, 2021), bias detection (Spinde et al., 2021), learner-based reading passage selection (Lee and Lee, 2022). Naturally, these fields have handcrafted and identified a rich set of linguistic features we aim to collect in this study. We highlight text readability assessment research as an important source of our implemented features. Such studies often involve 80∼255 features from diverse linguistic branches of advanced semantics (Lee et al., 2021), discourse (Feng et al., 2010), and syntax (Xia et al., 2016).",
"3.1 Overview": "By exploring past works that deal with handcrafted linguistic features, we aim to implement a comprehensive set of features. These features are commonly found across NLP tasks, but ready-to-use\npublic codes rarely exist. We collected and categorized over 200 handcrafted features from past research works, mostly on text readability assessment, automated essay scoring, fake news detection, and paraphrase detection. These choices of works are due to their natural intimate relationships with handcrafted features and also, admittedly, due to the authors’ limited scope of expertise. Figure 3 depicts our general process of implementing a single feature. Tables 1 and 2 show more details on categorization.",
"3.2.1 Formulation": "The main idea behind our system is that most handcrafted linguistic features can be broken down into multiple fundamental blocks. Depending on whether a feature can be split into smaller building blocks, we categorized all collected features into either foundation or derivation. Then, we designed the extraction system to build all derivation features on top of the corresponding foundation features. This enables us to exploit all available combinations efficiently and ensure a unified extraction algorithm across features of similar properties.\nThe derivation features are simple mathematical combinations of one or more foundation features. For example, the average number of words per sen-\ntence is a derivation feature, defined by dividing total number of words by total number of sentences. A foundation feature can be the fundamental building block of several derivation features. But again, a foundation feature cannot be split into smaller building blocks. We build 155 derivation features out of 65 foundation features in the current version.",
"3.2.2 Linguistic Property": "Each handcrafted linguistic feature represents a certain linguistic property. But it is often difficult to pinpoint the exact property because features tend to correlate with one another. Such colinear inter-dependencies have been reported by multiple pieces of literature (Imperial et al., 2022; Lee and Lee, 2020). Hence, we only categorize all features into the broad linguistic branches of lexico-semantics, syntax, discourse, and surface. The surface branch can also hold features that do not belong to any specific linguistic branch. The linguistic branches are categorized in reference to Collins-Thompson (2014). We mainly considered the original author’s intention when assigning a linguistic branch in unclear cases.\nApart from linguistic branches, handcrafted features are also categorized into linguistic families. The linguistic families are meant to group features into smaller subcategories. The main function of linguistic family is to enable efficient feature search.\nAll family names are unique, and each family belongs to a specific formulation type. This means that the features in a family are either all foundation or all derivation. A linguistic family also serves as a building block of our feature extraction system. Our extraction program is a linked collection of several feature extraction modules, each representing a linguistic family (refer to Figure 4).",
"3.2.3 Applicable Language": "Since handcrafted features are increasingly used for non-English languages, it is important to deduce whether a feature is generally extractable across languages. Though our extraction system is also designed with English applications in mind, we devised a systematic approach to deduce if an implemented feature is language agnostic. Like the example in Table 3, we only classify a derivation feature as generally applicable if all its components (foundation features) are generally applicable.\nWe can take the example of the average number of nouns per sentence, defined by dividing total number of nouns by total number of sentences. Since both component foundation features are generally applicable (we use UPOS tagging scheme), we can deduce that the derivation is generally applicable too. On the other hand, Flesch-Kincaid Grade Level (FKGL) is not generally applicable because our syllables counter is English-specific.\nFKGL = 0.39 · # word # sent +11.8 · # syllable # word −15.59\nThere is no guarantee that a feature works similarly in multiple languages. The usability of a feature in a new language is subject to individual exploration.",
"3.3 Feature Details by Linguistic Family": "Due to space restrictions, we only report the number of implemented features in Tables 4 and 5. A full list of these features is available in the Appendices. The following sections are used to elaborate on the motivations and implementations behind features.\n3.3.1 WordSent & AvgWordSent WordSent is a family of foundation features for character, syllable, word, and sentence count statistics. With the exception of syllables, this family heavily depends on spaCy for tokenization. SpaCy is a high-accuracy parser module that has been used as a base tokenizer in several multilingual projects like the Berkeley Neural Parser (Kitaev et al., 2019). We use a custom syllables count algorithm.\nAvgWordSent is a family of derivation features for averaged character, syllable, word, and sentence count statistics. An example is the average number of syllables per word, a derivation of the total number of words and the total number of syllables foundation features.\n3.3.2 WordDiff & AvgWordDiff WordDiff is a family of foundation features for word difficulty analysis. This is a major topic in educational applications and second language acquisition studies, represented by age-of-acquisition (AoA, the age at which a word is learned) and corpus-based word frequency studies. Notably, there is the Kuperman AoA rating of over 30,000 words (Kuperman et al., 2012), an implemented feature in our extraction system. Another implemented feature is the word frequency statistics based on SUBLTEXus research, an improved word frequency measure based on American English sub-\ntitles (Brysbaert et al., 2012). AvgWordDiff averages the WordDiff features by word or sentence counts. This enables features like the average Kuperman’s age-of-acquisition per word.\n3.3.3 PartOfSpeech & AvgPartOfSpeech PartOfSpeech is a family of foundation features that count part-of-speech (POS) properties on the token level based on dependency parsing. Here, we use spaCy’s dependency parser, which is available in multiple languages. All POS counts are based on the UPOS tagging scheme to ensure multilingualism. These POS count-based features are found multiple times across second language acquisition research (Xia et al., 2016; Vajjala and Meurers, 2012). The features in AvgPartOfSpeech family are the averages of PartOfSpeech features by word or sentence counts. One example is the average number of verbs per sentence.\n3.3.4 Entity & AvgEntity Central to discourse analysis, Entity is a family of foundation features that count entities. Often used to represent the discourse characteristics of a text, these features have been famously utilized by a series of research works in readability assessment to measure the cognitive reading difficulty of texts for adults with intellectual disabilities (Feng et al., 2010, 2009). AvgEntity family are the averages of Entity features by word or sentence counts. One example is the average number of “organization” entities per sentence.\n3.3.5 LexicalVariation Second language acquisition research has identified that the variation of words in the same POS category can correlate with the lexical richness of a text (Vajjala and Meurers, 2012; Housen and Kuiken, 2009). One example of a derivative feature in this module is derived by dividing the number of unique verbs by the number of verbs, often referred to as “verb variation” in other literature. 
There are more derivations (“verb variation - 1, 2”) using squares or roots, which are also implemented in our system.\n3.3.6 TypeTokenRatio Type-token ratio, often called TTR, is another set of features found across second/child language acquisition research (Kettunen, 2014). This is perhaps one of the oldest lexical richness measures in a written/oral text (Hess et al., 1989; Richards, 1987). Though TypeTokenRatio features aim to measure similar textual characteristics\nas LexicalVariation features, we separated TTR into a separate family due to its unique prevalence.\n3.3.7 ReadFormula Before machine learning techniques were applied to text readability assessment, linear formulas were used to represent the readability of a text quantitatively (Solnyshkina et al., 2017). Recently, these formulas have been utilized for diverse NLP tasks like fake news classification (Choudhary and Arora, 2021) and authorship attribution (Uchendu et al., 2020). We have implemented the traditional readability formulas that are popularly used across recent works (Lee and Lee, 2023; Horbach et al., 2022; Gooding et al., 2021; Nahatame, 2021).",
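The lexical-richness features from the LexicalVariation and TypeTokenRatio families reduce to simple ratios; a sketch under the assumption of pre-tokenized input:

```python
# Illustrative versions of two lexical-richness features named above; the
# whitespace tokenization and helper names are assumptions for brevity.
def type_token_ratio(tokens: list[str]) -> float:
    """TTR: number of unique token types divided by total tokens."""
    return len(set(tokens)) / len(tokens)

def verb_variation(verbs: list[str]) -> float:
    """'Verb variation': unique verbs divided by total verbs."""
    return len(set(verbs)) / len(verbs) if verbs else 0.0

tokens = "the cat saw the dog and the dog saw the cat".split()
print(round(type_token_ratio(tokens), 3))   # 5 types / 11 tokens ≈ 0.455
print(verb_variation(["saw", "saw"]))       # 1 unique / 2 total = 0.5
```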
"3.4 LFTK in Context": "As we have explored, we tag each handcrafted linguistic feature with three attributes: domain, family, and language. These attributes assist researchers in efficiently searching for the feature they need, one of two research goals we mentioned in section 1. Instead of individually searching for handcrafted features, they can sort and extract features in terms of attributes.\nNotably, our extraction system is fully implemented in the programming language Python, unlike other systems like Coh-Metrix (Graesser et al., 2004) and L2 Syntactic Complexity Analyzer (Lu, 2017). Considering the modern NLP research approaches (Mishra and Mishra, 2022; Sengupta, 2021; JUGRAN et al., 2021; Sarkar, 2019), the combination of open-source development and Python makes our extraction system more expandable and customizable in the community.\nTime with spaCy model’s processing time is reported in Table 6. Excluding the spaCy model’s processing time (which is not a part of our extraction system), our system can extract 220 handcrafted features from a dummy text of 1000 words on an average of 10 seconds. This translates to about 0.01 seconds per word, and this result is ob-\ntained by averaging over 20 trials of randomized dummy texts of exactly 1000 words. This time was taken with a 2.3 GHz Intel Core i9 CPU under a single-core setup. The fast extraction speed makes our extraction system suitable for large-scale corpus studies. Since our extraction system works with a wide variety of tokenizers (different accuracies and processing times) available through spaCy, one might choose an appropriate model according to the size of the studied text. Since spaCy and our extraction system are open sources registered through the Python Package Index (PyPI), reproducibility can easily be maintained by versions.\nIn addition, our extraction system achieves such a speed improvement due to our systematic breakdown of handcrafted features into foundation and derivation (see section 3.1.1). As depicted in Figure 4, designing the system so that derivation features are built on top of foundation features reduced duplicate program calculation to a minimum. Once a foundation feature is calculated, it is saved and used by multiple derivation features. Indeed, the total number of words does not have to be calculated twice for average word difficulty per word and Flesch-Kincaid Grade Level.",
"4 Which applies to which? Task-Feature Correlation Analysis": "For handcrafted features to be generally useful to the larger NLP community, it can be important to\nprovide researchers with a sense of which features can be potentially good in their problem setup. This section reports simple correlation analysis results of our implemented features and four NLP tasks.\nTo the best of our knowledge, we chose the representative dataset for each task. Table 7 reports the Pearson correlation between the feature and the dataset labels. We only report the top 10 features and bottom ten features. The full result is available in the Appendices. We used the CLEAR corpus’s crowdsourced algorithm of reading comprehension score controlled for text length (CAREC_M) for readability labels on 4724 instances (Crossley et al., 2022). We used the ASAP dataset’s2 domain1_score on prompt 1 essays for student essay scoring labels on 1783 instances. We used the LIAR dataset for fake news labels on 10420 instances (Wang, 2017). We used SemEval 2019 Task 5 dataset’s PS for binary hate speech labels on 9000 instances (Basile et al., 2019).\nThough limited, our preliminary correlation analysis reveals some interesting correlations that have rarely been reported. For example, n_verb negatively correlates with the difficulty of a text. But there is much room to be explored. One utility behind a large-scale feature extraction system like ours is the ease of revealing novel correlations that might not have been obvious.\n2www.kaggle.com/c/asap-aes/data",
"5 Conclusion": "In this paper, we have reported our open-source, large-scale handcrafted feature extraction system. Though our extraction system covers a large set of pre-implemented features, newer, task-specific features are constantly developed. For example, URLs count is used for Twitter bot detection (Gilani et al., 2017) and grammatical error count is used for automated essay scoring (Attali and Burstein, 2006). These features, too, fall under our definition (Figure 2) of handcrafted linguistic features. Our open-source script is easily expandable, making creating a modified, research-specific version of our extraction program more convenient. With various foundation features to build from, our extraction program will be a good starting point.\nAnother potential user group of our extraction library is those looking to improve a neural or nonneural model’s performance by incorporating more features. Performance-wise, the breadth of linguistic coverage is often as important as selection (Lee et al., 2021; Yaneva et al., 2021; Klebanov and Madnani, 2020; Horbach et al., 2013). Our current work has various implemented features, and we believe the extraction system can be a good starting\npoint for many research works. Compared to other historically important code\nartifacts like the Coh-Metrix (Graesser et al., 2004) and L2 Syntactic Complexity Analyzer (Lu, 2017), our extraction system is comparable or larger in size. To the best of our knowledge, this research is the first attempt to create a “general-purpose” handcrafted feature extraction system. That is, we wanted to build a system that can be widely used across NLP tasks. To do so, we have considered expandability and multilingualism from architecture design. And such consideration is grounded in the systematic categorization of popular handcrafted linguistic features into the attributes like domain and family. With the open-source release of our system, we hope that the current problems in feature extraction practices (section 1) can be alleviated.",
"A All implemented features": "Our extraction software is named LFTK, and its current version is 1.0.9. Tables 8, 9, 10, and 11 reference v.1.0.9. We only report linguistic family here due to space restrictions. Though our feature description will be regularly updated at this address 3\nwhenever there is a version update, we also put the current version’s full feature table in our extraction program. Through PyPI or GitHub, the published version of our program is always retrievable.",
"B Feature correlations": "Tables 12, 13, 14, and 15 report the full feature correlations that are not reported in Table 7. We\n3https://docs.google.com/spreadsheets/d/1uXtQ1ah0OL9 cmHp2Hey0QcHb4bifJcQFLvYlVIAWWwQ/edit? usp=sharing\nhave used spaCy’s en_core_web_sm model, and the library version was 3.0.5. Pearson correlation was calculated through the Pandas library, and its version was 1.1.4. All versions reflect the most recent updates in the respective libraries."
}
ACL_23_no_limitation/ACL23_1206.json
ADDED
@@ -0,0 +1,21 @@
{
"File Number": "1206",
"Title": "A Transfer Learning Pipeline for Educational Resource Discovery with Application in Survey Generation",
"abstractText": "Effective human learning depends on a wide selection of educational materials that align with the learner’s current understanding of the topic. While the Internet has revolutionized human learning or education, a substantial resource accessibility barrier still exists. Namely, the excess of online information can make it challenging to navigate and discover high-quality learning materials in a given subject area. In this paper, we propose an automatic pipeline for building an educational resource discovery system for new domains. The pipeline consists of three main steps: resource searching, feature extraction, and resource classification. We first collect frequent queries from a set of seed documents, and search the web with these queries to obtain candidate resources such as lecture slides and introductory blog posts. Then, we process these resources for BERT-based features and meta-features. Next, we train a treebased classifier to decide whether they are suitable learning materials. The pipeline achieves F1 scores of 0.94 and 0.82 when evaluated on two similar but novel domains. Finally, we demonstrate how this pipeline can benefit two applications: prerequisite chain learning and leading paragraph generation for surveys. We also release a corpus of 39,728 manually labeled web resources and 659 queries from NLP, Computer Vision (CV), and Statistics (STATS).",
"1 Introduction": "People rely on the internet for various educational activities, such as watching lectures, reading textbooks, articles, and encyclopedia pages. One may wish to develop their knowledge in a familiar subject area or to learn something entirely new. Many online tools exist that enable and promote independent learning (Montalvo et al., 2018; Romero and Ventura, 2017; Fabbri et al., 2018a; Li et al., 2019). A subset of these platforms provide primary literature resources (e.g. publications), such as Google\n∗Corresponding author: irene.li@aya.yale.edu\nScholar1 and Semantic Scholar2. As an alternative to these advanced materials, other educational platforms such as MOOC.org 3 deliver free online courses. Also, unstructured searching on the internet is a popular method to discover other useful resources, such as blog posts, GitHub projects, tutorials, lecture slides and textbooks. Rather than diving into the technical details, these secondary literature resources provide a broad overview of the given domain, which is more valuable for beginners. Still, sifting through this material can be challenging and time-consuming, even if the learner is simply looking for a general and reliable introduction into a new subject area.\nPublicly accessible data repositories that focus on gathering a fixed number of educational resources exist currently, such as scientific papers (Tang et al., 2008, 2010), online platforms like AMiner (Sinha et al., 2015) and Semantic Scholar. Some archives also compile secondary literature materials. TutorialBank (Fabbri et al., 2018a) is a manually-collected corpus with over 6,300 NLP resources, as well as related fields in Artificial Intelligence (AI), Machine Learning (ML) and so on. LectureBank (Li et al., 2020) is also a manuallycollected corpus and contains 1,717 lecture slides. MOOCCube (Yu et al., 2020) is a large-scale data repository containing 700 MOOC (Massive Open Online Courses), 100k concepts and 8 million student behaviours with an external resource. However, in their initial synthesis, these existing corpora either heavily relied on manual efforts that restricted in certain domains, or on a large volume of existing courses sourced from a certain platform. Such solutions are not practically extensible into new or evolving domains. Moreover, according to (Fabbri et al., 2018a), some web data such as blog posts, tutorials and educational web pages are\n1https://scholar.google.com/ 2https://www.semanticscholar.org/ 3https://www.mooc.org/\n29\nalso suitable materials for learners. These rich web data are ignored by existing educational platforms such as google scholar and MOOCcube. In this paper, we wish to ease the need for human annotators by proposing a pipeline that automates resource discovery to similar unseen domains through transfer learning. Besides, such a pipeline deals with multiple resource types to take advantage of web data.\nOur contributions can be summarized into three parts. First, we present a self-sustaining pipeline for educational resource discovery in close unseen subject area or domain. We apply transfer learning with a novel pre-training information retrieval (IR) model, achieving competitive performances. We show that this pipeline achieves 0.94 and 0.82 F1 scores for two arbitrary target domains on discovering high-quality resources. Second, we demonstrate an application that leverage resources discovered by our pipeline, survey generation for leading paragraph. 
Lastly, we release the core source code of the pipeline, as well as the training and testing datasets, comprised of 39,728 manually labelled web resources and 659 search queries. 4",
"2 Educational Resource Discovery Pipeline": "We propose the Educational Resource Discovery (ERD) pipeline that aims at automatically recognizing high-quality educational resources. We model this problem as a resource classification task. Given a resource r, where r can be any source type such as web page, PDF, we can obtain a list of features by feature engineering; based on these features, r is classified positive if it is a high-quality resource, otherwise negative. We illustrate the ERD pipeline in Figure 1. It consists of data collection, feature extraction and resource classification.\n4https://github.com/IreneZihuiLi/ Educational-Resource-Discovery",
"2.1.1 Queries for search": "In this step, we need to conduct a list of meaningful and fine-grain search queries to start. These search queries will then be applied to online search engines for web resources. Queries can be borrowed from external corpora or extracted from existing seed documents (e.g., textbooks). We focus on three domains: NLP (natural language processing), CV (computer vision) and STATS (statistics). For NLP queries, we utilize external topic lists provided by LectureBankCD (Li et al., 2021), in which there are totally 322 NLP-based and 201 CV-based topics from crowdsourcing. For STATS, we extract a list of fine-grained terms from several seed documents, including several textbooks. These terms contain frequent keywords and phrases that are extracted by TextRank (Mihalcea and Tarau, 2004), a statistical method to keyword ranking. In total, we end up with 322, 201 and 137 queries for NLP, CV and STATS domain.\nTo craft our search engine queries, we leverage advanced search conditions: filetype and site (website). Specifically, we consider three file types: PDF, PPTX/PPT, and HTML. Moreover, according to the TutorialBank corpus (Fabbri et al., 2018b), resources clustered by the components of their URL possess highly correlated educational content. Thus, we prioritize restricting our queries to websites that consistently provide high-quality resources. We select the top sites from the manuallycreated TutorialBank corpus and incorporate them into our search queries, as exemplified in 1. We also include the “.edu” top-level domain as a special case for our search queries in order to capture general educational resources. Finally, we combine our query terms with the website and file-type constraints: e.g. “word embeddings filetype:pdf”. We also augment the original query by generating a disjunction of its variations: e.g., “stochastic gradient\ndescent” becomes “stochastic gradient descent OR SGD”. Table 2 displays several sample queries.\nOnce the queries are generated, we leverage three well-established online search engines: DuckDuckGo (https://duckduckgo.com/), Yahoo (https://search.yahoo.com/) and Bing (https://www.bing.com/) to obtain our candidate resources. The top N URLs (where N is determined from the domain, file type and site type, varying from 20 to 100 to control the total number of resources we want to collect) for a given query are cached after checking their HTTP response status and ensuring that a URL has not already been collected as part of another query. Moving forward, the documents pointed to by all of these URLs were automatically downloaded and parsed for their features. Certain features, such as the number of authors were collected using heuristics that accounted for most of the variability within the diverse dataset. The ERD Pipeline’s parsers use the pdfminer5 and grobid6 libraries for PDF files, Apache Tika7 for PPTX/PPT and beautifulsoup8 for HTML.",
"2.1.2 Annotation": "After collecting all resources, the next step is to assign a binary label to each resource based on its quality. Our annotators consist of 7 graduate and senior college students with a solid background in NLP, CV, and STATS. A resource is annotated as positive if it is a high-quality one. Guidelines for a positive resource are:\n• Informative and relevant: introducing basic knowledge about a specific topic. For example, tutorials, introductions, explanations, guides.\n• Papers and lecture slides: papers and lecture notes about a topic in the correct domain.\n5https://github.com/pdfminer/ 6https://github.com/kermitt2/grobid 7https://tika.apache.org/ 8https://crummy.com/software/\nBeautifulSoup/\n• Other secondary literature articles: i.e., blog posts with informative descriptions, definitions and code blocks.\nThe annotation criteria for a poor resource are:\n• Not informative: dataset/software/tool download page without introductory descriptions, such as a paper abstract page (not the paper content), a download page with links.\n• Irrelevant: not showing correct content, broken URLs, URLs with not enough or no text (video or image only).\n• No knowledge included: such as a course landing page, a person’s personal website page.\n• A list of resources/datasets: containing only links to other pages.\nFinally, to measure the inter-coder agreement of the labels, we randomly picked 100 resources and asked each annotator to provide labels independently. Krippendorff’s alpha (Krippendorff, 2011) on this sample evaluated to 0.8344, indicating a high degree of consistency amongst all annotators.\nWe detail statistics about our collected dataset in Table 2, providing the total counts by file type and domain. From the three domains, we collected 39,728 valid resources using 659 distinct queries and achieved a total positive rate of 69.05%.",
"2.2 Feature Extraction": "To train a classifier to identify high-quality educational resources, we first focus on feature engineering. Specifically, we investigate the following three groups of classification features and summarize them in Table 4.\nGroup 1 Features Some of the meta-features of a document that can characterize its quality are embedded in its structure. The features encompassed by Group 1 are high-level and coarse-grained, and focus on aspects such as: the number of headings, equations, outgoing links and authors in a given resource. Heuristically, some good tutorials may tend to include more equations and paragraphs, with many details included. We list all 8 such features in Table 4, Group 1.\nGroup 2 Features These meta-features describe the fine-grained but statistical details of the document. The resource URL’s components, such as the top-level domain name and subdomain name, correlate resources from websites that deliver consistent quality. The other Group 2 features are centered around the characteristics of the free text. For instance, NormalizedUniqueVocab (the size of the vocabulary divided by the total number of words) can estimate the vocabulary’s complexity and PercentTypos (the percentage of words that are incorrectly spelled) can approximate reliability. We itemize such features in Table 4, Group 2.\nGroup 3 Features In addition to the above features, we propose 9 features based on pretrained language models. To achieve this, we first choose three models9: BERT (Devlin et al., 2019), SciBERT (Beltagy et al., 2019) and Longformer (Beltagy et al., 2020). BERT is a pretrained language model that was pretrained on Wikipedia documents. SciBERT is a BERT-based model trained on the sci-\n9https://huggingface.co/transformers/ pretrained_models.html\nentific domain, making it suitable for our use case. Longformer is a BERT-based model that handles longer input sequences.\nMoreover, we introduce a novel pre-training approach: QD-BERT MLM (Query-document BERT Masked Language Modeling). A query could be a single word, phrase or a paper title, indicating the topic or main idea of the document. We pair the query term with the corresponding document as the input and follow the Masked Language Modeling (MLM) method of BERT (randomly masking 15% tokens and letting the model predict them), as shown in Figure 2. We apply two external corpora for pre-training to ensure the data quality: TutorialBank (TB) 10 and arXiv 11. The latest TutorialBank has 15,584 topic-document pairs; and arXiv has 259,050 title-abstract pairs (computer science papers only). We enumerate all models in Table 4, Group 3, naming dataset_modelname.\nWe propose an information retrieval-based scoring function to combine features from deep models with Group 1 and 2 features. This scoring function\n10http://aan.how/download/ 11https://www.kaggle.com/\nCornell-University/arxiv\ncalculates a score of each resource, showing the relevancy of the resource to all the searching queries. Relevancy is one of the most indicators that the resource is annotated as positive. The score is higher if it is more relevant to the queries. In Section 2.1.1, we apply a list of queries (q ∈ Q) to download resources, we compute a cosine-similarity based ranking score scorer for resource r:\nscorer = ∑\nq∈Q cosine (Vq, Vr)\nwhere Vq and Vr are BERT-based model embeddings for the query term and resource respectively. We compute scores on each pre-trained BERT models of each resource.",
"2.3 Resource Classification": "Since there are various feature types, we conduct prepossessing before applying the classifiers. Numerical values are binned into groups, and categorical features are converted into integer codes. We evaluate four traditional classifiers: Random Forest (RF), Decision Tree (DT), Support Vector Machine (SVM) and Logistic Regression (LR). We find that RF performs the best and has a slight edge over DT, but SVM and LR significantly lag behind. Thus, we report the Random Forest’s performance, summarized in Table 5. Specifically, we include precision, recall and F1 scores on different feature groups: Group 1, Group 1+2, and Group 1+2+3. The last setting achieves the best performance. Additionally, since it is also possible to solely apply BERT models (Group 3) for the classification task, we include a special setting: Group 3, BERT only. While BERT’s results in isolation are good, Group 1+2+3 still remains the winner.\nIn general, performance on the CV domain is better than on STATS. This is expected given that the corpus distance between NLP and CV is smaller than the one between NLP and STATS. We give detailed data analysis in the next section.",
"3 Data Analysis": "To better understand the collected data and our classifier’s performance, we conduct a study on the features and corpus differences between the three experimental domains.\nFeature Importance Score We take the bestperformed model of NLP→CV domain (Group 1+2+3), and take the Gini Index calculated by Decision Trees as the feature importance score. Overall, we extract 8746 features in CV and 8525 features of STATS after binning numerical values and encoding categorical features. In Figure 3, we list the top 20 features of CV and STATS. Some Group 1+2 features rank in the top 5, since they are main indicators that the resource is informative (i.e., more heading numbers, longer contents). Additionally, Group 3 features (starting with BERTScore) also play an important role. In fact, all 9 BERT-based feature scores rank top 20, suggesting that our scoring function that adds these BERT-based semantic features into the pipeline is very helpful when doing classification for resource discovery.\nCorpus Differences Our pipeline performs better on CV topics, which can be attributed to cor-\npus differences relative to NLP. In Figure 4, we plot the percentage of overlapping n-grams of the {NLP, CV} and {NLP, STATS} domain pairs. This shows that NLP and CV have a larger overlap than {NLP, STATS} with respect to all of the n-grams (n ∈ {1, 2, 3, 4}). From this, we uphold that the classifiers trained on semantic features based on BERT models are valuable for bridging more distant domains with transfer learning.\nTo further contrast our findings, we enumerate the top 10 URLs in Table 6. Although the websites are ranked in different orders, there are still common URLs across the domains (highlighted in the table). Once again, CV shares a larger overlap with NLP in comparison to STATS. Along with the feature importance score, this cross-domain consistency further illustrates that the URL metafeatures will benefit our model’s out-of-domain classification. We show more feature statistics in the Appendix.\nComparison With Similar Datasets We compare a number of existing NLP educational datasets in Table 7, emphasizing the resource type, human effort for annotations, and corpus scale. Note that in this table, we only concentrate on human annotation efforts for free-text resources. This is because these free-text resources are the primary goal of the ERD Pipeline, as opposed to other tasks (e.g. learning concept relations, concept mining). We can see that MOOCcube (Yu et al., 2020) has a massive\nquantities of a single resource type (papers). They obtained the metadata from a third-party platform, AMiner, without a full round of human annotations. TutorialBank (Fabbri et al., 2018b) has a larger number of resources than LectureBank (Li et al., 2020), and it consists of diverse resource types. Our pipeline is very similar to TutorialBank in terms of resource type, but ours extends to more resources and subject areas, enabling us to research transfer learning across domains.",
"4 Application: Survey Generation for Lead Paragraphs": "In this section, we demonstrate an interesting application that applies the resources discovered using our ERD Pipeline, Leading Paragraph Generation for Surveys.\nNovel concepts are being introduced and evolving at a rate that creates high-quality surveys for web resources, such as Wikipedia pages, challenging. Moreover, such existing surveys like Wikipedia still needs human efforts on collecting relevant resources and writing accurate content on a given topic. Researchers have been investigating automatic ways to generate surveys using machine learning and deep learning methods. Survey generation is a way to generate concise introductory content for a query topic (Zhao et al., 2021). While most of the existing work focuses on utilizing Wikipedia to achieve this (Liu et al., 2018), little has been done for the web content. Since our ERD pipeline provides sufficient web data, we propose a two-stage approach for generating the lead paragraph that applies these web data selected from the ERD pipeline.",
"4.1 Two stage method": "We illustrate the two stage method in Figure 5. Given a query topic and high-quality web resources selected by ERD pipeline, we wish to generate the leading introductory paragraph for the query topic. This approach consists of content selection (step 1) and abstractive summarization (step 2). Content selection is the process of selecting the most relevant materials (including documents or sentences) according to the given query. Abstractive summarization generates the accurate lead paragraph from the selected materials.\nContent Selection ERD pipeline is supposed to identify massive resources with broad coverage of the topics, so the first step is to select related content with the query topic.\nWhile there is no suitable pretrained data for this task, and we do not collect survey data for training, we utilize the WikiSum dataset (Liu et al., 2018).\nMethods L=5 L=10 L=20 L=40 LSTM-Rank 39.38 46.74 53.84 60.42 Semantic Search 34.87 48.60 61.87 74.54 RoBERTa-Rank 64.12 72.49 79.17 84.28\n(a) ROUGE-L (Lin, 2004) Recall scores for WikiSum content selection, varying the number of paragraphs returned.\nWikiSum contains 1.5 million Wikipedia pages, their references and their associated Google Search results. WikiSum includes many well-established topics and comprehensive reference documents, making it suitable for survey generation. We first evaluate content selection models using WikiSum. We experiment with three approaches in this step. Liu and Lapata (2019) undertake query-based content selection as a regression problem of predicting the ROUGE-2 recall of a given paragraph-topic pair (LSTM-Rank). Reimers and Gurevych (2019) fine-tune BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) to produce fixed-length vectors which can be compared using cosine similarity. We embed the topic of each Wikipedia page and candidate paragraph using this method, and select the paragraphs with the closest vectors to the title (Semantic Search). Additionally, we train RoBERTa in a similar manner as (Liu and Lapata, 2019). Then, we compare the query topic and paragraphs as sentence pairs and use the resultant relevance scores to for the paragraph ranking (RoBERTa-Rank). As shown in Table 8a, RoBERTa-Rank is the highest-scoring content selector, so we employ it for the abstractive summarization’s input.\nAbstractive Summarization This step is to generate summarization from the content selected previously. As a sequence-to-sequence task, there are many existing pretrained models to use. We experiment with BART (Lewis et al., 2019), a pre-trained model for text generation, as well as HierSumm, a hierarchical model from Liu and Lapata (2019). We show the summarization results on the WikiSum data in Table 8b, and observe that BART achieves the higher performance.",
"4.2 Human Evaluation and Case Studies": "So far we have shown that applying RoBERTaRank and BART as a two-step method gives promising results evaluated on the WikiSum dataset. We connect our pipeline with this method to generate the leading paragraph. We choose 10 queries randomly as survey topics in each domain, for example, “sentiment analysis ”in NLP. A full query topic list is in the Appendix. Since we do not have ground truth, we conduct human evaluation and case studies.\nWe evaluate the model outputs on a 1-5 Likert scale based on the following qualities:\n• Readability: attains a maximum score of 5 if the output is readable with a high degree of fluency and coherency.\n• Relevancy: attains a maximum score of 5 if the output is perfectly relevant to the current topic with no hallucinations.\n• Non-redundancy: attains a maximum score of 5 if the output has no repeating phrases/concepts.\nWe report average scores among 2 human judges of all topics by domain, shown in Table 9. The scores of NLP are the highest for all qualities, and STATS performed most poorly. This discrepancy may be caused by data collection bias, as more NLP resources were included.\nWe randomly pick one case study from each domain in Table 10. The model is able to generate leading paragraphs in a similar Wikipedia article style by giving a definition of a certain concept, following by descriptions of possible applications. Overall, while these surveys contains some facts, the quality can still be improved. For instance, the STATS paragraph exhibits some redundancy (e.g., “computer graphics”,“computer vision”). As an initial experiment, we have demonstrated the opportunities of extending our ERD Pipeline to produce survey paragraphs. In the future, we aim to enhance the generated lead paragraphs and extend the model for generating complete surveys.",
"5 Conclusion": "In this paper, we proposed a pipeline for automatic knowledge discovery in novel domains. We applied transfer learning with a novel MLM pre-training method and achieved competitive classification performances. Moreover, we demonstrated two applications that take advantage of resource discovered by our pipeline. Finally, we released our source code and the datasets that we collected, including the 39,728 manually labelled web resources and 659 search queries. We plan to make this pipeline an online live educational tool for the public.",
"A Chosen topics for Human Evaluation in Survey Generation": "Table 11 shows the randomly selected topics for survey generation, 10 from each domain.",
"B More Sample Queries": "We list more sample queries in Table 12, such queries are applied in the Data Collection step of the proposed pipeline.",
"C BERT models for Group 3 features": "The three main deep features were extracted using the following pre-trained models:\nBERT-base https://huggingface.co/ bert-base-uncased. SciBERT https://huggingface.co/allenai/ scibert_scivocab_uncased. Longformer https://huggingface.co/allenai/ longformer-base-4096.",
"D More Data Statistics": "In Table 13, we show token-level and sentencelevel statistics of our collected data.",
"E Meta-Feature Distributions": "In the following pages, we show the histograms of the 18 quantitative meta-features collected for each data point. Recall from Table 4 that these features were segregated into two groups. Group 1 features are higher-level and generally pertain to the document layout. Group 2 features focus on more specific aspects of the resource’s URL and free text."
}
ACL_23_no_limitation/ACL23_1210.json
ADDED
@@ -0,0 +1,30 @@
{
"File Number": "1210",
"Title": "ChatBack: Investigating Strategies of Providing Synchronous Grammatical Error Feedback in a GUI-based Language Learning Social Chatbot",
"abstractText": "The increasing use of AI chatbots as conversation partners for second-language learners highlights the importance of providing effective feedback. To ensure a successful learning experience, it is essential for researchers and practitioners to understand the optimal timing, methods of delivery, and types of feedback that are most beneficial to learners. Synchronous grammar corrective feedback (CF) has been shown to be more effective than asynchronous methods in online writing tasks. Additionally, self-correction by language learners has proven more beneficial than teacherprovided correction, particularly for spoken language skills and non-novice learners. However, existing language-learning AI chatbots often lack synchronous CF and self-correction capabilities. To address this, we propose a synchronous conversational corrective feedback (CCF) method, which allows self-correction and provides metalinguistic explanations (ME). Our experiments examine the effects of different feedback presentation methods and selfcorrection on users’ learning experiences and intention to use the system.Our study suggests that in chatbot-driven language-learning tools, corrective feedback is more effectively delivered through means other than the social chatbot, such as a GUI interface. Furthermore, we found that guided self-correction offers a superior learning experience compared to providing explicit corrections, particularly for learners with high learning motivation or lower linguistic ability.",
"1 Introduction": "The growing prevalence of AI chatbots as conversational partners for second-language learners emphasizes the vital role of delivering effective feedback to enhance the overall learning experience. As researchers and practitioners work to optimize computer-based conversational language learning, it is essential to determine the optimal timing, methods of delivery, and feedback types that contribute\nto the most successful outcomes. Prior research has shown that synchronous corrective feedback (CF) for grammatical errors is more effective than asynchronous methods in online writing tasks (Shintani and Aubrey, 2016). However, the best form of synchronous CF in AI chatbot systems has yet to be determined. Furthermore, self-correction by language learners has proven to be more beneficial than teacher-provided correction (Brown, 2009), especially for spoken language skills and for learners with more than limited L2 proficiency. Despite this evidence, numerous current languagelearning AI chatbots lack diverse synchronous CF and self-correction features. And while past research has shown that learners’ proficiency levels significantly influence their preferences (Orts and Salazar, 2016; Yang, 2016; Wiboolyasarin et al., 2022), the optimization of feedback strategies to adapt to users with varying proficiencies and motivations in language-learning chatbots remains unexplored. To address this limitation, we propose a AI chatbot for language learning with synchronous conversational corrective feedback (CCF), and investigate the effect of the feedback form and selfcorrection with metalinguistic explanations (ME). Specifically, we explore the following two research questions:\nRQ1: How do the forms of CF delivery, specifically, feedback from the conversational partner (i.e., the chatbot) and a separate role (i.e., a GUI), impact the learning experience, including conversational enjoyment, negative emotions, self-efficacy, perceived usefulness, and intention to use the system? We hypothesize that: H1: Learners prefer receiving feedback from a separate role rather than from the conversation partner.\nRQ2: How does the process of self-correction (compared to explicit feedback without selfcorrection) impact the learning experiences, including conversational enjoyment, negative emotions, self-efficacy, perceived usefulness, and intention to\n83\nuse the system? Specifically, what are the effects on people with different linguistic ability and learning purposes? We hypothesize that: H2.1: Learners with lower linguistic ability prefer receiving guided self-correction compared to those with higher proficiency. And H2.2: Learners with serious learning purposes prefer receiving guided self-correction relative to those who report other learning motivation.",
"2.1 Chatbots as Conversational Partners for": "L2 Learners\nA major challenge for second language instructors and students is finding adequate opportunities for students to practice conversational skills. A possible solution is the use of AI-driven chatbots to fill this gap. For example, Fryer and Carpenter (2006) discuss how chatbots can be used to increase opportunities for students to practice their second language. Fryer and Carpenter (2006) also point out that students who are reticent to speak with human interlocutors are often able to talk more freely with a computer. Similarly, Huang et al. (2022) states that chatbots “encourage students’ social presence by affective, open, and coherent communication.” This interaction is driven by recent advances in generative AI and chatbot design that have improved the dialogue flow of chatbots as well as their adaptability to individual user attributes (Li et al., 2022). In the present work we combine scripted dialogue with generative AI to create a chatbot which is able to effectively interact with users.",
"2.2 Automatic Corrective Feedback for L2 learners": "Providing CF to students is an extremely timeconsuming prospect for instructors (Shintani, 2016), and the automation of feedback can free up instructor time to focus on rhetorical and conversational skills (Li et al., 2015). Particularly, automated CF (ACF) can provide the type of realtime feedback to students that is impossible for instructors to provide, allowing students to immediately take advantage of the proposed suggestions and gain more confidence in their independent expressive abilities (Barrot, 2021). Heift and Hegelheimer (2017) further explains that ACF enables “learner self-study and practice of the target language by identifying and explaining error sources” and allows for self-revision.\nIn the present work, we test two alternate types of CF: explicit and implicit feedback, in the context of an educational chatbot for language learning. Previous work had shown that providing metalinguistic explanations without explicit corrections, which we term guided self-correction, tends to result in better student engagement and immediate gains in target-form usage (Sauro, 2021) and may improve long-term learning outcomes in writing tasks (Gao and Ma, 2019; Barrot, 2021). (Penning de Vries et al., 2020) investigates the use of ACF in a spoken language system, and finds speaking practice with ACF benefits users’ learning goals. However, these feedback methods have not previously been tested in the context of language learning chatbots, a gap that the present paper seeks to address.\nAn additional key aspect of the present work is our testing alternate strategies for presenting feedback to language learners. Specifically, we test whether students prefer receiving CF directly from the chatbot as part of the conversational flow, or from another source such as the GUI window. While previous work has looked at student reactions to the timing of CF (Deeva et al., 2021), student control over feedback (Deeva et al., 2021), and level of explicitness (Sarré et al., 2021; Sauro, 2021), few studies investigate the effect of method of feedback presentation on engagement and learning experience. As such, this study is the first to investigate the impact of strategies for providing feedback on learning experiences and self-efficacy in the setting of a language learning chatbot.",
"2.3 Grammatical Error Correction & Classification models": "Much recent progress has been made in the task of Grammatical Error Correction (GEC). To date, this work has largely focused on student essays (Ng et al., 2014; Bryant et al., 2019). For example, Omelianchuk et al. (2020)’s GECToR reframes the GEC task as a sequence labeling task rather than a sequence transformation task. Other promising models are proposed by Stahlberg and Kumar (2021) and Rothe et al. (2021), who achieve strong results on the JFLEG (Napoles et al., 2017) and CoNLL-2014 (Ng et al., 2014) datasets, respectively. Furthering this work, Qorib et al. (2022) achieves state-of-the-art results on several datasets by combining successful GEC models, such as Omelianchuk et al. (2020) and Rothe et al. (2021)\nusing a simple logistic regression algorithm. More recently, Fang et al. (2023), Wu et al. (2023), and Coyne and Sakaguchi (2023) have investigated the application of pretrained large language models, such as GPT-3, to GEC benchmark tasks. We emphasize that the above-referenced works primarily target correcting written student essay data. We, on the other hand, seek to apply GEC to the dialogue domain, and thus previously proposed GEC models may not work as effectively as demonstrated in prior art.\nThe present work also relies on error classification models to ensure that the correct type of feedback is presented to users. ERRANT (Bryant et al., 2017) is a rule-based algorithm to discriminate error categories by their part-of-speech (POS) tags. As an improvement to ERRANT, SERRANT (Choshen et al., 2021) improves the type accuracy by utilizing SErCL (Choshen et al., 2020) rules when ERRANT is not informative. SErCL defines errors by combining the Universal Dependencies (Nivre et al., 2016) tags of the target item before and after correction.",
"3.1 Recruitment and participants": "For this study, we recruited native Mandarin speakers as participants. To find users genuinely interested in conversing with a chatbot and improving their English grammar, we used social media for recruitment, rather than relying on school classes or Amazon Mechanical Turk. Our demographic recruitment criteria included being a native L1 Mandarin speaker aged 18 years or older. We also sought participants having an interest in discussing travel (the topic of the study) in English via text message while receiving grammatical error feedback. Participation in the study was entirely voluntary and unpaid.\n175 participants completed the conversation and post-survey, with the following socio-demographic profile. The average age of respondants was 32 years, with the large majority having postsecondary education. Participants have studied English for an average of 15.7 years. Most participants reported self-improvement or having fun as their motivation for engaging with our system. Of those users who participated, 120 users produced one or more targeted errors while using the system. A full breakdown of sociodemographic details can be found in Appendix B.",
"3.2 Procedure": "Figure 1 depicts the user study procedure. Participants were randomly allocated to one of three experimental groups, each implementing a unique grammatical error feedback strategy. The study initiated with a travel-themed conversation with the chatbot. If participants made grammatical errors, as detected by our GEC model, the system offered feedback in accordance with their group’s strategy. To ensure that grammar errors could be identified, users were required to type at least three words per turn and encouraged to use complete sentences. They also needed to complete a minimum of 12 dialogue turns, corresponding to the length of the scripted responses. After the conversation, users completed a post-survey collecting their socio-demographic information, English learning background, motivations, and subjective experiences with the system. To incentivize survey completion, participants who finished the survey received asynchronous grammar feedback, including a conversation summary and grammar error corrections for their responses. Both the system UI and post-survey were in Mandarin.",
"3.3 Conversation and grammar error feedback": "As shown in Figure 2, the conversation alternates between chatting and feedback modes for all experimental groups. It starts with a chatting mode discussing travel with users. Whenever a user makes a grammatical error from the targeted error types (as defined in Section 3.3.1 below), the system first acknowledges their response and then switches to feedback mode. In Group 1, users receive feedback directly from the chatbot (i.e., the interlocutor) via guided self-correction. In Groups 2 and 3, however, users receive feedback via a pop-up window on the system GUI (i.e., separate from the interlocutor) to distinguish it from the conversation. While Group 2 receives guided self-correction, group 3 only receives explicit error correction without an opportunity to self-correct. (See 3.3.2 for more details.) Once the feedback is completed, the system switches back to chatting mode and resumes the ongoing conversation. In case of a non-targeted\nerror (i.e., an error detected by the GEC model but not explicitly handled by our feedback generator), the system simply highlights the error in the GUI and displays the corrected form at the appropriate location in the user’s previous utterance, without disrupting the chatting mode.",
"3.3.1 Targeted error types": "Our current feedback generation method generates feedback for five common types of grammatical errors frequently made by English learners. The error types are defined according to the SERRANT framework (Choshen et al., 2021). The error types we target are as follows:\n• VERB:SVA : Subject-verb agreement errors. • VERB:TENSE : Incorrect verb tense usage. • VERB:FORM : Verb form errors. For exam-\nple, using an infinitive verb when a conjugated form is needed. • NOUN:NUM : Noun number errors. For example, a user saying “I like cat” instead of “I like cats”. • DET : Misuse or omission of a determiner, such as “the” or “a”.\nWe target these errors because they are among the most common errors identified in the ErAConD dataset, indicating a high prevalence of these error types in L2 English learner conversations. We also consulted with professional second language educators who agreed that these error types are among the most frequently seen in their students’ speech.\nFinally, to avoid overwhelming students with feedback and disrupting the conversation too frequently, we chose this relatively small set of errors to target for the purposes of this study; we plan to add additional error types in future work.",
"3.3.2 Grammar error feedback strategies": "When the user makes a targeted error, we generate CF that includes metalinguistic explanations, hints, and corrected forms. We use the term “metalinguistic” to reference a student’s capacity to “reflect on and manipulate the structural features of language” (Nagy and Anderson, 1995). In the context of the present work, we define “metalinguistic explanation” as feedback which contains explicit information about the student’s language use, such as pointing out that the student used an incorrect verb tense. Depending on the experimental group, the feedback presented to the user can consist of one or more of the following types:\n1. Error identification: This specifies the portion of the user’s utterance that contains the error without providing the correct form.\n2. Implicit metalinguistic clues: This includes a metalinguistic suggestion about the type of error made, followed by prompts that encourage the user to self-correct, with additional guidance. There are two levels of this type of feedback: Level 1 provides a simple metalinguistic suggestion for the user’s first attempt, while level 2 provides a more detailed metalinguistic explanation for the second attempt.\n3. Explicit correction: This provides an explicit statement of the corrected form.\nWe present these suggestions in different ways depending on the experimental setting. The first type of feedback, which we refer to as guided selfcorrection, begins with feedback types 1 and 2, and progresses to type 3 only if the student is unable to self-correct after two attempts. In this approach, the user is first provided the identified error portion (e.g. “In this sentence you made a mistake on the verb ‘are’. ”), along with a metalinguistic suggestion (level 1) and an opportunity to self-correct (e.g. “What verb form should you have used? For example, \"sees\" and \"saw\" are different forms of \"see\".”). If the user is unable to self-correct, they are given a second chance with a more detailed metalinguistic suggestion (level 2) (e.g. “Not quite. Think about subject-verb agreement. How should your verb be changed to agree with the subject \"He\"? ”) If the user is still unable to self-correct after two attempts, we then present the explicit correction containing the corrected form. (e.g. ‘“Good try, but not quite. It’s tricky, I know. The correct verb form here is \"is\". Remember to make your verbs agree with their subjects.”) This guided self-correction feedback approach is presented to experimental groups 1 and 2, as shown in Figure 2. The second type of feedback, which we refer to as explicit feedback, consists only of providing type 1 and type 3 feedback (see group 3 in Figure 2).",
|
| 14 |
+
"3.4.1 Linguistic ability": "Linguistic ability includes various aspects. In this study, we focus on learners’ lexical competence in their produced utterances. We measure lexical diversity using the VocD method (McKee et al., 2000) 1 and assess lexical sophistication with the English Vocabulary Profile (EVP), aligning vocabulary usage with CEFR levels. Both metrics are evaluated with the online tool Text Inspector (Bax, 2012), with the medium of text designated as \"writing.\" While the Text Inspector tool also provides language proficiency levels based on the CEFR framework, we do not rely on this information in our study. The tool’s original design primarily targets writing tasks and may not be as suitable for evaluating language proficiency in textual conversation. For a comprehensive evaluation of the results, please refer to Appendix D.\n1https://textinspector.com/help/lexical-diversity/",
|
| 15 |
+
"3.4.2 Post-conversation surveys": "Upon the completion of each conversation, we gathered self-reported ratings from users on five distinct constructs related to users’ attitudes toward the system: negative emotion toward the feedback (frustration and annoyance), self-efficacy (confidence in grammar usage and expressive ability), perceived usefulness of the grammatical CF and suggestions, enjoyment using the system, and future intention to use the system. To ensure the reliability and validity of these constructs, we utilized a set of two measurement items, each rated on a 5-point Likert scale, for each construct. These measurement items were adapted from previous research studies (See Table 9) and subsequently modified to better suit the context of language learning chatbots. Figure 5 shows the survey results for each item. Hypotheses related to each construct and detailed descriptions of the constructs are shown in Appendix F.",
|
| 16 |
+
"4.1 Overview": "Figure 3 presents the system pipeline in chatting mode. At each turn, user input is first processed by the grammar error correction (GEC) module. If any targeted errors are identified, the system switches to feedback mode. The system first highlights the portion of the user’s utterance that contains errors with red backgrounds. Then, the topic chatbot acknowledges the user’s response using its generation model. Subsequently, the conversational feedback generator provides grammatical feedback to the users. The feedback content and form of delivery will vary depending on the group’s feedback strategies. For non-targeted error types, the topic chatbot will continue the conversation while the system will highlight the user’s error and display the corrected form on the GUI at the user’s previous response. If there are no grammar errors in the user’s input, the topic chatbot continues the conversation without highlighting or interruption.\nThe process in feedback mode, where targeted types are being addressed, proceeds as follows: For the group without guided self-correction (group 3), the system switches back to chatting mode immediately after providing explicit grammatical feedback at the same turn. For groups with guided selfcorrection (groups 1 and 2), the feedback mode continues to the next turn until the correction process concludes. During feedback mode in subsequent turns, the GEC module checks if users are able\nto successfully self-correct their errors. If users self-correct successfully, the feedback generator acknowledges the correction and the system returns to chatting mode where the topic chatbot continues the conversation. If they don’t, they are given a second chance where the feedback generator provides a more detailed metalinguistic hint. If they fail to self-correct after two attempts, the feedback generator provides explicit feedback the system switches back to chatting mode. Otherwise, the feedback continues.",
|
| 17 |
+
"4.2 Topic chatbot": "The topic chatbot combines scripted dialogue with a generative model to create a topic-oriented chatbot capable of effectively interacting with users. At every dialogue turn, the chatbot first generates a response and subsequently concatenates it with the scripted responses. Scripted dialogue is employed for experimental control purposes, primarily to pose questions designed to elicit more grammatical errors and to ensure consistency in the topics presented to users across different experimental groups. Conversely, the generative model is used to acknowledge user responses in a more natural manner by dynamically responding to user input.\nThe script encompasses 12 dialogue turns covering travel preferences, past travel experiences, and dream vacations. We employ Blenderbot3 3B as our generative model, which possesses various conversational skills and long-term memory. To reduce latency, Blenderbot’s internet access was disabled during experiments. After completing the scripted portion of the conversation, if users decide to continue the conversation, the chatbot’s responses will rely solely on the generative model.",
|
| 18 |
+
"4.3.1 Grammar error correction": "Figure 3 illustrates the grammar error correction process, which consists of two main steps: grammar error correction and error annotation. First, we use a grammar error correction (GEC) model to generate corrected sentences based on userinput sentences. The GEC model is a T5 (Raffel et al., 2020) model trained for grammar correction2. We fine-tuned the model on the ErAConD dataset (Yuan et al., 2022), a GEC conversation dataset between L2 English learners (of at least intermediate proficiency level) and an educational chatbot. We selected level 3 errors (as defined in the ErAConD dataset) as our training data since they are most likely to result in misunderstanding. The resulting fine-tuned model achieves an overall F0.5 of 0.43 evaluated by 5-fold cross-validation, as shown in Table 1. Detailed results by error type are shown in Appendix Table 10. While our reported F0.5 is substantially lower than SOTA GEC models designed for written text, there is no established baseline for dialog GEC. Note that the precision of 0.56 doesn’t mean that half of the edits generated are incorrect. In fact, there are many equally valid ways to correct a given grammar error; however, when\n2https://huggingface.co/ deep-learning-analytics/GrammarCorrector\ncalculating precision using a test dataset, we can only compare system-generated corrections with the one or two human-annotated gold edits. If the machine-generated correction does not match the gold annotation, it will negatively impact evaluation performance, even if the correction is a completely legitimate alternative. As a result, current evaluations tend to underestimate the performance of GEC models. Rozovskaya and Roth (2021) provides an in-depth study of this issue. While the current model is effective for the present study, we are working to improve the GEC model for future iterations of our system.\nAfter error correction by the GEC model, SERRANT compares the user input sentence with the corrected version to extract edits and classify error types. For most categories, there are three possible operations to specify user input errors: Missing (M), Replacement (R), and Unnecessary (U), indicating whether tokens should be inserted, substituted, or removed, respectively. Subsequently, we filter out trivial grammar error types (e.g., punctuation) and reapply the edits to the original sentences.",
|
| 19 |
+
"4.3.2 Grammar error feedback presentation": "Grammar errors can be presented in three different forms: 1) GUI inline highlighting on the user’s utterance, 2) conversational feedback presented in the form of a chatbot response from the feedback generation module, and 3) conversational feedback presented in a pop-up window from the feedback generation module.\nAs discussed in Section 3.3.1, our feedback generation module explicitly targets five error types, while other error types detected by our GEC model are referred to as “non-targeted”. For targeted errors, the error is first presented in the form of GUI inline highlighting on the user’s previous response. Then, after the topic chatbot acknowledges the user’s content, conversational feedback is presented in a form that depends on the experiment group. For group 1, the feedback is presented by the chatbot, while for groups 2 and 3, it is presented in a pop-up window. For non-targeted errors, only GUI inline highlighting is shown without any additional feedback.\nTo generate conversational feedback, we rely on a number of feedback templates that can be modified based on the specifics of the respective error. For example, if SERRANT tags an error as R:NOUN:NUM , indicating a replacement operation (’R’) resulting from a difference in noun num-\nber between the original input and the correction, we populate a template with noun number information to generate feedback such as “In this sentence, you used a single noun when you should have used a plural noun”, as shown in Figure 2. We use a similar approach to populate feedback templates for error types such as subject-verb agreement, verb tense, verb forms, and determiners.",
|
| 20 |
+
"5.1 Dialog statistics": "Table 2 displays the distribution of participants across each experimental group. Among the 175 participants, 154 encountered at least one error, with 120 experiencing at least one targeted error. In this study, our survey analysis focuses on the 120 users who encountered targeted errors, since the primary experimental treatment involved the feedback delivery strategy for these errors.\nTable 3 offers statistics for users who had targeted errors in their conversations, with a sample size of 120. On average, users engaged in 15.1 dialog turns (i.e. 15.1 responses from users), each consisting of 10.1 tokens. Each conversation contained 3.4 turns with any error, 1.6 turns with nontargeted errors exclusively, and 1.8 turns with targeted errors. The average number of errors per dialog amounted to 4.3. We also analyzed the most frequently occurred error types among all 175 participants, with the top ten including the five targeted error types as well as preposition, spelling, noun, and verb errors (see Appendix E for comprehensive error type counts).\nRegarding learners’ lexical competence, we assessed their lexical diversity, which had a mean (M) value of 84.8 (SD = 27.0) and a median of 80.25. The range of lexical diversity scores ranged from 37.1 to 200 (see Appendix D for more details).",
|
| 21 |
+
"5.2 Survey results": "Figure 5 Shows the survey results of all dialogs with targeted errors. We performed two-tailed ttests between groups (Groups 1 and 2 for RQ1,\nand Groups 2 and 3 for RQ2), and use Welch t-\ntest when the sample sizes are unequal, as recommended by Zimmerman (2004).",
|
| 22 |
+
"5.2.1 Effects of the form of feedback delivery": "The results presented in Figure 5 demonstrate that users experienced higher frustration levels when interacting with Group 1 than with Group 2 (t(58.61) = 2.26, p < .05). Our findings suggest that feedback provided by the dialogue agent leads to greater frustration than feedback delivered from another role, such as the GUI, even when the content and timing of the feedback are identical.",
|
| 23 |
+
"5.2.2 Effects of guided self-direction": "Figure 5 shows that users gained more self-efficacy in their grammar skills when interacting with Group 2 compared to Group 3 (t(77.88) = 2.51, p < .05). These results suggest that guided self-correction may be beneficial for enhancing users’ confidence in their English grammar skills during conversations.\nEffects of user’s linguistic ability To examine the influence of guided self-correction on users with varying linguistic abilities, we analyzed survey data from participants with higher and lower lexical diversities (VocD >= 90 and VocD <=70,\nrespectively). The threshold values were determined based on the median VocD score (80) with a range of plus or minus 10. Our results indicate that users with higher lexical diversity found guided self-correction (Group 2) more annoying compared to the absence of guided self-correction (Group 3). This could be because users with higher lexical competence might have already understood the corresponding metalinguistic rules, making guided self-correction redundant and less efficient than explicit feedback.\nEffects on users’ motivation To investigate the effects on users with varying motivations, particularly their level of commitment to improving their English conversation skills, we excluded approximately one-third of users who reported using the system out of curiosity or for fun and defined the remaining users as \"serious learners\". Our findings (Figure 4c) reveal that serious learners not only experienced significantly higher levels of confidence in their grammar skills with guided selfcorrection (t(46.57) = 2.96, p < .01), but also perceived the feedback to be more useful compared to the absence of guided self-correction (t(40.54) = 2.47, p < .01). Moreover, we conducted a further analysis on serious learners with low lexical diversity (VOCD <= 70) (Figure 4d) and found that when receiving guided selfcorrection, they reported higher enjoyment in conversation (t(9.14) = 3.46, p < .01 for enjoyment1 and t(8.28) = 2.84, p < .05 for enjoyment2), increased self-efficacy in both grammar skills (t(8.21) = 4.20, p < .01) and expressing ideas (t(6.61) = 3.01, p < .05), and perceived the grammatical corrective feedback (t(6.78) = 2.70, p < .05) and suggestions (t(6.94) = 3.03, p < .05) to be more useful compared to the absence of guided self-correction.",
|
| 24 |
+
"6 Conclusion": "Results from this preliminary study provide evidence that learners may prefer getting corrective feedback from a separate role, instead of from the conversation partner to reduce frustration. In addition, guided self-correction may provide better learning experiences than the absence of selfcorrection, especially for learners with lower lexical competence or more serious learning motivation. These findings highlight the importance of considering users’ individual differences when designing language-learning chatbots, and the need\nfor personalized feedback mechanisms that cater to individual users’ need.",
|
| 25 |
+
"7.1 Assessment of learner’s linguistic ability and future research": "In this study, the assessment of learners’ linguistic ability was limited to analyzing the learners’ produced utterances in a single short conversation. Also, it was analyzed with the online tool TextInspector, which was primarily designed for evaluating writing tasks rather than textual conversation. While this provides some insight into their language proficiency, a more comprehensive assessment of learners’ language proficiency could offer a deeper understanding of how it influences their preference toward different feedback strategies. Future research should consider incorporating additional measures to evaluate learners’ language proficiency comprehensively. This could involve utilizing standardized tests for receptive and productive skills and conducting detailed assessments of vocabulary, grammar, and discourse abilities.\n7.2 Effect of participants’ language proficiency\nIn this study, survey data were collected from participants capable of engaging in a conversation about travel with at least 12 turns from each side. Participants without the ability to meet this requirement were automatically excluded and did not complete the post-survey. Previous research (Van Beuningen et al., 2012) indicates that learners with limited proficiency may prefer explicit corrective feedback, as they may face challenges in independently arriving at correct answers. However, it should be noted that due to the inherent study design, some learners with limited proficiency might not have been included in the sample.",
|
| 26 |
+
"7.3 Effect of the GEC model performance": "During the experiment, there were no existing GEC (Grammar Error Correction) models specifically designed for conversational grammar errors. As a result, we developed our own GEC model using a small dataset of GEC dialogues. To enhance the performance of the GEC model in future iterations, we are actively working on collecting additional conversational GEC datasets. By incorporating more diverse and extensive data, we aim to improve the accuracy and effectiveness of the GEC\nmodel. The enhanced performance of the GEC model is anticipated to have an impact on the effectiveness of different feedback strategies. A more proficient GEC model could potentially yield better user experiences, resulting in higher intentions to use the system. The availability of improved GEC capabilities will enable more precise and tailored feedback, enhancing the overall effectiveness of the system.",
|
| 27 |
+
"7.4 Effect of different feedback strategies": "In this study, all feedback strategies used were interruptive, potentially disrupting the conversation flow. However, learners with higher linguistic ability may prefer fewer interruptions, such as preferring no self-correction than self-correction. Additionally, it is important to acknowledge that individual learners may have different preferences and learning styles. To address this, future systems could consider non-intrusive feedback strategies. For example, grammar errors could be highlighted with a background color, and optional metalinguistic explanations could be provided on-demand. This allows learners to access guidance without forcefully interrupting the conversation, catering to their preferences and maintaining a smoother learning experience.",
|
| 28 |
+
"A Supplementary Materials": "The detailed experiment results related to this paper are available in the following GitHub repository: https://github.com/KaihuiLiang/chatback_gec_feedback\nIn the following sections, we have selected the most critical aspects of these results for a concise understanding.",
|
| 29 |
+
"F Survey constructs": "Table 9 shows all survey questions and references.\nNegative emotions For negative emotions towards feedback, we measured users’ negative emotions, specifically their levels of frustration and annoyance when receiving immediate corrections during the conversation. Our hypotheses were that users would experience fewer negative emotions in two scenarios: 1) when receiving corrections from the GUI, which is a separate role from the chatbot; and 2) when not required to correct themselves.\nSelf-efficacy Regarding self-efficacy, we measured the level of self-efficacy that users gained after the conversation, specifically their confidence in their grammar skills and their ability to express ideas in English conversations. Our hypotheses were that users would experience a greater increase in self-efficacy when: 1) corrections were given through the GUI, which would provide a less frustrating experience; and 2) they were given the opportunity for guided self-correction, allowing them to actively participate in the learning process and gain a better understanding of their mistakes.\nUsefulness For usefulness, we measured the level of perceived usefulness of the grammatical CF by users. Our hypothesis was that guided self-correction would be perceived as more useful than without.\nEnjoyment Regarding enjoyment, we measured the level of enjoyment that users experienced while conversing with the chatbot. Our hypothesis was that receiving grammatical correction feedback from the GUI would be more enjoyable than from the chatbot, as the interruptive feedback would be given from a separate role rather than the conversation partner. Additionally, we hypothesized that higher proficiency\nlearners would find having a conversation without guided self-correction more enjoyable, as they would require less self-correction and experience fewer interruptions.\nIntention to use Lastly, we asked users if they intended to use the system again, using one item that was reverse-coded for a sanity check. Our hypothesis was that users would have a higher intention to use the system if they experienced less negative emotion, gained more self-efficacy, perceived the system as more useful, and enjoyed the conversation more."
|
| 30 |
+
}
|
ACL_23_no_limitation/ACL23_1214.json
ADDED
|
@@ -0,0 +1,19 @@
| 1 |
+
{
|
| 2 |
+
"File Number": "1214",
|
| 3 |
+
"Title": "Evaluating Classroom Integration for Card-it: Digital Flashcards for Learning Italian Morphology",
|
| 4 |
+
"abstractText": "This paper presents Card-it, a web-based application for learning Italian verb conjugation. Card-it integrates a large-scale finite-state morphological (FSM) analyzer and a flashcard application as a user-friendly way for learners to utilize the analyzer. While Card-it can be used by individual learners, to support classroom adoption, we implemented simple classroom management functionalities such as sharing flashcards to a class and tracking students’ progression. We evaluated Card-it with teachers of Italian. Card-it was reported as engaging and supportive, especially by featuring two different quiz types combined with a verb form lookup feature. Teachers were optimistic about the potential of Card-it as a classroom supplementary tool for learners of Italian as L2. Future work includes sample sentences and a complete learners evaluation.",
|
| 5 |
+
"1 Introduction": "Learning verb morphology plays a crucial role in the acquisition of morphologically rich languages (Slabakova, 2009), such as Italian and French. Thus, learners of Italian deal with the acquisition of a rich system of verbal inflections (e.g., Pizzuto and Caselli, 1994). Explicit morphological instructions and training have been shown to help students on acquiring new words as well as to improve their syntactic knowledge (Chen and Schwartz, 2018; Mobaraki and Jahromi, 2019). Similarly, raising meta-linguistic awareness improves the learners’ production and competence in second language (L2) acquisition (Heift, 2004; Kieseier et al., 2022). To support learners of Italian as L2, we designed, implemented, and evaluated Cardit with the help of experts: teachers of Italian as a foreign language. Card-it fosters meta-linguistic knowledge when presenting linguistic information on the analysis of verb forms (i.e., for the verb mangiare (to eat) “Prima Persona Singulare Presente Indicativo” → (io) mangio) along with additional\nexplanations of linguistic categories related to verb morphology that are displayed on demand. In addition, meta-linguistic information is also used to present corrective feedback (see Sec. 4.2).\nCard-it is an online application for teachers and learners of Italian to create collections of digital flashcards – based on a semi-automatic approach – with which they can study and test themselves on verb morphology explicitly. Our choice for using a digital flashcard design reflects a traditional way of learning vocabulary explicitly, which has been shown to be a successful learning method that is perceived well by students (Yüksel et al., 2022). While some flashcard systems may support verb morphology with pre-defined cards and modules, they do not allow for the customization of cards or decks (e.g., Memrise1). Other systems support custom card collections, but they require manual input of the card information (e.g., Anki2). Yet, these systems do not enable teachers to track and analyze their students’ progress over time. In addition, Card-it’s learner-centred design embeds corrective feedback, meta-linguistic information, and different study modes.\nThis paper introduces the system’s architecture, the FSM implementation, and Card-it’s iterative design and features. Lastly, we report the results of a brief evaluation with Italian teachers which indicates Card-it’s potential for their classroom and outlines our future steps towards a learners evaluation.",
|
| 6 |
+
"2 Related Work": "Traditionally, Natural Language Processing (NLP) tools like an FSM are a component of larger pipelines, for example, as a tokenizer (e.g., Jurafsky and Martin, 2009). As a result, using these tools is often not intuitive or easy for users unfamiliar with NLP. However, since these tools can\n1https://www.memrise.com/. Accessed 05-2023. 2https://www.ankiapp.com/. Accessed 05-2023.\n130\nwork with text, NLP has become an integral part of the field of Computer-Assisted Language Learning (CALL), with several systems using NLP tools in a language-learning context. Examples include E-Tutor (Heift, 2010), an intelligent tutoring system for learners of German that is fully incorporated into the German curriculum at Simon Fraser University; TAGARELA (Amaral and Meurers, 2011), a system for Portuguese that includes exercises on vocabulary; and FeedBook (Meurers et al., 2019), an intelligent tutoring system for English that can be fully integrated into regular classes.\nSimilarly, Google-Assisted Language Learning (GALL), corpus-based or data-driven learning (DDL) are increasing in popularity as language learning tools (Conroy, 2010; Pérez-Paredes, 2022). While GALL refers specifically to learners using tools provided by Google, both GALL and DDL happen when learners take advantage of online access and text processing power to use corpus tools, such as dictionaries and linguistic corpora.\nFurthermore, Yoon (2016) verified that DDL was an effective cognitive tool for helping people with their lexical and grammatical problems while dealing with concordance tasks; for example, learning frequent word pairs such as to take instead of to eat a [medicine] pill. However, he suggests that some of the available resources are not user-friendly and difficult to use, such as functions for linguistic resources applied for stemming. That said, Card-it’s design uses a learner-centred approach with teacher support features; it provides a user-friendly interface to leverage an FSM to power a semi-automatic generation of flashcards that can be used to study and self-assess Italian verb conjugations. Related to using FSM in Card-it, Kaya and Eryiğit (2015) used a Finite-State Transducer to power a Turkish word synthesis system and a word-level translation system between Turkish and English. Another example is the ICALL system for two Saami languages that is based on Finite-State Transducers (Antonsen et al., 2013).",
|
| 7 |
+
"3 Card-it: System Architecture": "Card-it is a web-based application consisting of two components: back-end and front-end.\nBack-end: The FSM Analyzer. The main component of the back-end is our FSM, containing over 5000 verb lemmata and their conjugations Beesley and Karttunen (2003). It was created by extracting verb roots from free resources, the Morph-it! lexi-\ncon by Zanchetta and Baroni (2005) and the online dictionary provided by one of Italy’s leading news magazines, Corriere della Sera3. FSMs are usually part of a text processing pipeline within NLP tools. Here, we leveraged our FSM as a dynamic form generator and analyzer in a language-learning context. The FSM ties a verb form to its linguistic analysis: it may analyze a verb form and return its linguistic tags (analysis) or generate a verb form given its linguistic tags (generation) – see Fig. 1.\nIn our case, the FSM consists of a lexicon that contains verb stems, their inflectional paradigms and the appropriate morphological analysis. The lexicon of the FSM creates all verb forms following the regular pattern of concatenating stems with their respective inflectional endings. With the use of regular expressions the FSM is able to manipulate those regular forms of the lexicon on the basis of phonological rules. For example, some forms require the insertion of an -h to retain certain pronunciation patterns. Consider the verb mancare (“to miss\"): the regular inflection paradigm in the lexicon creates the incorrect form manci (“you miss\"), for the second person singular present indicative. However, to retain the correct pronunciation, the correct form is manchi. Whenever the FSM is run, it first creates all forms in the lexicon and then applies regular expressions to manipulate these forms based on phonological rules of the language. This architecture allows us to build a powerful and large morphological resource since it automatically creates verb forms on the basis of their stems. If we were to add new verbs to our tool, it simply requires to manually add verb stems into the FSM lexicon.\nVerbs generated by the FSM, user accounts, flashcards and classroom organization are stored in a MySQL database. A Flask middleware is responsible for querying changes users request from the front-end. These changes are related to flash-\n3https://dizionari.corriere.it/ dizionario_italiano/. Accessed 05-2023\ncard, classroom, and account organization. The main advantage of this back-end architecture is to scale the system for multiple users simultaneously; this integration approach has been taken by others (de Bernardinis et al., 2015). A set of Python scripts are responsible for parsing and updating the database with any changes to the FSM; currently, these updates are triggered manually whenever the list of verbs or morphology is altered.\nFront-end: User Interface. The user interface front-end of Card-it is developed with React.js. The main function of the front-end is the flashcard design for users to study and be assessed from. Sec. 4 explains Card-it’s digital flashcards design and interaction.",
|
| 8 |
+
"4 Card-it Design and Features": "Card-it can be used for autonomous learners who may interact with the app to study Italian conjugations on their own. In addition, Card-it can also be integrated by teachers in the classroom. In either case, learners interact with verbs and conjugations via digital flashcards.",
|
| 9 |
+
"4.1 Grouping and Organizing Flashcards": "The flashcards reflect a traditional way of language learning. Particularly, the flashcard design reflects both directions of the FSM: one side of the card contains a verb form, the other its linguistic attributes (compare Figs. 1 and 2); learners may choose which side they want to use for studying.\nFlashcards can be organized in decks; decks can be organized in collections. Both learners and teachers can organize flashcards according to their learning or teaching needs. For example, a teacher may create collections for different language classes: in a collection “Italian for Beginners”, the teacher may add a deck for present tense only, another for past tense(s), and so on.\nUsers can create decks of cards by searching the database for specific verbs and filtering values for the categories tense, mood, number, and person.\nAlternatively, if no value is chosen, Card-it returns all forms for that category. E.g., one may search the verb amare “to love”, selecting the values present tense and indicative mood, but selecting none for person and number. Card-it returns 1st, 2nd and 3rd person singular and plural forms of amare, where each form is a flashcard. Users can select any flashcards they want to add to a specific deck.\nThe knowledge of the underlying linguistic concepts benefits the acquisition of a new language Heift (2004). Therefore, we made the decision to include the morphological attributes in the application to raise meta-linguistic awareness. Card-it also offers a page with definitions and explanations of all the terms used (i.e., “What is tense?”).",
|
| 10 |
+
"4.2 Studying and Self-Assessing": "Card-it offers different study modes and ways to interact with its flashcards.\nStudying with Card-it. One way is to use the flip card functionality, where Card-it presents the user with one side, and the learner can think about the content on the corresponding side. When hovering the mouse over the card, the flashcard flips to its other side, and learners can check their answer. Another mode is conjugation. Here, the flashcard presents the user with the infinitive form of a verb, a tense/mood combination, and personal pronouns for number/person configurations and prompts the user to type in the corresponding verb form. If wrong, the system returns the corrected answer as seen on the left side of Fig. 3, showing the “conjugation” study mode, with corrective feedback.\nSelf-assessment and corrective feedback. For testing, Card-it has two different types of quizzes, called Identify Tense, Conjugate, and a third\nMixture, a random mix of tasks from the other two types. While “Conjugate” corresponds to the above-described study mode prompting the user to type in the corresponding conjugated form, in quiz mode, it additionally contains a “Hint” button that displays multiple choice options when used (Fig. 4), otherwise hidden by default.\nStudies have shown the importance of informative feedback for a positive learning trajectory as it helps learners to understand the nature of their mistakes and to improve in the future (e.g., Heift, 2004). Card-it returns informative feedback to the learner by checking whether their incorrect answer corresponds to another morphological analysis and returning that information to them, see Fig. 3. The second quiz type, “Identify Tense”, presents learners with a specific verb form, asking them to select its respective tense (Fig. 4). All quiz types may be used for self-assessment or as classroom activities.",
|
| 11 |
+
"4.3 Classroom Management and Analytics": "To enable classroom and teacher support, we focused on 3 main tasks. The tasks supported in this category are (1) creating classrooms and generating a unique code that is shared with students allowing them to join it, (2) sharing specific collections to one or multiple classrooms, and (3) tracking the progress of students enrolled in the classroom.\nAfter students join the classroom using the code, they can explore all collections and decks their teacher shares. Similarly, students have access to both studying and quiz modes for all decks in the classroom. Teachers can access statistical information on the students’ progress with the classroom decks. Teachers can analyze individual attempts for each student with a breakdown of correct and incorrect answers. Alternatively, teachers can see average scores per attempt for the entire group; and analyze the class’ progress over time. Lastly, Cardit shows the number of correct attempts for each card in a deck. Thus, the teacher can pinpoint the specific cards students had the most trouble with.",
|
| 12 |
+
"5 Evaluation": "We took an iterative design approach for implementing Card-it, where we performed a preliminary expert evaluation (N = 2) with teachers of Italian at the Institute of Speech and Language at our university with an earlier version of the application. Based on this preliminary evaluation, we determined the fitness of the flashcards and the\nquiz formats and iterated over the application. The teachers responded positively to Card-it as a digital version of their current classroom practices, such as verb conjugation worksheets. We also learned that Card-it could be adopted as a supplemental tool to the classroom, which led us to implement the classroom features. The following section describes our second expert evaluation.",
|
| 13 |
+
"5.1 Card-it Expert Evaluation": "After implementing changes to reflect the feedback from the early preliminary evaluation; we reached out to Italian teachers via our professional networks. In total, 9 teachers from 2 institutions in Germany were invited to participate. Of those, 5 volunteered, but only 3 completed the study. Participants were teachers of Italian language courses; after receiving the study instructions, they had 14 days to follow to complete all steps remotely, then compensated with a $20 Amazon gift card.",
|
| 14 |
+
"5.1.1 Methodology": "We ran our expert evaluation remotely, which allowed us to provide flexibility to participants to complete the study. Participants were asked to follow three steps to complete the study: (1) Watch a recorded video demo of Card-it’s main features; (2) Explore Card-it on their own using both teacher and student account types; (3) Answer a survey questionnaire about their experience using Card-it. In the survey, we asked 5-Point Likert Scale questions on general usability, the potential for classroom adoption, and specific questions on different features such as studying and testing modes. We also asked experts to answer a section where they give their opinions from a student perspective.",
|
| 15 |
+
"5.1.2 Results and Discussion": "The system’s usability was rated positively, with two experts selecting easy and one expert very easy. All experts rated both quiz types, “Conjugate” and “Identify Tense”, as either appropriate or very appropriate. One expert mentioned the quizzes were their favourite features. When asked to rate the classroom management usability, two chose good and one very good. As a follow-up, we asked them about the steps to create a classroom: one expert found it difficult, and the others easy. They all mentioned that they could foresee themselves using Card-it for homework in their classes or as a tool for students to self-study at home. When asked to take on a student’s perspective, they all rated the\nquiz and verb look-up features of Card-it most useful. Yet, they suggested including translations and example sentences containing the individual verbs as it would be useful for students and teachers’ perspectives.",
|
| 16 |
+
"6 Future Work and Conclusion": "This paper discussed the power of the adequate use of NLP tools in language learning, including designing appropriate interfaces. We presented Card-it as a user-friendly app for learning Italian verb conjugation using digital flashcards; we also described Card-it’s classroom management and analytics features (more details in App. A). Lastly, we discussed our iterative approach to design, which combined expert evaluations between iterations of Card-it. The results of the expert evaluation show that according to their expertise, Card-it is an appropriate conjugation tool for autonomous learning and for classroom integration as a supplementary resource. Card-it’s usability and different quiz functions were positively evaluated. Nonetheless, we also learned that Card-it might be further improved by adding example sentences. The most promising result from the evaluation is the experts’ expression of interest in using Card-it in their classrooms.\nDespite asking for the experts’ perspectives as students, it would be more reliable to run a user study with learners of Italian as a second language. We are designing a remote longitudinal study with 3 weekly sessions. At the end of each session, participants are invited to submit a Card-it quiz and a short usability survey. We also plan on testing their knowledge of a set of verb conjugations before and after their study period of 3 weeks. Other future directions may include gamification of Cardit’s quizzes and quiz modes that can support live classroom exercises such as Kahoot (Dellos, 2015).",
|
| 17 |
+
"Acknowledgements": "We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC).",
|
| 18 |
+
"A Classroom Management": "Fig. 5 shows an example classroom with two collections and the entry code for students to join the classroom:\nFig. 6 shows the statistical overview of students’ performance in a quiz. Teachers may filter for a specific collection (here: Presente Indicativo), deck (here: Regular Verbs) and quiz type (here: Conjugate). Additionally, teachers see the score for each student:\nFig. 7 illustrates how teachers can check on the groups’ performance on every single card, sorted from the least correct to the most correct:\nTeachers may select one particular student to get detailed information on their performance, as in Fig. 8:\nFig. 9 shows the same example classroom as in Fig. 5 but from the students’ perspective. Here, students can select one of the three quiz types or scroll down for study mode:"
|
| 19 |
+
}
|
ACL_23_no_limitation/ACL23_1215.json
ADDED
|
@@ -0,0 +1,14 @@
| 1 |
+
{
|
| 2 |
+
"File Number": "1215",
|
| 3 |
+
"Title": "Scalable and Explainable Automated Scoring for Open-Ended Constructed Response Math Word Problems",
|
| 4 |
+
"abstractText": "Open-ended constructed response math word problems (\"math plus text\", or MPT) are a powerful tool in the assessment of students’ abilities to engage in mathematical reasoning and creative thinking. Such problems ask the student to compute a value or construct an expression and then explain, potentially in prose, what steps they took and why they took them. MPT items can be scored against highly structured rubrics, and we develop a novel technique for the automated scoring of MPT items that leverages these rubrics to provide explainable scoring. We show that our approach can be trained automatically and performs well on a large dataset of 34,417 responses across 14 MPT items.",
|
| 5 |
+
"1 Introduction": "Math word problems are a common question type in both formative and summative mathematics assessment. In a math word problem, the prompt describes a scenario and asks the student to calculate some value or construct some mathematical expression pertaining to that scenario. Such problems assess both the student’s ability to carry out mathematical computation and reasoning as well as their ability to apply their knowledge in determining how to solve a mathematical problem.\nAutomated assessment of closed constructedresponse (CR) math problems is straightforward, although complexities arise due to the variety of possible representations for a given mathematical expression. Examples of automated assessment systems for closed CR items include m-rater (Fife, 2017) and MathQuery (Streeter et al., 2011). In contrast, open-ended CR math problems are difficult to automatically score, since responses to open CR items combine mathematical expressions with prose explanations. And if a problem asks students to both compute a value and explain their computation, that introduces the complexity of partial\ncredit; in the dataset we consider in this work, score ranges for items vary between 0–2 and 0–4. Even for humans, these sorts of items, with partial credit and open-ended responses, are time-consuming to score (Stankous, 2016).\nAutomated assessment of CR items outside of mathematics is now common, thanks to the achievements of researchers in the areas of Automated Essay Scoring (AES) and Automated Short Answer Scoring (SAS). The reliability of AES systems is often comparable to that of humans (Shermis and Burstein, 2003, 2013), and the same is true for SAS systems (Butcher and Jordan, 2010). Given that MPT items are themselves CR items, this suggests that such approaches could also be used for MPT; research in this area is promising, but sparse (Erickson et al., 2020; Cahill et al., 2020).\nHow mathematical expressions are encoded in response text is a key attribute of a given MPT dataset. In this work, we use data generated by a writing environment that allows students to enter mathematics using a math editor tool. Any math written in this tool is represented in the final response text as Content MathML (an XML-based specification for the representation of mathematics). As students can also write math outside of the math editor, the dataset that we consider in this work contains math represented both in MathML and in plain text, often within the same response.\nGiven this set of challenges, our interest is in creating an explainable predictive model for MPT. Such a model would be able to differentiate, for example, between a response that received a 1 out of 3 because it contained the correct final answer without showing work, and a response that received a 1 out of 3 because it contained correct reasoning but incorrect computations. A model that successfully achieved this would be useful both for students, as they would better understand why their responses received their assigned scores, as well as for test administrators, as the explanations would build trust\n137\nin the validity of the model’s scoring. This paper is structured as follows. We begin with a discussion of related work and a detailed description of our task. We introduce a novel scoring model that uses the rubric’s structure to provide explainable scoring for MPT, and show how our model can be automatically trained. 
We then present experimental results that show the effectiveness of our approach, and conclude with a discussion of the present and future work.",
|
| 6 |
+
"2 Related Work": "There is a substantial literature around the automated scoring of non-mathematical CR items. Work on AES dates back to the 1960s (Page, 1966), and modern-day AES systems involve a wide variety of approaches, including linear regression (Larkey, 1998), random forests (Hellman et al., 2019), and neural networks (Taghipour and Ng, 2016; Dong et al., 2017; Riordan et al., 2017). Short answer scoring is also relevant, as our MPT responses tend to be only a few hundred characters long. For SAS, many systems involve paraphrase detection, or some similar notion of semantic similarity to reference answers (Leacock and Chodorow, 2003; Tandalla, 2012; Ramachandran et al., 2015; Kumar et al., 2017).\nWhile much work has been done on AES and SAS, as well as around the automated solving of math word problems (e.g. Kushman et al. 2014; Huang et al. 2016; Wang et al. 2017; Xie and Sun 2019), work around the automated scoring of math word problems is more limited. Livne et al. demonstrate a system that successfully uses instructorprovided reference answers to automatically score responses to closed CR math word problems (Livne et al., 2007). Lan et al. present a system that predicts scores by embedding multi-step math responses using a bag-of-expressions model, a bagof-words approach designed to capture mathematical features (Lan et al., 2015). Once embedded, they use a combination of clustering and limited human scoring to score all responses. However, while their items were open CR math word problems, any prose in student responses was ignored by the scoring system.\nSome systems do attempt to grapple with the full complexity of open CR math word problems. Kadupitiya et al. present a system that can score CR math word problems for summative assessments whose responses contain both prose and\nmath (Kadupitiya et al., 2017). Their system assumes that all math is encoded as MathML, and prose is handled by estimating the semantic similarity of response phrases to known reference phrases. Erickson et al. (Erickson et al., 2020) investigated the effectiveness of random forests, XGboost, and LSTMs for scoring formative open CR math problems with only plain text responses, and follow-up work has shown that transformer-based approaches can also perform well on this task (Baral et al., 2021; Shen et al., 2021).\nAs mentioned above, we expect that many realworld MPT datasets will include responses that contain math represented both as plain text and as MathML. To the best of our knowledge, Cahill et al. is the only published work that attempts to score these sorts of responses (Cahill et al., 2020). In their work, they extract plain text math from student responses using regular expressions, and then use the m-rater (Fife, 2017) math scoring system to evaluate the correctness of this extracted math. They then build a feature space that includes binary features indicating whether certain rubric elements were covered by the student response. By training machine learning models on this feature space, they create models with interpretable features. This process requires knowledge of the rubric during training. Our work differs from Cahill et al. in that the model that we introduce relies only on features that are aligned with the rubric, and produces scores that are inherently explainable. Furthermore, it requires no knowledge of the rubric during training. We also evaluate our approach across a wider variety of items with more responses per item.",
|
| 7 |
+
"3 Open Constructed Response Math Word Problems": "The dataset we use in this work is proprietary, so we have adapted an item from the GSM8K dataset 1 (Cobbe et al., 2021) as an illustrative example, shown in Table 1. In this example, the prompt establishes a scenario and asks the student to compute a value related to that scenario. The rubric defines three binary components that a response can achieve, which defines the score range for this item to be from 0 to 3. Finally, the example response shows a typical mixing of MathML and prose.\n1Dataset located at https://github.com/openai/ grade-school-math/tree/master/grade_ school_math/data.\nWe are focused on word problems that ask the student to construct some mathematical equation and/or compute some number, as well as to provide the work and reasoning that they used in coming to their answer. For some items, this explanation is required to be prose, while for others the chain of mathematical expressions that led to the answer can suffice.\nEach item has a rubric composed of some number of computation, modeling, and reasoning components, each of which is worth one point. Computation components generally refer to the presence of a correct final answer, modeling components to showing the correct mathematical derivation of the final result, and reasoning components to an explanation of why those steps were taken. A given rubric may not include all three of these components, and may also define multiple components of a given kind. The final score of a response is the sum of these binary component scores. Note that even if a rubric does not require a prose explanation, the student may still include prose in their final response.\nThe characteristics of the dataset used in this work are shown in Table 2. A critical aspect of our dataset is that MPT problems are, in general, quite difficult for students to answer correctly. For some items, more than 70% of student responses received a score of 0. This is an expected feature of our dataset, as math word problems are known to be substantially harder for students to solve than conventional math problems (Cummins et al., 1988).\nStudent responses are written in an environment\nthat supports the entry of both plain text in a conventional text field and of math via a math editor. Critically, arbitrary text input is allowed in the math editor, to support the presence of variables in the student answer. While the expectation is that students will use this math editor to write the relevant mathematical expressions, and write the rest of the response outside of the math editor, in practice students often write prose inside of the math editor and math expressions outside of the math editor. Thus, we cannot look only at the MathML in a response to identify the mathematical statements produced by the student, and we cannot look only at the plain text to identify their explanations and supporting arguments. Because of this, we believe the best way to score MPT responses is by converting them to a normalized form.\nThis normalization process consists of three steps: first, we convert mathematical terms in the response into their symbolic equivalents, e.g. \"eight\" to \"8\", or \"plus\" to \"+\". Next, we need to account for prose written in the math editor. We identify MathML containing chains of variables being multiplied together that appear to spell out English words. 
When such a chain is found, it is removed from the MathML and converted to plain text by preserving the order of the variables and removing the multiplication operators. This replaces the variables in the MathML by their corresponding plain text word. Finally, we transform all remaining MathML into plain text by taking the in-order traversal of the expression tree defined by the MathML.",
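A highly simplified sketch of these three steps is shown below. The word table is a tiny stand-in, and the MathML handling assumes a flat presentation-style row rather than full Content MathML, so this illustrates the idea rather than the authors' implementation:

```python
import re
import xml.etree.ElementTree as ET

# Step 1: spelled-out math terms -> symbols (toy fragment of the table).
WORD_TO_SYMBOL = {"eight": "8", "two": "2", "plus": "+", "times": "*", "equals": "="}

def normalize_words(text: str) -> str:
    pattern = r"\b(" + "|".join(WORD_TO_SYMBOL) + r")\b"
    return re.sub(pattern, lambda m: WORD_TO_SYMBOL[m.group(1)], text.lower())

# Step 3 (simplified): in-order traversal of a flat MathML row to plain text.
def mathml_to_text(mathml: str) -> str:
    root = ET.fromstring(mathml)
    return " ".join(el.text for el in root.iter() if el.text and el.text.strip())

print(normalize_words("Eight plus two equals ten"))                   # 8 + 2 = ten
print(mathml_to_text("<mrow><mn>8</mn><mo>+</mo><mn>2</mn></mrow>"))  # 8 + 2
```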
|
| 8 |
+
"4 Explainable Scoring": "As outlined in Section 3, the rubrics for our MPT items are highly structured. We leverage this structure to create a new approach to the automated scoring of MPT items by essentially codifying the rubric in a machine-understandable way. The close alignment of our model with the rubric produces predictions that are inherently explainable.\nRules form the core building block of our approach. Rules encode short mathematical expressions and the transformations required to convert them into other lexically distinct but semantically identical forms. For example, a rule encoding \"2 + 3\" could generate \"3.0 + 2\" as an alternative form. These alternate forms account for different mathematical properties, principally commutativity and conversion between floats and integers (for whole numbers). To account for variables, we also allow single letters to serve as operands in our expressions.\nTo determine if a rule is present in a student response, we first extract all mathematical text from the normalized text of the response. This is to prevent superfluous words from obscuring the underlying mathematics. See Figure 1b for an example. Then, if any of the forms of a rule are present as a substring of the extracted math, that rule is considered to be present in the response.\nThe amount of prose in a response is highly itemdependent. To account for items where prose is important, we also include the ability to write regu-\nlar expressions as rules. Such a rule is found in a response if its constituent regular expression has at least one match in the response.\nAssembling these rules into a form that can automatically score responses is done as follows. We define a group to be a list of rules, and we consider a group to be present in a response if any of its constituent rules are present. This allows us to capture mathematics that are equivalent under the rubric but not captured by the lexical transformations of our rules, for instance, \"2 * 16\" and \"16 + 16\" could be two valid ways of writing an expected expression.\nWe then create evidence out of these groups. Evidence is a list of groups, and we consider evidence to be present in a response if all of its constituent groups are in the response. This allows us to capture rubric elements that require the student to cover multiple areas. For example, if a student needs to show two distinct values to achieve a Computation component, we can capture this notion by constructing evidence with two groups, one for each of those two distinct values.\nFinally, to mirror the structure of the rubric components, we collect evidence into scorable traits. A scorable trait contains lists of positive and negative evidence. If any positive evidence and no negative evidence is present in a response, then the scorable trait scores a 1. Otherwise, it scores a 0. We include this concept of negative evidence to account for misconceptions and other incorrect mathemat-\nics that can prevent a student from receiving full credit on a rubric component. For example, if an item asked the student to compute 4 divided by 2, the student could incidentally compute the correct value by subtracting 2 from 4.\nWe construct a number of scorable traits corresponding to the number of components in the rubric, and the final predicted score for a response is the sum of the individual binary trait scores. 
Because we know exactly which rules, groups, evidence, and scorable traits were found or not found when scoring, we can automatically construct an explanation of our predicted scores. See Figure 1 for an example of a scorable trait and the score and explanation it produces.",
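The rule/group/evidence/trait semantics reduce to nested any/all checks over substring matches. A minimal self-contained sketch (the class names mirror the paper's terminology; everything else is illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Rule:                      # present if any surface form is a substring
    forms: list
    def present(self, math: str) -> bool:
        return any(f in math for f in self.forms)

@dataclass
class Group:                     # present if ANY rule is present
    rules: list
    def present(self, math: str) -> bool:
        return any(r.present(math) for r in self.rules)

@dataclass
class Evidence:                  # present if ALL groups are present
    groups: list
    def present(self, math: str) -> bool:
        return all(g.present(math) for g in self.groups)

@dataclass
class ScorableTrait:             # 1 if some positive and no negative evidence
    positive: list
    negative: list = field(default_factory=list)
    def score(self, math: str) -> int:
        found = any(e.present(math) for e in self.positive)
        bad = any(e.present(math) for e in self.negative)
        return int(found and not bad)

# One computation trait crediting either "2 * 16" or the equivalent "16 + 16".
trait = ScorableTrait(positive=[Evidence([Group([Rule(["2 * 16", "16 * 2"]),
                                                 Rule(["16 + 16"])])])])
print(trait.score("16 + 16 = 32"))  # 1
```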
|
| 9 |
+
"5 Automated Discovery of Rules": "Given the hierarchy of rules, groups, evidence, and scorable traits described above, one approach to developing a scoring model would be to define all of these elements manually. While manually constructed models perform well (per our experiments below), requiring manual effort to construct a scoring model prevents the adoption of this approach at any scale larger than a small handful of items. Thus, we would like to automate this process. However, our model is not differentiable, so approaches such as stochastic gradient descent can not be used.\nSimulated annealing is a highly flexible opti-\nmization technique that makes few assumptions about the objective function being optimized (Kirkpatrick et al., 1983). When applied to our modeling task, simulated annealing maximizes the performance of a model by iteratively adding or removing rules. If a change increases the model’s training set performance, we keep it. Otherwise, the change is stochastically accepted with a probability based on a temperature variable and the difference in performance between the new and previous states. As the procedure continues over many iterations, the temperature is slowly reduced according to a cooling schedule. The result of this is a process that initially makes many random changes, but that tends towards only making changes that maximize the performance of the model as the temperature decreases.\nIn practice, we evaluate the performance of our models using both accuracy and the unweighted average recall (UAR), and so we optimize against both of these metrics during the annealing process. That is, our goal is to maximize the following function:\nS(θ) = λ ∗UAR(ŷθ) + (1− λ) ∗Acc(ŷθ)\nwhere θ corresponds to the model parameters, i.e., the rules, groups, and evidence of the model, ŷθ to the predictions of the current model on the training set, and λ is a hyperparameter that controls the\nrelative importance of UAR versus accuracy. To use simulated annealing, we must define the ways in which an existing model can be altered to generate a new model. We begin by building a set of candidate rules. Candidate math expressions are generated by identifying sequences of alternating operands and operators in the math extracted from a response. In this work, we consider sequences of up to six operands. Once these expressions have been identified, we rank them according to their information gain. We keep the top n expressions as our set of candidate rules for use in annealing.\nWhen humans craft manual rules, they are able to write regular expressions. Automatically determining useful regular expressions in full generality is beyond the scope of this work, but providing our automated rules with some ability to reason about prose writing is important. For this reason, we consider all words in the responses, again rank by information gain, and then keep the top m as regular expression rules (that ultimately will match if the given word is present in the response).\nWhen annealing our rules, we allow for four transformations:\n1. Add a rule to a group.\n2. Remove a rule from a group.\n3. Replace a rule with a new rule.\n4. Move a rule from one group to another group.\nWe initialize our model to have a number of scorable traits equal to the maximum score for the item, and create a user-defined number of empty evidences and groups for each trait. To improve final model performance, we use random restarts during training. 
That is, we perform k simulated annealing runs, and keep the model with the best training set performance as our final trained model.\nTo avoid overfitting to our training data, we also include two regularization terms in our objective function. The first term, R(θ), penalizes the model by the total number of operands used by all rules. The second term, E(θ), penalizes the model for the number of non-empty evidences used by the model. Our final objective function is\nS′(θ) = S(θ) + γ ∗ (α ∗R(θ) + β ∗ E(θ))\nwhere α, β, and γ are hyperparameters that control the relative and overall regularization strength.",
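To make the annealing procedure above concrete, here is a minimal sketch of the accept/reject loop it describes, under stated assumptions: `score_fn` computes the (regularized) training-set objective S′(θ), and `transformations` implements the four rule edits. The function and parameter names are illustrative, not the authors' code.

```python
import math
import random

def anneal(initial_model, transformations, score_fn, t0=1.0, cooling_rate=0.7,
           iters_per_temp=1000, n_temps=25):
    """Minimal simulated-annealing loop over rule-based scoring models.

    `transformations` is a list of callables that each take a model and
    return a randomly perturbed copy (add/remove/replace/move a rule).
    `score_fn` evaluates a model on the training set (higher is better).
    """
    current = best = initial_model
    s_current = s_best = score_fn(initial_model)
    temperature = t0
    for _ in range(n_temps):
        for _ in range(iters_per_temp):
            candidate = random.choice(transformations)(current)
            s_candidate = score_fn(candidate)
            delta = s_candidate - s_current
            # Accept improvements outright; accept regressions with a
            # probability that shrinks as the temperature cools (Metropolis).
            if delta > 0 or random.random() < math.exp(delta / temperature):
                current, s_current = candidate, s_candidate
                if s_current > s_best:
                    best, s_best = current, s_current
        temperature *= cooling_rate  # geometric cooling schedule
    return best
```

With the random restarts described in the text, one would call `anneal` k times and keep the model with the best training-set score.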
|
| 10 |
+
"6 Experiments": "To the best of our knowledge, there is no publicly available dataset that features open CR math word problems with a large number of student responses per item. For example, the GSM8k Dataset used in Table 1 has only one response per item. Therefore, we use our own proprietary dataset of MPT items for our experiments. This dataset consists of 14 items covering algebra, arithmetic, and geometry, targeting grade levels from fourth grade to high school. The scoring scales for these items range from 0–2 to 0–4. See Table 2 for detailed per-item information.\nOur primary goal is to evaluate the performance of our rules-based model, both with manually crafted rules and automatically learned rules. The manual rules used in these experiments were crafted by human experts, who were allowed to view only a randomly sampled subset of the responses for each item. Responses used in this way during rule creation were also used for hyperparameter search for the simulated annealing approach, but were excluded from the dataset used in the final experiments. The response counts in Table 2 correspond to the counts used in our final experiments.\nWe perform a grid search for the cooling rate, number of iterations to run annealing for, and the overall regularization strength γ. Our pool of candidate rules consists of the top 500 expressions and top 50 words. We spend 1000 iterations at each temperature, create 3 positive evidences and 1 negative evidence for each trait, allow up to 10 groups per evidence, and set α = 0.0025, and β = 0.01. We use a geometric cooling schedule, and perform 5 random restarts. These settings are based on values that were found to work well during initial development. We use 5 stratified and randomized train/test splits when performing this hyperparameter search, with 25% of the data in the test split.\nPrior work has found that traditional AES approaches can work well for MPT, such as random forests (Erickson et al., 2020) and recurrent neural networks (Cahill et al., 2020). For this reason, we compare our rules-based scoring to three other conventional approaches: fine-tuned DistilBERT (Sanh et al., 2019), character n-gram random forests, and word n-gram random forests.\nFor both random forest models, we use regression random forests with 100 trees, and 33% of the features considered at each split. We keep all n-grams that occur in more than 5% of documents\nand in fewer than 95% of documents. For character n-grams, we consider n-grams ranging from 3 to 6 characters long. For word n-grams, we consider n-grams ranging from 1 to 4 words. We use scikit-learn’s implementations of random forests and count vectorizers (Pedregosa et al., 2011).\nFor the DistilBERT model, we finetune all layers using the Adam optimizer. We use a learning rate of 2e-5, a weight decay of 0.01, and train for 4 epochs. The training data is further split into a final training set and an evaluation set; we evaluate model performance on the evaluation set after each epoch, and we evaluate our final test-set performance on the model that achieved the best evaluation set performance. DistilBERT uses wordpiece tokens (Wu et al., 2016) with a 512 token context window. All of our responses fit within this window; the longest response in our dataset is 501 tokens long. 
Our DistilBERT fine-tuning utilizes Hugging Face (Wolf et al., 2020).\nFor both random forests and DistilBERT fine-tuning, all hyperparameters not mentioned here were left at their default values.\nFor each item, we create 30 stratified and randomized train/test splits, with 25% of the data in the test split, and train and evaluate all models on these splits. We evaluate model performance using both accuracy and the unweighted average recall (UAR). In our operational scoring, poor performance at any scorepoint can rule out the use of a model, and UAR captures this by considering the impact of poor performance at rare and common scorepoints to be equivalent. For all regression models, we generate final score predictions by rounding the model output to the nearest whole number.",
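The random-forest baselines are fully specified above; a sketch of the character n-gram variant in scikit-learn, with toy stand-in data (the response texts and scores below are invented placeholders), might look like this.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Character n-gram variant; the word n-gram model described above would use
# analyzer="word" with ngram_range=(1, 4) instead.
char_rf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 6), min_df=0.05, max_df=0.95),
    RandomForestRegressor(n_estimators=100, max_features=0.33),
)

# Toy stand-ins for student responses and human scores.
train_responses = [
    "3 + 4 = 7, so the answer is 7",
    "I added 3 and 4 together and got 7",
    "the answer is 8 because 3 + 4 = 8",
]
train_scores = [2, 2, 0]

char_rf.fit(train_responses, train_scores)
# Regression output is rounded to the nearest whole scorepoint, as in the text.
predictions = char_rf.predict(["3 + 4 = 7"]).round()
```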
|
| 11 |
+
"7 Results and Discussion": "The results of our hyperparameter grid search for simulated annealing are shown in Figure 2. We see that performance is quite robust across all hyperparameter settings tested. Best performance is achieved by annealing for 25,000 iterations, with a cooling rate of 0.7 and a regularization strength of 0.5. These are the settings that we use for simulated annealing in the other experiments described in this section.\nThe mean UAR and accuracy of each model, averaging over all items and folds, is shown in Figure 3. Focusing on UAR, we see that the random forest using word n-grams performs noticeably worse than the other approaches. Character n-gram random forests and manually crafted rules perform well. Finally, we see that our annealing-based approach to automatically constructing rules performs slightly worse than the manually crafted rules, but slightly better than the DistilBERT model.\nWhen we compare accuracy trends, we see that our rules-based approaches perform no better than DistilBERT. This is due to performance at the lowest scorepoints - these tend to be common (and thus prominent in the calculation of accuracy), but the rules-based approaches tend to have slightly lower recall at the lowest scorepoint. This is not seen in the UAR figures because the rules-based models tend to perform slightly better on the higher (and rarer) scorepoints.\nIn Figure 4, we show the mean UAR of each model for all items. Our discussion here will focus on items 2 and 10; these items were chosen as examples where the annealing approach performs\nvery well and very poorly, respectively.\nFor Item 2, our annealing approach performs the best out of all models. This item describes the improvement in average speed of two athletes over the course of a training regimen, and asks the students to calculate at what week of training their average speeds will be equal. The rubric contains a computation component, requiring the students to calculate the correct week, and a modeling component, requiring students to show their work in calculating their answer. The annealing process successfully constructs evidence both for identifying when the correct answer is present, and for identifying work that supports that correct answer.\nIn contrast, for Item 10, the annealing process performs quite badly. Item 10 asks students to calculate the speed of a real car based on the performance of a scale model of that car. The rubric contains one computation component, for the correct final speed, as well as two modeling components, one for proper unit conversion and one for correctly scaling the speed to the full-size car. The manually crafted rules perform comparably to the character n-gram random forest for Item 10, indicating that is possible for our rules-based approach to perform relatively well on this item. However, our manual rules for this item make extensive use of regular expressions, both to capture information about units and to capture notions such as the student stating\nin prose that they multiplied by the scaling factor. These sorts of sophisticated regular expressions are not captured by our current candidate rule generation process.\nThe relatively lackluster performance of the DistilBERT model is surprising, given the dominance of transformer-based approaches in many areas of NLP. However, there is a substantial literature detailing how both recurrent and transformerbased neural models can struggle with mathematics (Huang et al., 2018; Cobbe et al., 2021; Hendrycks et al., 2021). 
This literature, in combination with our results here, suggests that fine-tuning off-theshelf neural models is not a particularly powerful approach for MPT scoring.\nIn light of these results, we conclude that our rules-based approach enables explainable automated scoring of MPT items without sacrificing performance, at the cost of requiring manual effort in designing the rules. However, we also have found that a simulated annealing-based approach to automatic rule creation can produce explainable models that are almost as effective as manually crafted rules, allowing for scalable and explainable MPT scoring.",
|
| 12 |
+
"8 Conclusion and Future Work": "We have presented a novel, explainable approach to scoring MPT items via handcrafted rules that\nperforms well, and have shown that such rules can be automatically discovered through simulated annealing.\nWhile our model is able to provide explanations of its scores, generating explanations is only the first step in the full explainability process. Explanations are of limited utility without the ability to convey model explanations to stakeholders such as test takers or test administrators. Determining how best to use the explanations produced by our models is an important area of future work.\nOur approach is heavily reliant on the assumption that the final score of a response is the sum of multiple binary components. For MPT items that are not structured in this way, it is unlikely that our approach would work well on its own, although it could possibly be combined with other approaches. We are actively investigating how best to extend our approach to more rubric types.\nThe success of our annealing process ultimately relies on our ability to generate useful candidate rules. While our current process works well, we have seen that for some items, we need to be able to construct more sophisticated rules. Determining how to improve the generation of our candidate pool is another promising area for future work.\nThe dataset we used in this work is mainly composed of algebra problems. While we do have some geometry and arithmetic items, how well our approach can generalize to other MPT item types is an area of future work. In particular, our items do not cover calculus, trigonometry, or other areas that require students to extensively reason about functions.",
|
| 13 |
+
"Acknowledgements": "We would like to thank Alicia Bouy for her assistance in constructing the manually-crafted rules, and Lee Becker and Joshua Southerland for their feedback during the writing process."
|
| 14 |
+
}
|
ACL_23_no_limitation/ACL23_1219.json
ADDED
|
@@ -0,0 +1,23 @@
| 1 |
+
{
|
| 2 |
+
"File Number": "1219",
|
| 3 |
+
"Title": "Generating Dialog Responses with Specified Grammatical Items for Second Language Learning",
|
| 4 |
+
"abstractText": "This paper proposes a new second language learning task of generating a response including specified grammatical items. We consider two approaches: 1) fine-tuning a pre-trained language model (DialoGPT) by reinforcement learning and 2) providing a few-shot prompt to a large language model (GPT-3). For reinforcement learning, we examine combinations of three reward functions that consider grammatical items, diversity, and fluency. Our experiments confirm that both approaches can generate responses including the specified grammatical items and that it is crucial to consider fluency rather than diversity as the reward function.",
|
| 5 |
+
"1 Introduction": "The use of dialog systems for language learning has attracted attention. Many studies have introduced dialog systems as training partners for language learners and verified their effectiveness. According to previous studies (Kim, 2016; Tegos et al., 2014; Ruan et al., 2019), the advantages of using dialog systems in language education include: they can be used regardless of time, i.e., are more available for learners, they can be easily integrated into chatbased applications that many people are familiar with, i.e., are more user-friendly, and they can be adapted to each learner using various information from chit-chat, i.e., are more supportive.\nNeedless to say, experiencing a substantial amount of production is critical in language acquisition. Nagata et al. (2020) showed that even a very primitive rule-based chatbot like ELIZA has the potential to increase learner’s sentence production. Their experiments also revealed that learners adopted words that appeared in the chatbot’s responses, suggesting that the expressions used by the dialog system had a positive impact on learners and that the system was effective in helping them learn unfamiliar words.\nConsidering these results, we propose a task of generating a response including the specified grammatical items. Here, grammatical items refer to such as to the present perfect, subjunctive, and relative clauses. Usually, they are gradually covered in a language learning course, typically through a school curriculum. Such responses can naturally expose learners to a variety of uses of a specific item and can give them experience of how to use the item in a variety of topics and situations, based on their own past experiences evoked in the conversation. In turn, we expect the learners to use the exposed constructions in their own production more as the exposed uses are linked tightly to their memories by encountering usage examples through dialog based on their own experiences.\nThe proposed task is formalized as follows. Given C = [c1, c2, ..., cn], a dialog context that is a sequence of n utterances between two interlocutors (the system and the learner), and I , a set of grammatical items specified to be included, the task is to generate r, a natural response that follows cn, on the condition that r includes an expression corresponding to each item i ∈ I . To the best of our knowledge, this is the first work to tackle this generation task for language learning.\nTo generate text that satisfies particular conditions, Lin et al. (2021) propose using auxiliary modules to guide pre-trained language models. Keskar et al. (2019) propose training language models with control code. Since these methods are based on supervised learning, they require annotated datasets. However, there is a lack of large labeled dialog datasets for grammatical items.\nIn this paper, we examine two approaches for generating responses containing the specified grammatical items without a large labeled dataset: 1) RL-based generation: fine-tuning a pre-trained language model using reinforcement learning (RL), and 2) Prompt-based generation: providing a large language model with prompt text with a task in-\n184\nstruction and a few examples. The experiments confirm both approaches are promising.",
|
| 6 |
+
"2.1 Dialog systems and adaptation in language learning": "According to Xiao et al. (2023), there are three main uses for dialog systems in language learning.\nOne way is language learning through general communication. As one of the educational applications of dialog systems, there is a growing body of research on introducing dialog systems in second language learning through free interaction with dialog systems. Alexa (Moussalli and Cardoso, 2020; Dizon, 2017; Dizon and Tang, 2020) and Google Assistant (Tai, 2022) were used. In most studies, learners favorably accepted the system as a dialog partner.\nAnother way is task-based language learning. The introduction of a dialog system into a task allows for more content-focused learning. Tasks can be varied, such as asking for the time of day at a particular location or ordering at a coffee shop (Wu et al., 2020; Timpe-Laughlin et al., 2020). Learners are allowed to interact and receive feedback throughout the task, which contributes to second language acquisition.\nThe third way is language learning based on structured pre-programmed dialog. To create a dialog on a specific topic, researchers design their system, rather than adapting a general dialog system. Many studies have been conducted with children. Some had three to six-year-olds learn to read through questions (Xu et al., 2021a,b), and had nine-year-olds answer their questions (Lee and Jeon, 2022). Another related survey is (Huang et al., 2022).\nA further related area is user adaptation to difficulty in language tutoring. Pandarova et al. (2019) worked on predicting the difficulty of fill-in-theblank questions in which the words to be entered were specified.\nOur study proposes a new task not addressed in these studies and provides new insights into methods for this task.",
|
| 7 |
+
"2.2 Reinforcement Learning": "Reinforcement learning is a machine learning framework that acquires an optimal action policy based on non-instantaneous evaluations given by a reward function for a set of actions. By considering\nthe output tokens as actions, language generation can be treated as a reinforcement learning problem. Given an appropriate reward function, policy gradient methods such as REINFORCE (Williams, 1992) can fine-tune a pre-trained generative neural language model without a training dataset. In this paper, we adopt self-critical sequence training (SCST) (Rennie et al., 2017). SCST is proposed for image caption generation and is known for its simplicity and effectiveness.\nThe design of the reward function varies from task to task, but unlike the loss function in supervised learning, it allows the use of nondifferentiable functions including the evaluation metrics used in text generation tasks such as BLEU and ROUGE (Paulus et al., 2018; Wu et al., 2018; Narasimhan et al., 2016).\nLanguage generation based on deep learning generally uses cross-entropy as the loss function, which means that the objective function and the evaluation measure will be different. By incorporating the evaluation measure in the reward function, the gap can be alleviated.",
|
| 8 |
+
"2.3 Large-scale Pre-trained Language Model": "In recent years, many researchers have studied methods for controlling the output of generative language models by providing prompts containing task instructions and examples as input (Li et al., 2022; Reynolds and McDonell, 2021; Dou et al., 2022).\nIn particular, GPT-3 (Brown et al., 2020) has achieved significant performance comparable to or better than other fine-tuned models in CoQA and TriviaQA in few-shot settings.",
|
| 9 |
+
"3 Method": "For the sake of simplicity, in this paper, we assume context C contains only the immediately previous utterance (n = 1). We also limit the number of specified items to 1 (|I| = 1).",
|
| 10 |
+
"3.1 RL-based generation": "For simplicity again, we train a different model for each grammatical item. In applications, we assume the models are to be switched given a learner’s need. For example, when a learning partner chatbot finds that the learner tends to make errors with a particular item, the chatbot can increase the frequency of opting the generation model for the item than the vanilla generation model.\nWe consider three sub-functions for the reward, Rg for inclusion of grammatical items, which is the main objective, Rd for greater diversity, and Rf for higher fluency. The latter two are to mitigate learning bias towards including grammatical items. When only Rg is used, the model easily starts to exploit a fixed utterance against any input context. We will examine several combinations of these functions in our experiment in the next section.\nReward on grammatical items Let Fi(s) ∈ [0, 1] be a soft classifier that evaluates whether a given sentence s contains a specified grammatical item i. When we train a response generation model for item i, we set Rg(s) = Fi(s).\nFor Fi(s), we use BERT (Devlin et al., 2019). We obtain hidden representation h[CLS] of the [CLS] token from the final layer of a pre-trained BERT model. Fi(s) is formulated as follows: Fi(s) = σ(w\n⊤h[CLS] + b), where σ() is the sigmoid function and w, b are the learnable parameters. In training, the BERT model is not frozen and fine-tuned together with the parameters.\nAlthough Fi(s) is trained in a supervised manner, the necessary data for this training is much more affordable than that for training a generation model. We will revisit this point in the next section.\nRewards on diversity and fluency We use Distinct-N (Li et al., 2016), an n-gram based diversity metric, as Rd. As Rf , we use the likelihood of the output r conditioned on the input, i.e., the dialog context C. The likelihood is computed by a pre-trained dialog model.",
|
| 11 |
+
"3.2 Prompt-based generation": "In the same way with the RL-based approach, we prepare a prompt template for each item i. The templates are to be switched by applications.\nFigure 1 shows a prompt template used in this study, which consists of an instruction indicating what the task is, some examples (called shots) and a query at the end. < c > in Figure 1 is replaced with an input context utterance. Given an input prompt, a left-to-right generative language model outputs a sentence r that follows the prompt.",
|
| 12 |
+
"4 Experiment": "We verified the effectiveness of both RL-based and prompt-based approaches using three items in the SCoRE corpus (Chujo et al., 2015): the present perfect, relational clause, and subjunctive.",
|
| 13 |
+
"4.1 Datasets": "In accordance with the assumption of n = 1, we extracted only the first utterance pair of each dialog from the Daily Dialog corpus1 (Li et al., 2017) to compose our dataset. The first utterance of each pair was used as a context C, and the second was used as a reference (used for analysis purposes). We split the pairs into three subsets: 10,618 for training, 500 for development, and 1,000 for test.\nWe used the SCoRE corpus to build Fi(·). We built a classifier for each of the three items above. Appendix A gives the details of the SCoRE dataset, classifier training, and performance. Note that the required data for training here need not be dialog data and can be much smaller than that for supervised training of a dialog language model.",
|
| 14 |
+
"4.2 Evaluation metrics": "We used three metrics for our evaluation. First, we defined the function δi(s), which returns 1 or 0 for sentence s by using Fi(s) with a threshold of 0.5.\nAs the first metric, we introduced G-ratio to measure the capability of the model to generate responses that include the specified grammatical item. G-Ratio indicates the percentage of outputs containing the item and can be automatically measured by using δi(s).\nConsidering our aim of exposing learners to various uses of grammatical items in dialog, the model should be able to return diverse responses. We adopted Distinct-N (N=2) as the second metric.\nFinally, we defined GOAL (Grammar Oriented Average Likelihood), which measures the fluency of only the generated sentences that contain the specified item using the output likelihood based on\n1https://huggingface.co/datasets/daily_dialog\na dialog language model Pm as follows:\nHTi = {s ∈ GTi |δ(s) = 1},\nGOAL(HTi ;Pm) =\n∑ s∈HTi Pm(s|c(s))\n|HTi | ,\nwhere GTi the set of the generated responses given test set T in terms of item i, and HTi is the set of responses in GTi that Fi(·) evaluated as containing the grammatical item. c(s) denotes the input context for output s.",
|
| 15 |
+
"4.3 Experimental setups": "For the RL-based approach, we used DialoGPT (Zhang et al., 2020), a GPT-2 based dialog language model trained on a Reddit corpus, as the initial model in SCST, the main body of Rf , and Pm. For decoding, we used top-k sampling (Fan et al., 2018) (k = 50). The model was evaluated every 10 batches using the development data, and training was stopped with a patience of 3. As training progressed , the number of sentences containing the target grammatical item increased, but many similar sentences were generated, resulting in a loss of diversity. Therefore, as we observed a tradeoff between G-Ratio and diversity, we adopted the product of the two as an indicator of early stopping.\nFor the prompt-based approach, we used GPT-3 davinci. We set the sampling temperature to 1 for GPT-3. Other settings are detailed in Appendix B.",
|
| 16 |
+
"4.4 Evaluation": "For the RL-based approach, ten sentences were generated using beam search with a beam width of 10 for each test case. Out of the ten, the sentence with the highest likelihood and the specified item is chosen as the output. If no sentence included the item, the first one was chosen. We compare the following five combinations of the reward functions: Rg, Rg +Rd, Rg ×Rd, Rg +Rf , and Rg ×Rf .\nFor the prompt-based approach, ten sentences were generated thorough the web API using a prompt for each test case, from which one was picked as above. We compared the following five variations, which combines 0, 1, and 3 task examples (called shots) and with/without task instructions: instr., 1-shot, 3-shots, instr.+1-shot, and instr.+3-shots. For example, “instr.” means 0-shot with instructions. “1-shot“ means 1-shot without instructions. “instr.+3-shot“ means 3-shot with instructions.\nAll metrics were applied to 1,000 outputs.",
|
| 17 |
+
"5 Results": "Table 1 shows the results for each grammatical item. Example outputs are shown in Appendix C. Rg × Rf showed the highest GOAL for the present perfect and the subjunctive, while Rg +Rf showed the highest GOAL for the relative clause.\nThe RL-based approach successfully improved G-Ratio in all cases. Although the Dist.-2 values got lower than before training (Baseline), this was expected in advance as the result of introducing a grammatical constraint in generation.\nIn the RL-based approach, a higher Dist.-2 tended to be obtained with the fluency reward function Rf than with the diversity reward function Rd except for the subjunctive, suggesting that the effect of Rd was limited. The reasons for this may be as follows. Even if sentences with a high Dist.-2 are more likely to be generated, it does not necessarily reflect the diversity of the model overall, and if the input sentences in the batch are similar, Dist.-2 in the output will naturally decrease, but the current reward function does not fully take this into account. In addition, taking fluency into account suppresses the abuse of fixed patterns (fixed patterns increase Rg but decrease diversity). For all grammatical items tested, GOAL improved when the reward function for fluency, Rf , was applied.\nIn the prompt-based approach, G-Ratio tended to be higher for inputs with both task instruction and shots. However, 3-shots sometimes gave worse results than 1-shot. This suggests that task instructions should be included in the input, but that increasing the number of shots may add noise or unintended bias to the language model, making it more difficult to obtain the desired output.\nComparing the two approaches, the promptbased one demonstrated higher diversity than the RL-based one, and a comparable G-Ratio. Though the GOAL scores for the RL-based approach were higher than those for the prompt-based approach, we must note that GOAL is favorable to the RLbased approach that, in this paper, uses the same DialoGPT model as GOAL. As far as we manually compared the concrete responses from GPT-3 and DialoGPT for a small number of randomly picked cases, we did not find significant differences.",
|
| 18 |
+
"6 Discussion": "Even though we want to expose more instances of a particular item to a learner, it is not natural to include the item in every dialog response. Therefore,\nwe do not need to pursue 100% for G-Ratio. We presented GOAL as a primary metric candidate for the proposed task. However, as noted in the previous section, it is not reliable when one wants to compare two results based on different language models. Taking the similarity to the reference sentences into account is one direction to mitigate this issue. Another strategy is combining GOAL with reference-free unsupervised dialog evaluation methods using follow-ups such as FULL (De Bruyn et al., 2022). Unlike GOAL, these evaluation methods do not measure the likelihood of the target utterances directly; they, however, still rely on a particular language model. A simple way to make this issue easier would be an ensemble approach using multiple language models or majority voting.\nConsidering the high diversity and the nature of training-free, so far the prompt-based approach seems to be advantageous, assuming the availability of a huge pre-trained model such as GPT-3. However, the RL-based approach may have merits in terms of its fine-grained, delicate, and implicit control than the prompt-based approach. (Besides, DialoGPT and GPT-2 did not work in the promptbased approach. See Appendix C.)",
|
| 19 |
+
"7 Conclusion": "We have proposed a new task of generating a response including the specified grammatical items for language learners. We examined two approaches and found that both are feasible.\nFuture directions include the expansion of the grammatical items. To push this task to practical use, locating appropriate places in conversations to include the items is also important.\nThis paper aimed to increase learners’ exposure to specific grammatical items, but another inter-\nesting direction is generating preceding utterances that encourage or facilitate learners to use specific grammatical items in their next utterances.",
|
| 20 |
+
"A Classifier for grammatical items": "We used a classifier that determines whether a grammatical item is included or not as a reward function for RL. The structure of the classifier is as described in §3.1, where the input sentences to be judged are estimated to determine whether they contain grammatical items or not by a linear layer and a sigmoid function based on the embedding of BERT’s [CLS] tokens.\nThe classifier requires a dataset for training. However, the required data need not be interactive, and can be smaller than for supervised learning of a language model. When data is not available, regular expression-based classification can be used as a substitute.\nIn this section, we describe the dataset used to train the classifier and the settings. The performance of the classifier is compared with rule-based classification using regular expressions. The regular expressions were created on the basis of the CEFR-J regular expression list (Ishii and Tono, 2018).\nA.1 SCoRE Corpus\nThe SCoRE corpus, in which grammatical items are manually assigned to sentences, was used to train the classifier. Therefore, the grammatical items were those included in the SCoRE corpus. The SCoRE corpus contains approximately 20 grammatical items, and Table 2 shows the number of data corresponding to the grammatical items used in this study. For example, in the subjunctive, I wish, if I were, if + verb past tense, if + had + verb past participle, etc. are included in the data.\nIn addition to positive examples with the target grammar item, negative examples without the item are required to train the classifier. Therefore, for the negative examples, we use sentences in the SCoRE corpus that are assigned grammatical items that are not the target ones.However, if all sentences that do not have the target grammar item are used as negative examples, there is a possibility that unsuitable data will be included, and the proportion of unsuitable data will be greatly biased. We constructed a dataset for training by extracting data from the negative examples in the dataset in such a way that there is no bias in the number of positive examples.\nFrom the data obtained, 80% was split into training data and 20% into test data. Finally, for the\npresent perfect, the training data and test data were 1,222 and 306, respectively, and for the relational clause and hypothetical, the training data and test data were 1,977 and 495, respectively.\nA.2 Hyperparameter for training the classifier\nWe used BERT (bert-large-uncased) to set the initial values for the classification model. Parameters were optimized by AdamW during training. The learning rate was set to 2e−5 and the coefficient of L2 regularization to 1e−2. The batch size was set to 10 and the number of epochs was set to 10. In this experiment, the classifier is the model that performed best on the test data.\nA.3 Classification Performance Table 3 shows the classification performance of the classifiers for each grammar item. The evaluation was conducted using the percentage of correct answers between the correct and predicted labels as the evaluation measure. In the experiment, the BERT-based classifier was used as the reward function for the other items because BERT had better classification performance than the regular expression.",
|
| 21 |
+
"B Hyperparameter in the experiment": "In top-k sampling in SCST, we set k to 50. For Distinct-N in Rd, N = 2. The parameters were optimized by AdamW during training, with a learning rate of 2e−5 and a coefficient of L2 regularization of 1e−2. The minimum output length was set to 10 in order to properly compute Distinct-N. The batch size was set to 10, with a maximum of 1100 iterations. For GPT-3, we set engine to davinci, max_tokens to 20, temperature to 1, n to 10, and stop to \"\\n\".",
|
| 22 |
+
"C Examples": "In this section, we provide generated sentences of compared methods. First, we discuss additional smaller models we experimented with in addition to the GPT-3. Next, we show samples of outputs for two inputs for several RL-based and prompt-based methods.\nC.1 Other Models in the Prompt-based Approach\nWe also tested the performance of GPT-2 and DialoGPT in the same settings as GPT-3. Table 4 shows the results. Comparing the performance of the three models in terms of G-Ratio, GPT-3, which has the largest model size, shows the best performance, while GPT-2 tends to perform better than DialoGPT. In GOAL, GPT-3 showed consistently high, but DialoGPT also showed high values in some settings. Note, however, that DialoGPT was used in the GOAL calculations and is a favorable indicator for this model. Also, GPT-2 and DialoGPT did not seem to produce higher quality responses than GPT-3, as far as we could visually confirm. (See Appendix C.2) Therefore, GPT-3 is superior to the other models in terms of both the G-Ratio and GOAL value, regardless of the grammatical items, and in terms of the quality of the response sentences.\nC.2 Samples Table 5, 6 show examples of output in the present perfect tense with different input contexts. Compared with the Daily Dialog corpus and DialoGPT, after learning, the response sentences are in the present perfect tense, and the responses of the method that performed well in our experiments are not too broken to be used as a dialog response. However, some of the methods showed unstable output, such as repetition of similar sentences or very few words."
|
| 23 |
+
}
|
ACL_23_no_limitation/ACL23_1220.json
ADDED
|
@@ -0,0 +1,23 @@
| 1 |
+
{
|
| 2 |
+
"File Number": "1220",
|
| 3 |
+
"Title": "UKP-SQuARE: An Interactive Tool for Teaching Question Answering",
|
| 4 |
+
"abstractText": "The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course. Additionally, the breadth of QA derived from this exponential growth makes it an ideal scenario for teaching related NLP topics such as information retrieval, explainability, and adversarial attacks among others. In this paper, we introduce UKP-SQuARE as a platform for QA education. This platform provides an interactive environment where students can run, compare, and analyze various QA models from different perspectives, such as general behavior, explainability, and robustness. Therefore, students can get a first-hand experience in different QA techniques during the class. Thanks to this, we propose a learner-centered approach for QA education in which students proactively learn theoretical concepts and acquire problem-solving skills through interactive exploration, experimentation, and practical assignments, rather than solely relying on traditional lectures. To evaluate the effectiveness of UKP-SQuARE in teaching scenarios, we adopted it in a postgraduate NLP course and surveyed the students after the course. Their positive feedback shows the platform’s effectiveness in their course and invites a wider adoption.",
|
| 5 |
+
"1 Introduction": "Question Answering (QA) is one of the overarching research topics in Natural Language Processing (NLP). QA pipelines have been developed to address different types of questions, knowledge sources, and answer formats, including extractive, abstractive, knowledge base, multiple-choice, generative, and open-domain QA. Such a massive number of QA systems and relevant NLP techniques are making QA lectures more important in NLP courses. However, despite QA being an application-oriented topic (e.g., chatbots, virtual assistants, etc.), classes are usually theoretically\ndriven. Thus, in this paper, we propose the use of the UKP-SQuARE platform as a tool for QA education. This platform integrates most QA formats, popular models, datasets, and analysis tools, such as explainability, adversarial attacks, and graph visualizations.\nCompared with conventional teacher-led classes, we propose a learner-centered class following the flipped classroom (Bishop and Verleger, 2013) with UKP-SQuARE as the driving tool of the lecture. This tool provides an interface for users to interact with different QA models and analysis tools. Therefore, students can actively learn about QA systems and get hands-on experience by interacting with models on the platform. Concretely, students can flexibly compare multiple architectures that model different QA formats, analyze their outputs with explainability tools, and even analyze their robustness against adversarial attacks. Prior studies have shown that flipped classroom lectures improve the learning process of students in programming courses (Alhazbi, 2016). Thus, we believe that teaching and learning QA through a live demo with this platform can also make NLP lectures more engaging, drawing students’ attention, and interest in the topics.\nTo investigate the effectiveness of UKPSQuARE in QA education, we adopted it for the first time in a postgraduate NLP course1 and conducted a survey afterward. The positive feedback from the students encourages us to continue adopting this platform and education method in more NLP courses. The contributions of this paper are: i) a novel interactive learner-centered methodology to teach QA and relevant NLP topics, ii) extending the UKP-SQuARE platform for teaching QA, and iii) the design of a syllabus for interactive QA lectures.\n1Master’s level course\n195",
|
| 6 |
+
"2 UKP-SQuARE": "UKP-SQuARE (Baumgärtner et al., 2022; Sachdeva et al., 2022; Puerto et al., 2023) is an extendable and interactive QA platform that integrates numerous popular QA models such as deeepset’s roberta-base-squad22, SpanBERT (Joshi et al., 2020) for HotpotQA, and QAGNN (Yasunaga et al., 2021). It provides an ecosystem for QA research, including comparing different models, explaining model outputs, adversarial attacks, graph visualizations, behavioral tests, and multi-agent models. In addition, this platform provides a user-friendly interface3 that enables users to interact. Users can run available models, deploy new ones, compare their behaviors, and explain outputs.",
|
| 7 |
+
"3 Learning Question Answering with UKP-SQuARE": "In this section, we present the syllabus of a lecture focused on QA and relevant NLP topics that use the platform UKP-SQuARE following the flipped classroom methodology (Bishop and Verleger, 2013). The flipped classroom is an effective learnercentered educational methodology in which students study pre-recorded lectures and materials in advance to engage in more interactive and collaborative learning activities in class. UKP-SQuARE can be the driving tool for the flipped classroom in QA education. With our platform, lecturers can introduce the topics by interacting with the students and then proceed to an in-depth explanation of the technical details behind the methods of each topic. We propose dividing the lecture into three topics in the QA field: basic QA concepts, trustworthy QA, and multi-agent QA systems. With these topics, students can learn about QA and related NLP topics such as information extraction, explainability, adversarial attacks, and multi-agent systems.",
|
| 8 |
+
"3.1 Learning Basic QA Components": "QA systems include two main components, i.e., Readers and Retrievers. Readers are QA models responsible for obtaining answers from the context retrieved by retrievers. In UKP-SQuARE, students can easily learn various readers (QA models) within different QA formats and information retrieval techniques via interacting with the interface.\n2https://huggingface.co/deepset/ roberta-base-squad2\n3https://square.ukp-lab.de/",
|
| 9 |
+
"3.1.1 Contrasting Different QA Formats": "With UKP-SQuARE, students can get first-hand experience by interacting with multiple models on our platform. The home readings would include descriptions of the main QA datasets and their baselines. In class, the lecturer can show the different QA formats with real demonstrations of the models and explain on the fly the architectural differences needed to model each QA format. An example is shown in Figure 1 where a span-extraction QA model, i.e., Span-BERT, and a multiple-choice QA model, i.e., CommonsenseQA model are presented to show the difference between these two QA formats. Such interactions can make theoretical explanations of the architectures easier to digest and, therefore, the class more engaging.",
|
| 10 |
+
"3.1.2 Learning Information Retrieval": "To learn Information Retrieval (IR) methods, the user interface of UKP-SQuARE offers a compelling approach to help students differentiate between different IR methods, e.g., lexical retrieval and semantic retrieval, and understand how they affect the final performance of QA models. The home readings would include book chapters or slides describing the main IR methods such as TF-IDF (Sparck Jones, 1988), BM25 (Robertson et al., 1995), Sentence-BERT (Reimers and Gurevych, 2019), and Dense Passage Retrieval (DPR; Karpukhin et al., 2020). Like the above section, the lecturer can guide students to find the difference between lexical retrieval (e.g., BM25) and semantic retrieval (e.g., DPR) via playing with UKP-SQuARE by themselves. As shown in Figure 2, for the question When was Barack Obama’s inauguration?, the BM25 retriever returns a passage covering all keywords but irrelevant to the question, while the DPR retriever returns the correct document, which contains the answer to the question. By providing this example in class, students can easily understand that DPR retrieves semantically similar passages while BM25 only retrieves passages that contain the query tokens and, thus, may retrieve unrelated passages. This could be further explored by comparing two open-domain QA models implementing these retrieval methods and the same reader model to demonstrate the error propagation due to irrelevant passages. This active learning method can prevent the issue of students losing attention that commonly occurs in traditional lectures (Felder and Brent, 2003).",
|
| 11 |
+
"3.2 Learning Trustworthy QA Systems": "In addition to learning basic QA components, it is important to understand how to identify and evaluate trustworthy QA systems. This involves several related NLP topics, such as explainability, transparency, and robustness. UKP-SQuARE provides such analysis tools to facilitate students’ learning process of trustworthy QA systems.",
|
| 12 |
+
"3.2.1 Explainability Methods": "The exponential adoption of AI is pushing regulators to adopt policies to regulate its use. One of the key points they aim to address is the explainabil-\nity of these methods to make AI safer4. Thus, it is of utmost importance to include explainability methods on AI courses in Universities. In terms of the explainability of QA models, UKP-SQuARE includes BertViz (Vig, 2019) and a suite of saliency map methods to facilitate the understanding of the model’s decision-making process. Saliency maps employ attribution-weighting techniques such as gradient-based (Simonyan et al., 2014; Sundararajan et al., 2017) and attention-based (Jain et al., 2020; Serrano and Smith, 2019) methods to determine the relative importance of each token for the model prediction. The descriptions of these methods would form part of the home readings and to make the classes more active, the class would be driven by real examples of saliency maps using our platform and their interpretation. In this way, students can learn how to explain the output of a QA model based on saliency maps.\nAn example of a saliency map is shown in Figure 3. The color level of the highlighted text reflects its importance for the answer. As we can see, of what celestial body? is the most important part of\n4https://digital-strategy. ec.europa.eu/en/policies/ european-approach-artificial-intelligence\nthe question, while sun gets the most attention in the context, which is the final answer. This means the model successfully understands the main point of the question and can link them to the context. Making this type of interpretation can help students identify potential problems or biases in the models.",
|
| 13 |
+
"3.2.2 Behavioral Tests in QA models": "The next important component in trustworthy QA is behavioral tests of models. Machine learning models do not throw errors as regular software programs. Instead, an error in machine learning is usually an unwanted behavior, such as a misclassification that may pass inadvertently to a person (Ribeiro et al., 2020). This makes testing machine learning models challenging. To simplify the behavioral analysis of machine learning models, Ribeiro et al. (2020) proposes CheckList, a list of inputs and expected outputs that aims to analyze general linguistic capabilities and NLP models mimicking the unit tests in software engineering. The integration of CheckList into UKP-SQuARE offers a simple method to analyze the performance of QA models beyond traditional benchmarks, such as MRQA tasks (Fisch et al., 2019).\nAs illustrated in Figure 4, we test the SQuAD 2.0 RoBERTa Adapter and SQuAD 2.0 BERT Adapter using the CheckList in which multiple NLP capabilities are tested like coreference, negation, and robustness. As we can see SQuAD 2.0 BERT Adapter performs worse than RoBERTa Adapter in the above dimensions. Such an example can be used by the lecturer in class to introduce the idea of behavioral tests on the fly. In addition, the behavioral tests of UKP-SQuARE can be used to foster the students’ analytical skills. A potential assignment could be to train a QA model and deploy it on our platform to analyze it with the provided ecosystem of QA tools. In particular, thanks to the behavioral tests in UKP-SQuARE, students can provide a deeper analysis of their model based on the quantitative results of their test set and a quali-\ntative analysis based on the behavioral test results.",
|
| 14 |
+
"3.2.3 Adversarial Attacks": "Policymakers are also designing a regulatory framework that guarantees users that their AI models are resilient to adversarial attacks5. Therefore, AI curriculums should also include adversarial attacks to prepare students for these new regulations.\nUKP-SQuARE provides tools to conduct adversarial attacks, such as HotFlip (Ebrahimi et al., 2018), input reduction (Feng et al., 2018), and subspan (Jain et al., 2020). Thus, the home readings should include a theoretical introduction to these methods. Then, the lecture would use the platform to exploit the interactive nature of adversarial attacks. In particular, the need to analyze examples to understand different types of attacks makes this part of the topic especially practical. Therefore, the lecturer can introduce the topic through UKPSQuARE and delve deeper into the technical details afterward.\nAn exemplary case is that students can attack real models with examples by tuning different parameters, such as the number of flips in HotFlip, to see how the output changes when they subtly change the input data. In Figure 5, only flipping . (full stop) to wore can directly change the answer. In class, a small experiment can be set up by lecturers in which students need to manually manipulate the input to see if it can trick the model into making\n5See footnote 3\nincorrect answers and compare it with adversarial attack tools to deepen their understanding of those adversarial attacks and the importance of building up trustworthy QA systems.",
|
| 15 |
+
"3.2.4 Graph-based QA Models": "Knowledge Graph Question Answering (KGQA) systems can have strong explanatory power thanks to the reasoning paths that can be extracted from the graph. Such transparency can enhance the interpretability and trustworthiness of the system. UKPSQuARE currently offers QA-GNN (Yasunaga et al., 2021), a KGQA model that makes use of ConceptNet (Speer et al., 2017), and provides a visualization interface to explore the subgraph used by the model.\nAlthough a reasoning path in a graph may provide a clear explanation of a model’s prediction, we believe that interpreting graph-based models is not straightforward because, usually, that path contains many irrelevant nodes and edges that may obscure the actual reasoning of the model. Thus, we propose to teach KGQA models with real examples of graphs. In this way, the lecturer, or even the students themselves, have to show the process of cleaning the graph to obtain and interpret the reasoning path. This process would be much more valuable for the future endeavor of the students than using a set of slides with examples of preprocessed clean graphs because they will be able to reproduce what they learn in real-use cases in companies.",
|
| 16 |
+
"3.3 Learning Multi-Agent Systems": "Lastly, the current progress in QA is pushing toward creating robust models across multiple domains. To do this, there are two types of approaches: multi-dataset models and multi-agent models. While the former aims to train a single\nWhere would you find a basement that can be accessed with an elevator?\narchitecture on multiple datasets, the latter does the opposite. It trains multiple models (agents) on single datasets and combines the agents. UKPSQuARE is compatible with both approaches; therefore, it is an ideal platform to teach them.\nThanks to UKP-SQuARE, we can also follow a flipped classroom methodology to teach multiagent systems. After reading class materials explaining the models of this topic at home, the class time can be used as an explanation of the topic with a live demonstration of these models. In particular, we can easily show that multi-agent systems such as MetaQA (Puerto et al., 2021) select different agents depending on the input question. Figure 7 shows that the first answer selected by MetaQA, which is the correct one, is from an out-of-domain agent, while the second answer, which is not correct, is from the in-domain agent. This example illustrates the collaboration between agents achieved by multi-agent systems and can be an ideal way of starting the lecture on this topic before explaining the architectural details of MetaQA. Similarly, the platform can be used to introduce multi-dataset systems such as UnifiedQA (Khashabi et al., 2020), before delving into in-detail explanations of the model. In particular, the lecturer can explain the multiple accepted QA formats by UnifiedQA through real examples, and then, continue the explanation with the training details of the model with the support of slides.",
|
| 17 |
+
"3.4 Assignments with UKP-SQuARE": "In addition to the above teaching scenarios in class, we also propose a homework assignment based on UKP-SQuARE6 that leverages the insights and knowledge they acquire from the class. The students need to train their own QA model using the popular Hugging Face’s Transformer library (Wolf et al., 2020), deploy the model on our platform, and then write an in-detail report where they analyze their model from multiple perspectives. This report must include a quantitative analysis of the performance of their model on the test set and also a qualitative analysis that includes an explanation of the outputs of the model to a series of input questions, adversarial attacks that shows errors of their\n6https://colab.research.google.com/ drive/17qw1dLWmU5EDxf9TLR29zIG9-EGKmNxP? usp=share_link\nmodel, and an analysis of the possible behavioral errors obtain from CheckList. Furthermore, the students should also compare their model with other available models and identify the type of questions where their model fails. This would help them understand that models overfit the domain of their training data and, therefore, may fail in other domains. This assignment requires students to truly understand each component they learned during the class, which will help them consolidate their knowledge and develop a deeper understanding of the inner workings of different QA techniques. Additionally, the assignment can serve as a useful assessment tool, enabling teachers to gauge students’ understanding of the material and provide targeted feedback and support as needed.",
|
| 18 |
+
"3.5 User Study": "To quantitatively evaluate the effectiveness of UKPSQuARE in teaching the above QA techniques, we designed a questionnaire to collect feedback from students. The questionnaire was administered to a group of students who had completed a graduate NLP course that used our platform in both class time and for the assignment. All participants are 20-to-30 years-old graduate students in computer science. The questionnaire mainly focuses on two aspects: whether UKP-SQuARE deepens their understanding of techniques in QA systems and whether it makes it easier to get hands-on experience in UKP-SQuARE. The majority of questions require students to rate on a scale of 1 to 5. The complete questionnaire can be found in Appendix A.\nFigure 8 shows the Likert scale chart with the responses of seven students who participated in the survey. As we can see, students have very positive attitudes towards all aspects of UKP-SQuARE for their QA learning. All participants think that the platform makes the class more engaging and interesting. In particular, most of them (91%) think UKP-SQuARE helps them better distinguish different QA formats. For information retrieval, the majority of the responders do not think that the platform can help them understand better the difference between lexical retrieval and semantic retrieval. The main reason behind this is that the difference between lexical and semantic retrievers is challenging to distinguish only via visualization unless students actively compare the documents by themselves. Besides, it also requires students\nto have a good understanding of semantic similarity and lexical similarity. Therefore, we plan to improve it by showing the difference between vector similarity and keyword matching between questions and retrieved documents. Regarding explainability and adversarial attack tools, around two-thirds of students believe that the platform facilitates their learning process of these topics. When it comes to hands-on experience, the vast majority of students agree that UKP-SQuARE is easy to use. Our platform provides an infrastructure that dramatically lowers the bar for students to get hands-on experience. All students think that without UKP-SQuARE, they would spend more time finding suitable open-source software to compare different models, analyze the output, and conduct adversarial attacks. Moreover, the respondents estimated that without UKP-SQuARE, the average time spent on homework would increase from 2- 5 hours to more than 8 hours. One student also commented that doing experiments with the platform was straightforward and allowed him to try different ideas without any overhead. Therefore, although the survey sample is small and limits the conclusions, this overall positive feedback invites us to continue investigating how to conduct our QA and NLP classes more interactively with UKPSQuARE and suggests that our students would benefit from extending this interactive class to other NLP topics such as generative pre-trained large language models, prompting with reinforcement\nlearning from human feedback, word embeddings, parsing trees, and machine translation among others.",
"4 Related Work": "The most relevant tool is the AllenNLP demo7, which provides a user interface to the main components of the AllenNLP library (Gardner et al., 2018). This website includes an interface where users can interact with five extractive QA models. However, their goal is to have a showcase of their library rather than an extensive platform for teaching QA. Thus, their functionalities are limited. Most of their deployed models are outdated, only cover extractive QA settings, and do not provide information retrieval methods. Moreover, their explainability and adversarial attacks are not compatible with their transformer-based model. Furthermore, they do not provide graph-based models, which can be useful to explain graph neural networks and explainability methods based on graphs. Additionally, it cannot be used for our homework assignment because users cannot deploy and analyze their own models with explainability and adversarial attack tools as in our platform. However, they do provide demos for other NLP topics, such as Open Information Extraction and named entity recognition, and parsing trees, among others.",
"5 Conclusion": "In this paper, we present a novel method to teach question-answering to postgraduate NLP students following the learner-centered method of flipped classrooms. We propose to provide reading materials to the students before the class and use the UKPSQuARE platform as a driving tool to conduct the class. This platform integrates the most popular QA pipelines and an ecosystem of tools to analyze the available models. These tools include explainability methods, behavioral tests, adversarial attacks, and graph visualizations. We provide a series of use cases for teaching based on the provided models and methods by UKP-SQuARE, showing that classes can become much more interactive by using UKP-SQuARE than in conventional lectures. To evaluate the effectiveness of the platform and our methodology, we conducted a survey to collect feedback from students who took our class. The results show that most of the students think\n7https://demo.allennlp.org/ reading-comprehension/\nUKP-SQuARE accelerates their learning process and reduces the overhead to get hands-on experience. We plan to extend our platform to support prompting large language models, and therefore, we leave as future work creating a curriculum to teach prompting methods.",
"Acknowledgements": "We thank Max Eichler, Martin Tutek, Thomas Arnold, Tim Baumgärtner, and the anonymous reviewers for their insightful comments on a previous draft of this paper. This work has been funded by the German Research Foundation (DFG) as part of the UKP-SQuARE project (grant GU 798/29- 1), the QASciInf project (GU 798/18-3), and by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK) within their joint support of the National Research Center for Applied Cybersecurity ATHENE.",
"A Questionnaire": "The questionnaire includes two parts:\n• Whether UKP-SQuARE deepens their understanding of QA topic. Some exemplary questions are:\n– Does UKP-SQuARE help you understand different types of QA systems better (e.g. extractive QA, abstractive QA)?\n– Does the adversarial attack tool in UKPSQuARE help you understand the potential vulnerability of QA models better?\n– Does the explainability tool in UKPSQuARE help you understand better how the model generates answers based on the input?\n– Does using UKP-SQuARE in the classroom make the lecture more dynamic and engaging?\n• Whether UKP-SQuARE makes it easier to get hands-on experience. Some exemplary questions are:\n– How long did you spend on the assignment?\n– If you don’t use UKP-SQuARE, what will you use to finish your assignment (which involves comparing different models, and adversarial attacks)?\n– Without UKP-SQuARE, how long do you think you need to finish your assignment(including searching for platforms or building a small service by yourself)?\n– How easy it is to use UKP-SQuARE to do adversarial attacks against models?\n– How easy it is to use UKP-SQuARE to explain the model output?\n– If you don’t use UKP-SQuARE and you need to perform adversarial attacks on your model, would you be able to complete the assignment? If so, how much more difficult would it be?\n– If you don’t use UKP-SQuARE and you need to interpret the answers of your model using saliency maps, would you be able to do it? if so, how much more difficult would it be?\n– Does UKP-SQuARE UI help you compare models easier? (eg: compared to using Jupyter Notebooks)?"
}
ACL_23_no_limitation/ACL23_1224.json
ADDED
@@ -0,0 +1,12 @@
{
"File Number": "1224",
"Title": "Analyzing Bias in Large Language Model Solutions for Assisted Writing Feedback Tools: Lessons",
"abstractText": "This paper analyzes winning solutions from the Feedback Prize competition series hosted from 2021-2022. The competitions sought to improve Assisted Writing Feedback Tools (AWFTs) by crowdsourcing Large Language Model (LLM) solutions for evaluating student writing. The winning LLM-based solutions are freely available for incorporation into educational applications, but the models need to be assessed for performance and other factors. This study reports the performance accuracy of Feedback Prizewinning models based on demographic factors such as student race/ethnicity, economic disadvantage, and English Language Learner status. Two competitions are analyzed. The first, which focused on identifying discourse elements, demonstrated minimal bias based on students' demographic factors. However, the second competition, which aimed to predict discourse effectiveness, exhibited moderate bias.",
"1 Introduction": "Assisted writing feedback tools (AWFTs) are a promising example of educational applications using Natural Language Processing (NLP) algorithms that can innovate and accelerate student learning (Nunes, Cordeiro, Limpo, & Castro, 2022). Recent advances in large language models (LLMs) have increased AWFTs’ capabilities to process and provide feedback on student writing with human-like sophistication (Kasneci et al., 2023). The Feedback Prize competition series, hosted on Kaggle in 2021-2022, was an important step in advancing AWFTs potential by crowdsourcing innovative LLM solutions for\nassessing and evaluating student writing that were open science (The Learning Agency Lab, n.d.).\nThe competitions were a success with over 6,000 teams participating and over 100,000 opensource algorithms developed. (The Learning Agency Lab, n.d.) However, these algorithms have not been reported outside of the Kaggle interface, limiting knowledge of their use and minimizing potential adoption into educational applications. Additionally, the algorithms have not been assessed for bias, which may limit their effectiveness in a classroom setting, especially if that bias is aimed towards student populations that have been historically marginalized. The purpose of this study is to report initial performance for the winning Feedback Prize models and to disaggregate performance accuracy in demographic factors including race/ethnicity, economic disadvantage, and English Language Learner (ELL) status.",
"2 PERSUADE Corpus": "The first two competitions in the Feedback Prize series were based on the PERSUADE (Persuasive Essays for Rating, Selecting, Analyzing, and Understanding Discourse Elements) corpus, a collection of ~25,000 argumentative essays written by students in the U.S. in grades 6 through 12 (Crossley et al., 2022). The essays were annotated by experts for discourse elements and the effectiveness of the discourse elements. Discourse elements refer to a span of text that performs a specific rhetorical or argumentative function, while discourse effectiveness is a rating of the quality of the discourse element in supporting the writer's overall argument. The effectiveness scale included Ineffective, Adequate, and Effective ratings. The annotation scheme for discourse elements is based on an adapted or simplified version of the Toulmin argumentative framework (Stapleton & Wu, 2015).\n242\nThe discourse elements that were annotated for each essay were:\n• Lead. An introduction begins with a statistic, a quotation, a description, or some\nother device to grab the reader’s attention and point toward the thesis.\n• Position. An opinion or conclusion on the main question.\n• Claim. A claim that supports the position.\n• Counterclaim. A claim that refutes another claim or gives an opposing reason\nto the position.\n• Rebuttal. A claim that refutes a counterclaim.\n• Evidence. Ideas or examples that support claims, counterclaims, rebuttals, or the\nposition.\n• Concluding Statement. A concluding statement that restates the position and\nclaims.\nThe essays were annotated using a rigorous, double-blind rating process with 100 percent adjudication, such that each essay was independently reviewed by two expert raters and adjudicated by a third rater. Overall inter-rater agreement for discourse elements assessed using a weighted Cohen’s Kappa was 0.73, which indicates relatively high reliability. While the experts who annotated the corpus for discourse elements also rated each element's effectiveness in supporting the writer’s argument, misalignment in segmentation between the raters in the discourse elements make it difficult to calculate inter-rater reliability for the effectiveness labels.",
"3 Feedback Prize 1.0 Models": "The first Feedback Prize competition,\n(Feedback Prize 1.0: Evaluating Student Writing) was hosted on Kaggle and involved the tasks of\nsegmenting essays into smaller sections and\nassigning each section a discourse label such as lead, position, claim, and evidence. To evaluate\nperformance, submissions were assessed based on\nthe word overlap between ground truth and predicted outputs. A model prediction was\nconsidered correct (true positive) if there was at\nleast a 50% word overlap between the machine-\nsegmented section and the human-segmented\nsection, as well as a match between their discourse\nlabel. False negatives were unmatched ground truths, and false positives were unmatched\npredictions. The final score was calculated by\ndetermining the number of true positives, false\npositives, and false negatives for each class (i.e.,\ndiscourse label) and taking the macro F1 score across all classes.\nThe analysis in this paper examines the second-\nplace, third-place, and sixth-place winning\nsolutions from this competition. Overall, the\nwinning solutions were broadly based on\nensembles of large-scale, pre-trained Transformers, paired with custom pre-processing and post-\nprocessing techniques to improve accuracy. The\nfirst-place model was not analyzed because its complexity made it difficult to replicate and\nimpractical in educational settings. The overall\nmacro F1 score did not differ significantly between the second-place, third-place, and sixth-place\nsolutions, with values of .740, .740, and .732,\nrespectively.\nTo assess potential bias in the models, performance accuracy was further disaggregated\nby demographic factors (race/ethnicity, English\nLanguage Learner status, and economic disadvantage) and discourse effectiveness\n(Ineffective, Adequate, Effective). Specifically, T-\ntests and ANOVAs indicated that the average true positive rate (TPR) per essay of the second-place,\nthird-place, and sixth-place models significantly\nvaried based on demographic factors, but the effect sizes were small (see Tables 1-3). None of the t-\ntests or ANOVA tests reported any results with a p-\nvalue < 0.01 and a Cohen’s d > 0.2. For instance, the t-test comparing TPR differences between ELL\nand non-ELL writing showed a p-value of 0.03 and\nCohen’s d of 0.103 for the second-place model,\nsuggesting a negligible difference in model\nperformance.",
"4 Feedback Prize 2.0 Models": "The second Feedback Prize competition (Feedback Prize 2.0: Predicting Effective Arguments) also hosted on Kaggle required models to predict the effectiveness rating of discourse labels, using multi-class logarithmic loss as the evaluation metric. More specifically, for each discourse label, the model had to submit the probabilities (or the likelihood) that the label belongs to each of the three effectiveness ratings (Ineffective, Adequate, Effective). The closer the predicted probabilities were to the actual true label, the higher the model score would be. Feedback Prize 2.0 also prioritized computationally efficient algorithms, with a prize-incentivized “Efficiency Track” that evaluated submissions for both accuracy and speed.\nFeedback Prize 2.0 comprised a smaller subset of the data from the first competition (around 6,900 out of the 26,000 essays), due to a need for greater balance in effectiveness scores. In the complete PERSUADE corpus, only 4% of discourse elements were labeled Ineffective while 80% were labeled Adequate and 16% were labeled Effective. The subset used in Feedback Prize 2.0 corpus had a distribution of 18% Ineffective, 24% Effective, and 58% Adequate, resulting in greater balance.\nThe analysis presented in this paper examines the performance of the winning models (first, second, and third place) in the Efficiency Track on the competition test set. A common trend among winning solutions from the Efficiency Track was to fine-tune a single pre-trained Transformer model on the competition dataset to minimize space and runtime requirements. The authors did not analyze the winners from the non-efficiency track because performance was similar, but computational demands were much higher. The\nanalysis consists of two parts. The first part examines the accuracy of the models in predicting the three original effectiveness ratings (Ineffective, Adequate, Effective). In the second part, the winning models' predictions were evaluated by grouping Ineffective and Adequate labels into a Non-Effective label, creating a binary outcome variable (Effective, Non-Effective). This analysis recoded the labels 'post hoc,' after the model submitted probabilities for all three original ratings. In both analyses, the model's predicted label was determined as the label with the highest predicted likelihood among the outputted probabilities.",
"4.1 Analysis of accuracy using original effectiveness ratings": "The first part of the Feedback Prize 2.0 bias analysis found that the selected winning models\nshowed higher levels of bias for certain students compared to the winning models from Feedback Prize 1.0. This disparity can be attributed to patterns in the label distribution of the data. The data sample for the Feedback Prize 2.0 competition had a more balanced representation of minority and historically disadvantaged students in the overall sample, but there were roughly twice as many discourse elements labeled Ineffective from economically disadvantaged students and almost three times as many Effective discourses from nondisadvantaged students.\nAs a result, effective writing discourses from white, non-ELL, and economically advantaged\nstudents were more likely to receive higher ratings\nand the models amplified the existing\ndisproportionate representation of effective writing found in the human-rated dataset. As shown in\nFigure 1, the first-place model was more accurate\nin identifying effective discourses in non-ELL writing (76% vs 27% accurate) with a statistically\nsignificant difference in likelihood scores (p-value\n~0.000) and a larger effect size (Cohen's d ~0.671), as shown in Table 4. As shown in Table 5, the first-\nplace model was also less accurate in predicting\neffective writing for economically disadvantaged students, and a t-test revealed that the difference in\nlikelihood scores for effective discourses was\nstatistically significant (p-value ~0.000) and the effect size was moderate (Cohen's d ~0.263).\nSimilarly, accuracy disaggregated by the\nrace/ethnicity of each student writer also showed statistically significant differences (p-values ~\n0.000), but with small effect sizes (Cohen's d ~\n0.15), as shown in Table 6 and Figure 2.",
"4.2 Analysis of accuracy using binary label of effectiveness": "the low sample size of Ineffective discourses in the\ndataset by recoding the effectiveness label as a binary variable. This involved combining\nIneffective and Adequate discourses into a Non-\nEffective label. The goal was to examine whether similar levels of bias persisted in the recoded label.\nCombining Adequate and Ineffective discourse\nlabels into a Non-Effective category did achieve\ngreater balance in performance accuracy for the Non-Effective label, but there remained bias in the\nprediction of Effective discourses because white,\nnon-ELL, and advantaged students remain overrepresented in this category, as shown in\nFigure 3.",
"5. Discussion": "The winning solutions across the first two Feedback Prize competitions reported a degree of accuracy comparable to that of humans, which is an important indicator of the models’ strength. Additionally, since the models are open-source, they can quickly be adapted into educational applications to not only assess student writing at a summative level but to also provide fine-grained feedback to students at the formative level.\nHowever, as noted in the analyses above, the winning solutions from the second competition that focused on predicting effective arguments showed a moderate degree of bias among factors related to race/ethnicity, economic status, and English Language Learner (ELL) status while the winning solutions from the first competition, which focused on annotating discourse elements, showed minimal bias.\nIt appears the models from Feedback Prize 2.0 amplified the biases inherent in the data despite not being explicitly trained with demographic\ninformation. Data bias in label distribution, label agreement, and demographic representation in the PERSUADE corpus may have contributed to the model bias, but it is unclear how well these factors could be addressed given current writing achievement disparities in the U.S. educational system (National Center for Education Statistics, 2012). Using a binary classification for effectiveness (i.e., recoding the data as Effective or Ineffective) helped to mitigate the bias in the models to some degree. However, the use of models from Feedback Prize 2.0 for educational applications should be handled with care, especially when dealing with students from diverse populations.\nThese analyses demonstrate the importance of assessing algorithms for bias prior to wide-scale adoption. The results point to future work in building educational NLP applications like AWFTs to identify potential data biases in label distribution, agreement, or demographic representation before adoption to reduce bias in algorithmic outputs and help ensure fairness in systems. As can be seen with the PERSUADE corpus, bias will likely be present in any dataset that accurately represents populations in the United States because of achievement disparities in the educational systems."
}
ACL_23_no_limitation/ACL23_1229.json
ADDED
@@ -0,0 +1,29 @@
{
"File Number": "1229",
"Title": "GrounDialog: A Dataset for Repair and Grounding in Task-oriented Spoken Dialogues for Language Learning",
"abstractText": "Improving conversational proficiency is a key target for students learning a new language. While acquiring conversational proficiency, students must learn the linguistic mechanisms of Repair and Grounding (R&G) to negotiate meaning and find common ground with their interlocutor so conversational breakdowns can be resolved. Task-oriented Spoken Dialogue Systems (SDS) have long been sought as a tool to hone conversational proficiency. However, the R&G patterns for language learners interacting with a task-oriented spoken dialogue system are not reflected explicitly in any existing datasets. Therefore, to move the needle in Spoken Dialogue Systems for language learning we present GrounDialog: an annotated dataset of spoken conversations where we elicit a rich set of R&G patterns.",
"1 Introduction and Motivation": "Many conversations are impromptu back-and-forth interactions that often have no prior preparation or review. As a result, conversational breakdowns (Benner et al., 2021; Li et al., 2020) may occur due to minor misinterpretation, mishearing, misspeaking, or a general lack of common ground (Traum, 1994). Interlocutors use Repair mechanisms (Albert and de Ruiter, 2018) to detect and resolve communicative problems during conversations; and Grounding mechanisms to establish common ground. For example, we often ask our interlocutors to repeat what they said, explain themselves, request clarifications, etc. Such processes arise proactively or when the initial communication attempt has failed, during which modification and revision to the previous utterances are needed to proceed the conversations naturally.\nAccording to Long (1983), R&G is meaningful in the following perspectives: 1) repair the dis-\n*This work was done while first author was an intern at ETS.\ncourse when breakdown occurs and 2) avoid conversational breakdowns. Table 1 shows an example dialogue between low-proficiency (LPS) and high-proficiency (HPS) English speakers, where LPS paraphrases themselves to repair the discourse when trouble occurs. Besides, speakers usually try their best to avoid breakdowns in conversations. Based on Long (1983), there are plenty of strategies they can adopt to prevent the breakdowns during communications: 1) relinquish topic control; 2) simplify topic by asking \"yes-no\" questions; 3) confirm comprehensions of speakers before proceeding, etc.\nFrom the perspective of a language learner, dialogues serve as important media in language acquisition and learning (Eszenyi and van der Wijst, 2006). When language learners chat with highproficiency speakers, language learners make considerable efforts to ground what they have to say (Eszenyi and van der Wijst, 2006). More specifically, the low-proficiency speakers (LPS) attempt to negotiate the meanings of conversations with high-proficiency speakers (HPS). According to Foster and Ohta (2005) and Cook (2015), interactional processes including negotiation for meaning and various kinds of repair and grounding are among the many ways learners gain access to the second language acquisition. Besides, LPS can also en-\n300\nhance their language skills, general communication skills and cultural knowledge during the conversations with HPS (Eszenyi and van der Wijst, 2006).\nWhile R&G is common in nearly all conversations, it is particularly important for language learners as learners are still building up the full understanding of the language. They may also bring R&G influences of their primary language into the language they are learning. It is also possible that low-proficiency speakers (or language learners) employ additional or different R&G mechanisms than high-proficiency speakers of a language. Therefore, there is a lot to know about R&G mechanisms from low-proficiency speakers.\nIn this paper, we present a dataset that can help linguists and other researchers with several novel linguistic tasks such as identifying R&G patterns. Further, while repair and grounding is an important linguistic mechanism, it is rarely reflected explicitly in the design of spoken dialogue systems that aim to help people learn a new language. 
Our dataset can fill this gap by allowing researchers to model dialogue state tracking with R&G, generating responses with R&G turns, etc.\nWe collected this dataset by connecting a high-proficiency speaker and a low-proficiency speaker on a crowd sourcing platform. The highproficiency speaker played the role of a human resources (HR) assistant in a wizard-of-oz style and was tasked to convey information about an interview. The low-proficiency speaker played the role of an interviewee and was tasked with finding specific information about the same interview through their conversation with the highproficiency speaker. While R&G may occur as a course of natural conversation, we further induced it by giving the interlocutors some conflicting and incomplete pieces of information. We collected the voice of the low-proficiency speaker and the text responses of the wizard.\nTo the best of our knowledge, GrounDialog dataset is the first task-oriented dialogue dataset specifically tailored for repair and grounding in spoken conversations between high-proficiency and low-proficiency speakers. Each dialogue in the dataset is transcribed by human experts and contains vocal markers and disfluencies, such as \"uh\" and \"um\". It is annotated with R&G types, intents, and slots that are relevant to dialogue state mapping tasks. Hence, GrounDialog can be used to develop a task-oriented conversational agent, equipped with\nthe R&G ability to detect communicative trouble, and adopt certain strategies to repair the discourse when trouble occurs.\nThe rest of the paper presents related work, details of the data collection process, the data annotation scheme, analyses of the data, and initial model benchmarks.",
"2 Related Work": "As indicated in Dorathy and Mahalakshmi (2011), task-based language teaching (TBLT) puts emphasis on the utilization of tasks as the critical element in the language classroom given that tasks can offer better contexts for active language acquisition and second language promotion. From the perspective of dialogue systems, it is the task-oriented dialogue (ToD) that can help language learners achieve their proficiency goals through task completion. Previous dialogue systems have shown great promise in increasing second language acquisition proficiency. Bibauw et al. (2019) provide an overview of all spoken dialogue systems for language learning. TimpeLaughlin et al. (2022) have compared learning language via role-play with a spoken dialogue system versus human, and found that spoken dialogue systems are a feasible alternative to human interaction in the role-playing context. Divekar et al. (2021) have found that interaction with spoken dialogue systems in immersive contexts improved students proficiency and decreased their anxiousness while using a foreign language thereby indicating there may be increased willingness to communicate with automated humanoid interlocutors. All this points to evidence that spoken dialogue systems are an effective tool for language acquisition.\nMany spoken dialogue systems for the use of language learning have been built using off-the-shelf intent and slot detectors, and dialogue state managers (Bibauw et al., 2019). Divekar et al. (2018) have found some repair and grounding mechanisms in their dialogue system for language learning such as systems being able to respond to learners’ questions like \"what do you mean\" or \"what can I say next\" in a rule-based system. However, quick scaling up for such systems can only come with datasets.\nSeveral datasets exist to help build task-oriented dialogues such as Schema-Guided-Dialogue (SGD) (Rastogi et al., 2020), MultiWoZ (Budzianowski et al., 2018), Dialogue State Tracking Challenges (DSTC) 1-3 (Williams et al., 2013; Henderson et al.,\n2014a,b) and DSTC 4-5 (Kim et al., 2017). Besides, there are other frequently used speech-based ToD data, including Fluent Speech Commands (FSC) 1, Audio-Snips (Coucke et al., 2018), Carnegie Mellon Communicator Corpus (CMCC)(Bennett and Rudnicky, 2002) and Let’s Go Dataset 2.\nHowever, existing task-oriented dialogue datasets do not reflect the language learning perspective as there are no constraints in their collection process that one interlocutor must be a low-proficiency speaker. Moreover, most datasets are also a result of a text-based interaction (Wang et al., 2019; Chen et al., 2021; Liang et al., 2021). This also means that the existing datasets will not contain R&G patterns specific for language learners interacting with a task-oriented spoken dialogue systems.\nTherefore, we present a new dataset, namely GrounDialog, which will be the first dedicated ToD dataset specifically tailored for R&G in HPS-LPS conversations. Besides, the dataset can address the need for R&G in spoken form in specific scenarios that do not exist in the text-based exchange.",
"3 Data Collection Set-up": "Our goal was to collect conversations between highproficiency (HPS) and low-proficiency speakers (LPS). To accomplish this, we use Amazon Mechanical Turk (AMTurk) to recruit and connect pairs of HPS and LPS for our study. To identify whether a participant is HPS or LPS, we provided the participants descriptions of CEFR levels (Council of Europe, 2001) and asked them to self-identify their proficiency level 3. For the purposes of this study, turkers who identify themselves as Beginner, Elementary, Intermediate, and Upper Intermediate i.e., A1-B2 levels were regarded as LPS; whereas those selecting Advanced and Proficient i.e., C1-C2 are considered as HPS. An assumption of our study is that we draw the line between HPS and LPS arbitrarily at B2 and trust the turker’s self-reported proficiency to be accurate. With this setup, we can end up with nearly equal size of HPS and LPS, which can ease the turker-pairing process for our data collection. A detailed explanation of the data collection process and conversational task for both HPS and LPS is shown below. Subsequently, we\n1https://fluent.ai/fluent-speech-commands-a-dataset-forspoken-language-understanding-research/\n2https://dialrc.github.io/LetsGoDataset/ 3The complete pre-chat survey form is shown in appendix\nA\nwill present the general statistics of the collected dialogues and users. The study was approved by the IRB of the institute conducting this research. All participants were adults and provided consent before starting data collection. All collected data released with the paper is anonymized to our best abilities.",
"3.1 Conversational Task": "In order to collect the conversational data that fits our purpose of having a conversation between an automated interlocutor and human, we follow the Wizard-of-Oz set-up (Kelley, 1984). The set-up has also been validated by many previous studies (Wen et al., 2016; Asri et al., 2017; Budzianowski et al., 2018). In general, two turkers (i.e. one HPS and the other LPS) were paired to communicate with each other. We contextualize their task into a pre-interview setting, where an HR hiring manager talks to an interviewee. Specifically, we set LPS to be the interviewee and HPS to be the HR hiring manager. We assign different goals for each role: the interviewee needs to find out the answers to a set of interview-related questions (e.g. interview time, duration, location, etc.), whereas the HR manager is given the information LPS will need and asked to be in charge of scheduling an appointment with the connected interviewee. To induce more repair and grounding turns in the conversation, we provided overlapping but inconsistent information to the interlocutors. For example, the interviewee is instructed that the interview is going to be 30 minutes, whereas the HR manager has 45 minutes in their task specification. We assumed that the difference in information will lead to the interlocutors being confused, asking clarification from each other, and resolving the situation by picking a time (Foster and Ohta, 2005).",
"3.2 Dialogue Interface": "To establish a stable live connection between two turkers, we adapted VisDial AMT Chat (Das et al., 2017) to connect two humans, enable voice input/output, and connect to an off-the-shelf textto-speech service.\nTo simulate a Wizard-of-Oz like setting, we enable the LPS to directly record their speech, whereas the HPS input texts into a chat box and their responses are converted into speech using an off-the-shelf Text-To-Speech. The synthesized speech is played on the LPS side. In this way, the LPS could get a feeling of being connected to a\n\"chatbot\", even though the responses are actually written by a human. The instructions for the LPS said that they will be connected to a human or a chatbot. In this way, we left it ambiguous for the LPS to decide for themselves whether they are talking to a chatbot or not. The HPS were told that they would appear as a bot so as to elicit bot-like communication from them. The example dialogue interfaces together with the instructions for both HPS and LPS are shown in Figure 8.",
"3.3 Data Statistics": "In total, we collected 42 dialogues, including 1, 569 turns, from 55 unique turkers, where there are 29 high-proficiency speakers (HPS) and 26 lowproficiency speakers (LPS). Dialogues collected in our dataset are fairly long, with an average number of 37.4 turns per dialogue. Figure 2 presents a distribution over the sentence lengths for both HPS and LPS. The average sentence lengths are 10.02 and 8.55 for HPS and LPS respectively. We collected a total of 793 spoken utterances from LPS, and 777 textual responses from HPS.",
"3.4 User Statistics": "After completing the conversational task, we asked each turker to input their demographic information through a post-chat survey form 4.\nSpecifically, for the turkers who did fill in our survey after the chat, there are 35 males and 16 females, with the age spanning from 22 to 63. The majority of the turkers are from India (45%) and the Unitied States (37%). Also, the self-identified English proficiency levels based on CEFR (Council of Europe, 2001) for the collected users are shown in Figure 1. As mentioned before, we take C1-C2 as high-proficiency speaker, and A1-B2 as low-proficiency speaker.",
"3.5 Speech data and transcriptions": "There are 793 audio recordings collected from the accepted LPS5, of which 586 audio files are transcribed by SpeechPad 6, a reliable third-party transcription service, and the remaining 207 files are manually transcribed by the lead authors to inspect the quality of the data. The details of the concrete quality inspection process can be found in appendix\n4Out of 55 unique turkers, four of them did not fill in the post-chat survey.\n5LPS is accepted based on the speech quality and conversation completeness with HPS.\n6https://www.speechpad.com/\nC. The minimum, maximum and mean duration for the audio files collected from LPS are 1.38s, 38.82s and 6.8s, respectively.",
"4 GrounDialog Corpus": "The primary goal of the data collection was to gather free-form conversations with repair and grounding (R&G) patterns, between highproficiency (HPS) and low-proficiency (LPS) English speakers. For this work, we constrain ourselves to the domain of job interviews, where an HR hiring manager attempts to schedule an upcoming interview with an interviewee candidate and answers any related questions. We leave the conversations in other domains to our future work.\nTo analyse the R&G patterns in the collected data from MTurk, we inherit R&G types from previous studies (Dobao and Martínez, 2007; Eszenyi and van der Wijst, 2006; Long, 1983; Foster and Ohta, 2005; Schegloff, 1997; Clark, 1996). The complete list of R&G types is shown in table 2. A detailed explanation of R&G annotation scheme is described below. In addition, similar to other taskoriented dialogue datasets (Budzianowski et al.,\n2018; Rastogi et al., 2020), we also annotated the intents and slots for our GrounDialog corpus. To ease our annotation process, we adopted Inception (Klie et al., 2018), which is an open-source annotation software platform.",
"4.1.1 Repair and Grounding": "R&G can occur over several dialogue turns. It contains the context of the initial communication attempt, questions, and finally a resolution. We tagged these in our dataset as: Context, Question, R&G type and R&G complete. The definition for each item type is defined as follows:\n• Context: the initial utterance as the context of the R&G.\n• Question: the utterance that triggers the disfluency of the conversation between the two speakers.\n• R&G type: the R&G type as defined in table 2.\n• Complete: the utterance that signals the completion of the R&G process.\nNote that R&G type is the required item for each R&G annotation, whereas Context, Question and Complete are optional. This is due to the fact that 1) some R&G types can be initiated without the Context and Question and 2) R&G process maybe not always completed as the conversation moves on.",
"4.1.2 Intent and Slot": "Based on the unified dialog acts ontology defined in He et al. (2022), we proposed ontologies for both intent and slot for our GrounDialog corpus. The full ontology is shown in table 3. The more detailed descriptions for each intent and slot are shown in appendix D.",
"4.2.1 Repair and Grounding Annotations": "The annotations for R&G, Intent and Slot are completed by the lead author. To ensure the quality of the annotations, the lead author and the second author manually inspected each item through comprehensive discussions. The questionable annotation items were corrected if the lead author agreed with the second author.\nFigure 3 shows the distribution of different R&G types (a) and R&G related annotations (b) in GrounDialog corpus. There are 269 annotations for R&G types, among which 155 are from HPS and 114 are from LPS. As you can see in figure 3 (a), approximately 30% of the R&G types annotated in HPS utterances are Proactive Grounding (PG). This is due to the fact that the HR manager tends to ask questions that proactively fill in the communication gap and encourage the interviewee candidate to engage in the conversations. For example, in cases when the interviewee candidate forgot to ask questions related to the location of the interview, the HR manager would ask Do you know how to get to the company?. On the other hand, as expected, LPS used more Clarification Request (CR) in their speech in order to negotiate and confirm critical information for the interview. The example CR is shown in table 2.\nAfter including Context, Question and R&G complete, we gathered 604 R&G related annotations, which is nearly 40% of all the dialogue 7. It can be observed in figure 3 (b) that both HPS and LPS leverage R&G for smoother communication, indicating the potential usefulness of our task setup in terms of negotiation of meaning in natural HPS-LPS conversations.",
"4.2.2 Intent and Slot Annotation": "As for the intent annotations in GrounDialog, there are 1, 884 in total, with the number of intents in HPS and LPS being 878 and 1, 006, respectively. Figure 4 (left) demonstrates the distribution of intents annotated in the corpus for both HPS and LPS. As you can see, the top two intents are inform and request, which is similar to larger dialogue datasets like Budzianowski et al. (2018). In our dataset, almost 90% of dialogue utterances have one or two intents indicating the potential of training a language understanding module with our corpus.\nFigure 4 (right) presents the distribution of slots annotated in both HPS and LPS responses. There are in total 612 slot annotations, within which 497 slots are annotated from HPS and 115 slots are from LPS. In our GrounDialog corpus, the HPS (i.e. HR managers) tend to give out information in multiple sentences. An example HPS utterance providing concrete location details of the interview is shown below:\n7Each R&G related annotation is associated with a single utterance. Therefore, the R&G ratio of our dataset is approximately calculated as: 604 / 1569 ≈ 40%.",
"ID R&G type Description Dialogue Example from GrounDialog": "Ways to commute to our company: from Penn Station; exit via southwest corner of the station, walk along the Broadway for 3 minutes. The company is on the right side of the road.\nIn the example, four values for the Location slot are in bold. This is also the reason why nearly 45% of the slots in HPS responses are Location. In general, HPS produced much more slots compared\nto LPS, which corresponds to the difference in the number of inform intent produced in HPS and LPS responses.",
"4.3 GrounDialog for Language Learning": "As the major focus of this work, it is beneficial to take a deeper look at the R&G related annotations in GrounDialog, and discuss the potential utilities of the dataset for language learning.\nAs we have analyzed in the previous section, nearly 40% of the utterances are related to R&G. Figure 5 also presents the distribution of number of R&G annotations per dialogue. Almost 80% of the dialogues have at least four R&G related annotations, showing the richness of R&G patterns in GrounDialog. In general, GrounDialog encapsulates 12 R&G types in the natural HPS-LPS conversations under our task set-up. According to Figure 3(a), the top three R&G strategies for HPS are proactive grounding (PG), self-clarification (SCL) and check understanding (CU), whereas LPS mostly uses clarification request (CR), selfparaphrase (SP) and self-repetition (SR). This indicates that GrounDialog explicitly encourages LPS\nto request clarification, rephrase or repeat previous utterances in cases when the initial communication with HPS failed.\nBesides, we specifically annotated R&G complete to mark the sentences that signals the completion of a R&G process. Based on Figure 3(b), among all 269 R&G annotated in GrounDialog, 174 of them are actually completed, leading to a 65% completion rate. Figure 6 shows the distribution of number of R&G complete per dialogue. Nearly 80% of dialogues have at least three R&G complete, again suggesting the richness of R&G patterns. Also, given the high frequency of R&G related annotations in figure 3(b), we can imply that HPS tends to initiate the R&G much more often compared to LPS in GrounDialog.\nFrom the language learning perspective, learners need R&G patterns to deepen their understanding of the language. For this purpose, GrounDialog can be used to train a chatbot that can generate responses conditioned on our R&G ontology to initiate R&G process, repair the communication gaps, and ground the meanings of conversations for\nthe language learners.",
"5 GrounDialog as a Benchmark for R&G in Task-oriented Dialogue": "GrounDialog is designed as the first dedicated taskoriented dialogue dataset incorporating R&G patterns in HPS-LPS conversations. To show the potential usefulness of the corpus, we break down the dialogue modelling task into two sub-tasks and report a benchmark result for each of them: R&G detection and dialogue state tracking. Specifically, we performed few-shot learning following recent\nadvances in large language models (Brown et al., 2020; Wei et al., 2022), by prompting two most popular large language models, namely ChatGPT and GPT-4 8, with our carefully engineered prompts for both tasks. The details for each prompt are shown in appendix E.",
"5.1 R&G Detection": "We show that by using the R&G annotations in GrounDialog, an R&G detection model can be trained to determine 1) if communication disfluencies occur; and 2) which type of R&G strategy (as defined in table 2) to choose in order to fix the potential disfluencies incurred in conversations.\nSimilar to previous section, we prompted GPT-4 for this experiment with the specific prompt defined in appendix E. Note that we tested on 40 out of 42 dialogues, excluding the two we used to design the prompt. For the utterances that do not need R&G, we ask the model to predict \"None\". The overall detection accuracy is shown in table 4 on the rightmost column 9. As we can see, prompting GPT-4 can achieve over 62% accuracy on the test dialogues, showing the potential of GrounDialog in training neural models in detecting R&G patterns in natural human-human conversations.\n8We used gpt-3.5-turbo for ChatGPT and gpt-4 (default 8k version) for GPT-4.\n9We do not report the results for ChatGPT since it failed to follow the prompt instructions.",
"5.2 Dialogue State Tracking": "A good conversational system requires robust natural language understanding (NLU) and dialogue state tracking (DST) modules. For our benchmark results, we specifically prompted ChatGPT and GPT-4, both of which are popular ground-breaking large language models (LLMs) these days, with our domain-specific prompts. We follow the evaluation metrics for slot extraction in MultiWoz 1.0 (Budzianowski et al., 2018), where overall slot accuracy and joint goal accuracy are reported. For intent classification, we report the general classification accuracy. Table 4 demonstrates the performance of both models in terms of both sub-tasks. As we have only eight slot types in GrounDialog, both models achieved fairly high scores in slot accuracy and joint goal accuracy, with GPT-4 slightly outperforming ChatGPT. With regard to classifying intents, both models achieved over 60% accuracy, even though we have a larger group of intents to classify. These results demonstrate the potential utility of GrounDialog in building a good taskoriented conversational agent with solid NLU and DST modules.",
"6 Conclusion and Future Work": "In this paper, we collected and annotated a new dataset GrounDialog, which is the first dedicated task-oriented dialogue dataset specifically designed for studying repair and grounding in spoken conversations between high-proficiency and lowproficiency speakers. We described the data collection procedure, annotation schemes, and presented a series analysis over the data. In addition, we demonstrated the potential and utility of GrounDialog by performing two tasks: R&G detection and dialogue state tracking. The results showed that GrounDialog can be used to train a conversational agent with the R&G capability. It could be further used to detect communicative gaps, which can be addressed in dialogue design.\nIn future, we plan to extend GrounDialog to a much larger dataset potentially covering multiple domains other than job interviews. Besides, we will use GrounDialog as a benchmark for a shared task to build task-oriented dialog agent with R&G ability. We will also conduct comprehensive user studies to determine the R&G patterns that are most useful in improving learner’s conversational proficiency during language learning. Further, we plan to present findings from the speech data so\nresearchers can use speech signals along with text to identify repair and grounding related turns.",
"A Pre-chat English proficiency self-identification survey": "See Figure 7 below.",
"B Dialogue interface and instructions for High-proficiency and Low-proficiency speakers": "See Figure 8 below.",
"C Audio data quality inspection": "This section details the process to inspect the quality of collected audio data. First of all, due to the fact that some collected audio contains long pauses (usually more than 10 seconds without any valid speech), we listened to each audio that is longer than 15 seconds carefully. Then we used ffmpeg10 to truncate the inspected audio which indeed contains long pause to the extend where the audio is natural and continuous. Next, for each audio data, we applied an internal automatic speech recognition tool to detect if the audio is silent all the time. As a result, we discarded all silent audio, and submit the remaining data to SpeechPad 11 for transcriptions.",
"D Descriptions of Intent and Slot": "In this section, we explain different types of intent and slots, and show some examples for better understanding. Specifically, we followed the conventions defined in (He et al., 2022). The descriptions for each intent and slot are shown in Table 5 and 6, respectively.\n10https://ffmpeg.org/ 11https://www.speechpad.com/",
"E Large Language Models prompts for Dialogue State Tracking and R&G Detection": "The prompts we used for experiments in section 5 are shown in Table 7, 8 and 9, respectively. The intent classification and slot extraction task are conducted on a single utterance, whereas R&G detection is conducted on a complete dialogue."
}
ACL_23_no_limitation/ACL23_123.json
ADDED
@@ -0,0 +1,5 @@
{
"File Number": "123",
"Title": "Multi-CLS BERT: An Efficient Alternative to Traditional Ensembling",
"abstractText": "Ensembling BERT models often significantly improves accuracy, but at the cost of significantly more computation and memory footprint. In this work, we propose Multi-CLS BERT, a novel ensembling method for CLS-based prediction tasks that is almost as efficient as a single BERT model. Multi-CLS BERT uses multiple CLS tokens with a parameterization and objective that encourages their diversity. Thus instead of fine-tuning each BERT model in an ensemble (and running them all at test time), we need only fine-tune our single Multi-CLS BERT model (and run the one model at test time, ensembling just the multiple final CLS embeddings). To test its effectiveness, we build Multi-CLS BERT on top of a state-of-the-art pretraining method for BERT (Aroca-Ouellette and Rudzicz, 2020). In experiments on GLUE and SuperGLUE we show that our Multi-CLS BERT reliably improves both overall accuracy and confidence estimation. When only 100 training samples are available in GLUE, the Multi-CLS BERTBase model can even outperform the corresponding BERTLarge model. We analyze the behavior of our Multi-CLS BERT, showing that it has many of the same characteristics and behavior as a typical BERT 5-way ensemble, but with nearly 4-times less computation and memory."
}
ACL_23_no_limitation/ACL23_1231.json
ADDED
@@ -0,0 +1,24 @@
{
"File Number": "1231",
"Title": "Recognizing Learner Handwriting Retaining Orthographic Errors for Enabling Fine-Grained Error Feedback",
"abstractText": "This paper addresses the problem of providing automatic feedback on orthographic errors in handwritten text. Despite the availability of automatic error detection systems, the practical problem of digitizing the handwriting remains. Current handwriting recognition (HWR) systems produce highly accurate transcriptions but normalize away the very errors that are essential for providing useful feedback, e.g. orthographic errors. Our contribution is twofold: First, we create a comprehensive dataset of handwritten text with transcripts retaining orthographic errors by transcribing 1,350 pages from the German learner dataset FD-LEX. Second, we train a simple HWR system on our dataset, allowing it to transcribe words with orthographic errors. Thereby, we evaluate the effect of different dictionaries on recognition output, highlighting the importance of addressing spelling errors in these dictionaries.",
"1 Introduction": "Early L1 learners typically write by hand, even in the digital age, and handwriting remains important (Ray et al., 2022; Danna et al., 2022; Mathwin et al., 2022). Automatic feedback on error types in learner language is available (Laarmann-Quante, 2017; Berkling and Lavalley, 2015), but faces the practical problem of having to digitize the handwriting first. Current handwriting recognition (HWR) systems yield very good results (Kizilirmak and Yanikoglu, 2022; Xiao et al., 2020; Li et al., 2021) with one crucial problem: they typically normalize away the orthographic errors (Neto et al., 2020) that are important for giving useful feedback to learners. In Figure 1, when humans read this handwritten word, they look at the shapes of the letters to form hypotheses. The first letter(s) could be a d or a cl and we decide about this informed by a hypothesis about the whole word. In this case, we see that it is probably supposed to be dounut, so the first letter is a d. We see that there is an extra letter u at\nthe third position which we ignore for forming our hypothesis about the word, but still recognize so that we could give a learner appropriate feedback about it.\nAutomatic handwriting recognition systems are typically trained and evaluated on handwritten text along with transcripts that do not contain orthographic errors. Many HWR systems contain a language model component (Scheidl et al., 2018) that is used to further normalize the output. As a result, HWR systems yield ‘clean’ transcripts without any orthographic errors (right branch in Figure 1) that cannot be used to give feedback on orthographic errors. Instead, we need HWR systems outputting transcripts that retain orthographic errors (middle branch in Figure 1).\nIn this paper, we tackle this problem by first creating a dataset of handwritten text with transcripts retaining orthographic errors. For that purpose, we created comprehensive transcription guidelines (Gold et al., 2023) that precisely define our transcription goal. This is necessary as handwritten text contains other artifacts beyond orthographic errors, such as strikethroughs or inserts that we need to transcribe. In total, we transcribe 1,350 handwritten pages from German learners and thus create a dataset that is comparable in size to widely used English datasets like IAM (Marti and Bunke,\n352\n2002) and CVL (Kleber et al., 2013). Given this dataset, we are then able to quantify to what extent existing baseline systems are unable to transcribe handwritten text, especially if we only use the underlying character recognition probabilities. We compare this with training the HWR system on parts of our data, enabling it (in theory) to learn to correctly transcribe words with orthographic errors.\nFurthermore, we change the dictionary used in the HWR system to also include systematic learner errors created by an automated generator. Note that providing the actual feedback is outside the scope of this paper. Here, we focus on analyzing the problem of turning an image of handwritten text into a digitized transcript, which is currently the main obstacle to applying existing feedback methods on a scale.",
"2 Existing Datasets": "For training and evaluating a handwriting recognition system that retains orthographic errors, we need a dataset combining images of learner handwriting with transcripts containing orthographic errors. To our knowledge, no such dataset exists.\nIAM and CVL are mostly in English and are often used to evaluate handwriting recognition systems. IAM in its version 3.0 is an extensive dataset and consists of about 1,500 pages with more than 13,000 text lines written by 650 adults, with different segmentation levels and corresponding transcripts. CVL is comparable to IAM with about 1,600 pages from 310 adult writers. The set consists of six English and one German text and thus has a slightly increased alphabet as the German Umlauts (ä, ö, and ü) are included. In comparison to IAM, it is only transcribed word-wise, ignoring most punctuation marks or strikethrough words, although a segmentation of text lines is available.\nThe Growth-In-Grammar GIG dataset (Durrant and Brenchley, 2018) is a learner dataset that retained orthographic errors. However, the corresponding image data is not available.\nIn contrast to GIG, FD-LEX (Becker-Mrotzek and Grabowski, 2018) is another learner dataset with published image data. In comparison to IAM and CVL where the participants copied a presented text by hand, this dataset consists of texts that were freely written based on a picture or a short story, and thus, more errors were made. Albeit, the transcripts from the FD-LEX dataset normal-\nize orthographic errors and ignore other noise (e.g. strikethroughs).\nIn conclusion, none of the existing datasets fulfills our need for available image data and a transcript containing orthographic errors.",
"3 Dataset Creation": "As no suitable dataset is available, we need to build one. We decided to use the German learner corpus FD-LEX as a starting point, as it already contains scans of learner handwriting with a sufficient number of orthographic errors. Looking at the example in Figure 2, we can see additional typical challenges for automatic handwriting recognition e.g. strikethroughs and inserts.\nFD-LEX was built as a corpus for analyzing the writing competence of learners. It covers two different German school types: Gymnasium (GYM) (‘academic track school’) and Integrierte Gesamtschule (IGS) (‘comprehensive school’) from two grades (5th and 9th) each. It has about 5,600 scanned color pages from about 940 children and is thus exceeding the IAM (1,500 pages) and CVL (1,600 pages) datasets in size. A detailed listing can be seen in Table 1. As stated, the transcript provided with the corpus was created under another focus (e.g. normalizing orthographic errors), thus we had to transcribe it anew.",
|
| 8 |
+
"3.1 Transcription Guidelines": "We first created transcription guidelines (Gold et al., 2023) to formulate rules on how to deal with different situations while creating an authentic transcrip-\ntion of the written form.1 Following the guidelines should yield an exact transcript of the handwritten forms while at the same time allowing conversion into readable text automatically. This approach ensures that the transcribed text accurately reflects the writing skills of the learner and enables researchers to identify any patterns or issues related to spelling deficiencies.\nWe now describe the main issues covered in the guidelines:\nText/line alignment One line of text in the image must correspond to the line of text in the transcript.\nContent Only the handwritten content of the learner should be transcribed. This excludes the printed text of the paper sheet as well as drawn figures.\n1The transcription guidelines can be found at https://github. com/catalpa-cl/learner-handwriting-recognition.\nIndistinct characters must be placed within curly brackets {}. When in doubt between two characters, the transcription should reflect the character that is appropriate in the given context. Learners may attempt to deceive teachers when uncertain whether a word should begin with a capital letter2 or not, resulting in both versions being written on top of each other. In such cases, both letters should be enclosed in curly brackets and separated by a plus (+) sign, with the first letter in curly brackets being the correct one in the context.\nSpacing should be carefully analyzed and considered in the context of the individual writing style. In cases where a gap between characters of the same word is noticeably larger than the average space between words, the spacing should be transcribed within curly brackets to indicate the deviation from the norm: {S }chool.\n2Particularly, since nouns are capitalized in German.\nSpelling errors are transcribed exactly as they appear in the original text, without any correction or modification.\nStrikethrough characters, words, lines When a character or a word is struck through, the transcript should represent the number of characters with a hash sign (#). If a line is made invalid in the same manner, the line is transcribed with three hash signs (###).\nInserts Direct inserts should be transcribed enclosed in curly brackets with a less-than sign, like < text. Indirect inserts, which are written at a different location such as at the end of a page, can be indicated by an asterisk (*) and a number if there are multiple inserts. These indirect inserts should be transcribed where they appear in the image. To do this, an {insert1 *} tag is added in the line where the text should be inserted, and the actual insert content is transcribed at the location where it appears with: {insert1 text}.\nPunctuation marks, special characters, emoticons All punctuation marks have to be transcribed as they appear, with the only exception that they should align with grammar rules in regard to spacing: correct: (However,) incorrect: (However ,). Special characters are treated individually for e.g. tally marks3 are transcribed with an ampersand (&) {|&}.\nWhile using special signs and encoding (e.g. at inserts or tally marks, strikethroughs), a conversion between different target transcriptions can be achieved, e.g. 
a) for a line-wise transcript of the genuine content to be used for HWR; or b) for a coherent text where inserts are inserted and the textline alignment is broken up to be used for semantic analysis.",
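To make the encoding concrete, here is a minimal sketch (ours, not from the paper) of how such guideline tags could be resolved into the two target transcriptions mentioned above. The tag syntax follows the guidelines, but the helper names and the exact resolution rules are assumptions:

```python
import re

def to_hwr_target(line: str) -> str:
    """Resolve guideline tags into a literal line-wise HWR target.
    Hypothetical helper: {D+d} keeps the first (contextually correct) letter,
    other curly-bracket tags like {S }chool are unwrapped, and strikethrough
    hashes (#) are kept verbatim; insert tags would need dedicated handling."""
    line = re.sub(r"\{(.)\+(.)\}", r"\1", line)  # ambiguous letters: first option
    line = re.sub(r"\{([^{}]*)\}", r"\1", line)  # unwrap remaining tags
    return line

def to_coherent_text(lines: list) -> str:
    """Very rough sketch of target b): drop strikethroughs and insert markers
    and join the lines into running text (real insert resolution would move
    the insert content to its marked position)."""
    text = " ".join(lines)
    text = re.sub(r"\{insert\d+[^}]*\}", "", text)  # naive: drop insert tags
    text = re.sub(r"#+", "", text)                  # drop strikethrough markers
    return re.sub(r"\s+", " ", to_hwr_target(text)).strip()
```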
|
| 9 |
+
"3.2 Annotation Process": "Following the guidelines, we re-transcribed about 1,250 pages, each by one annotator. To diversify our dataset, we transcribed the first 3 sets of each school type and grade (colored cells of Table 1). To assess the quality of the transcripts, some pages were transcribed by both annotators and the interannotator agreement (IAA) was computed. The double-annotation was done repeatedly during the whole transcription period and differences between the transcripts were discussed among annotators.\n3To keep track of word counts, the learners use vertical strokes after every ten words. We refer to them as tally marks.\nIn this way, a total of about 90 pages (subparts in green, see Table 1) were transcribed in parallel and both transcripts were merged into a gold transcription by an adjudicator. We achieved an IAA between both annotators of .98 on the character level and an IAA of .99 between both annotators and the gold label.4",
|
| 10 |
+
"3.3 Dataset Analysis": "Transcribing the data allowed us to examine the distribution of orthographic errors, i.e. spelling, word separation, and capitalization. For that purpose, we aligned our new transcripts with the original transcripts using word alignment and measured the word error rate (WER). As strikethroughs are words that were made invalid, they would only increase WER and thus were excluded from our analysis.\nIn Figure 3, it can be observed that there are many differences between our transcripts and the original transcripts, suggesting that the use of the original transcripts may not be ideal for HWR. Additionally, the results in Figure 3 show that the 9th grade had fewer errors compared to the 5th grade, while the GYM performed better than the IGS for both grades.",
|
| 11 |
+
"4 Baseline Experiments": "To track our recognition performance improvements, we create a baseline by training a straightfor-\n4While some characters may appear unclear to one annotator and the other annotator may see it differently, we decided to calculate the IAA by ignoring curly brackets.\nward handwriting recognizer on our dataset. Commonly, the performance of the recognizer is evaluated with two metrics, namely character error rate (CER) and word error rate (WER). While CER gives numerical feedback on how many characters have been misread by the recognizer, WER measures how many words are different from the gold-standard transcription. This means that lower values indicate better recognition performance. For the purpose of this paper’s focus on word-level analysis, we will concentrate on WER rather than CER.",
|
| 12 |
+
"4.1 Recognizer Setup": "For our experiments, we use a recognizer based on a convolutional neural network (CNN) architecture combined with a connectionist temporal classification (CTC) (Graves et al., 2006) for decoding. The designed architecture reduces the textline images from 2048x128 to 128x96 (Time-steps x Charset) in 7 CNN-layers, 2 BLSTMs, and a final dense layer. This architecture is based on Scheidl (2018), with CTC decoding and additional word beam search (WBS) for language-model decoding (Scheidl et al., 2018)5. We extended the character set used in the recognizer from 80 to 95 characters to cover all German Umlauts (‘Ä’, ‘Ö’, ‘Ü’, ‘ä’, ‘ö’, ‘ü’) and ‘ß’ as well as additional punctuation marks and special characters like ‘e’.\nWe use a text-line level recognizer and thus need a text-line segmentation. Thus, we first reduced the colored scans to gray level and removed ruled lines as proposed by Gold and Zesch (2022). To segment the full pages into text-lines we use a segmentation with the A∗ path finding algorithm. This algorithm works on a binary image and tries to find a path through the text lines while avoiding crossing handwritten strokes.",
|
| 13 |
+
"4.2 Baseline Setup": "To train the recognizer we first used as much data as possible and combined IAM (∼11,300 lines) and CVL (∼13,400 lines) with our dataset (∼12,200 lines). Furthermore, we use the gold transcripts which were transcribed by both annotators. These 91 pages (see Table 1) contain about 1,000 textlines and are referred to as test set in the following. With the described setup and the combined training data, the recognition performance results in a CER of 11.5% and a WER of 37.6% on our test set.\n5https://github.com/githubharald/SimpleHTR, https://github.com/ githubharald/CTCWordBeamSearch\nAs our dataset matches IAM and CVL in size, we decided to train the recognizer again based on our dataset only (without IAM and CVL). With this setup, we were able to improve the recognition performance slightly with a CER of 10.7% and a WER of 34.7% on our test set. With these recognition results, we decided to use this setup as our Baseline (Table 2).",
|
| 14 |
+
"5 Decoding with Dictionary Constraint": "Most research and publicly available databases for HWR pertain to adults. In these cases, spelling errors are typically ignored because they are estimated to be rare and not important to be kept in the output. Therefore, the predicted words can be mapped to a large dictionary of possible words, which has been shown to yield better recognition rates, as recognition errors can be eliminated this way (Scheidl et al., 2018).",
|
| 15 |
+
"5.1 Path Decoding and Word Beam Search": "The standard method to map the Neural Network (NN) results to a text string is the CTC (Graves et al., 2006). In a more detailed manner, the NN returns a matrix containing the probability distribution for each character along so-called time-steps along the line of text. The matrix is then further analyzed by a beam search decoder such as the vanilla beam search by Hwang and Sung (2016).\nHowever, without deeper knowledge, the beam search algorithm could randomly output an indistinguishably written character like ‘a’ as ‘o’, if the probability is the same. To avoid this, a commonly employed approach involves constraining the generated output to words that are contained in a predefined dictionary. This can be done with WBS as introduced by Scheidl et al. (2018).6 However, with traditional dictionaries which only contain correctly spelled words, spelling errors would be eliminated from the texts.",
|
| 16 |
+
"5.2 Lower Bound": "The ideal dictionary would consist of the vocabulary of the learners as well as the orthographic variants. To find out what the performance would be with such an ideal dictionary, i.e. to determine the lower bound for WER that would be possible with such a dictionary, we compiled a dictionary\n6Although the proposed algorithm of WBS includes a more sophisticated language model, we did not make use of it as the dictionary is increased enormously and thus increases the computational costs.\nfrom our transcripts of the test set. This means that this dictionary only contains words that appear in the texts to be recognized as well as the specific orthographic variants that are present in the texts.\nUsing this dictionary in the WBS decoder, we can reduce the WER from 34.7% to 25.0%. Compared to the baseline, this is an improvement of the WER of 10 percentage points, i.e. almost one-third. With the ideal dictionary, further recognition improvements could only be achieved by changing the model or training data. This means, that the achieved performance can be seen as the Lower Bound that we want to approach.",
|
| 17 |
+
"5.3 German Learner Dictionary": "For our purpose, we need a German dictionary covering the vocabulary of young learners in the first place. We decide to use childLex (Schroeder et al., 2015) for this purpose.7 The childLex corpus was created by extracting word forms from over 500 children’s books with a target age between 6 and 12 years. Although this age range does not cover the 9th-grade students from our dataset, it seems better suitable than a dictionary compiled from adult language. To slightly restrict the extensive vocabulary, we use a subset that comprises all word forms that occurred in at least ten different books (an arbitrary cutoff point)8. This is supposed to exclude rare and specialized words, which could distract the recognizer from choosing words that are generally much more likely to appear in a text. In total, the dictionary compiled this way contains about 45,000 word forms.\nUsing this dictionary in Word Beam Search, i.e. constraining the output possibilities to the dictionary words, resulted in a WER of 29.6%, which is an improvement of 5 percentage points compared to the baseline, see Table 2, row ‘WBS childLex’.",
|
| 18 |
+
"5.4 Specific Dictionary": "Since childLex is a generic dictionary compiled from books, it does not cover the whole vocabulary of the FD-LEX dataset. Therefore, we compiled another dictionary from the original transcripts of the FD-LEX dataset (in which orthographic errors were normalized) with a total of ∼11,850 words. Although the dictionary is smaller than the one compiled from childLex, it benefits from contain-\n7For the English community we want to mention a similar corpus https://www.sketchengine.eu/oxford-childrens-corpus/.\n8More precisely, if a word form is included, all related word forms with the same lemma are included as well.\ning only words which the learners wrote in relation to the topics of the dataset. For example, one of the texts is about an accident with a cyclist and therefore, 20 compound words containing the German word for ‘bicycle’ appear in the dictionary, whereas only 9 such words appear in the childLex dictionary. Overall, there is an overlap of about 7,150 words between the FD-LEX dictionary and the childLex dictionary.\nIncorporating the FD-LEX dictionary instead yielded a notable improvement in recognition performance at the word level compared to the baseline, achieving a WER of 31.3%, see Table 2, row ‘WBS FD-LEX’. However, it fell slightly short of the recognition accuracy obtained with the childLex dictionary.",
|
| 19 |
+
"6 Spelling Error Generator": "To approximate the Lower Bound (see Section 5.2), spelling variants must be added to the dictionary. Thus, we generate possible (systematic) spelling errors based on the procedure described in LaarmannQuante (2016). We generate possible misspellings for all words in the childLex and FD-LEX dictionaries. The error generation procedure works as follows: A correctly spelled word is automatically enriched with linguistic information such as phonemes, syllables, and morphemes, based on the web service G2P of the Bavarian Archive of Speech Signals (BAS) (Reichel, 2012; Reichel and Kisler, 2014)9, see also Laarmann-Quante et al. (2019a) for more information about these annotations. The information is then used to analyze (via a set of rules) which systematic errors could be made on this word. By systematic we mean that particular\n9https://clarin.phonetik.uni-muenchen.de/BASWebServices/ interface/Grapheme2Phoneme/\nprinciples of German orthography are violated, e.g. consonant doubling (*komen for kommen, eng.: ‘to come’)10, a syllabic principle, or final devoicing (*Walt for Wald, eng.: ‘forest’), a morphological principle (see Eisenberg, 2006 for the theoretical framework). We also generate errors reflecting the overuse of such principles, e.g. *Walld for Wald. Errors that cannot be explained via such principles (such as a seemingly random omission of a letter as in *Wad for Wald) are not generated because there is an infinite number of ways in which a word could be misspelled. We assume, however, that using the systematic errors in the sense described above, should capture most of the errors that the pupils commit because they are the major obstacles when learning how to spell in German.\nIn total, 57 different error categories can be generated (not all apply to each word, though, while some words may contain multiple instances of the same error category, e.g. when there are two doubled consonants in one word such as Wasserfall, eng.: ‘waterfall’). The error categories that can be generated can be found in Laarmann-Quante et al. (2019b).11\nOf course, more than one error can be committed within a word. We account for this by including all possible combinations of up to 2 systematic errors that apply to a word. Including all possible error combinations would lead to an exponential increase of misspellings to consider, most of which will be highly unlikely, though.",
|
| 20 |
+
"6.1 Coverage of the Dictionaries": "Applying the spelling error generation to all words in a dictionary results in an enormous increase in the number of word forms. As shown in Table 3, for the childLex dictionary, the number of words rises from 45,000 (row 2) to about 14 million (row 3). Likewise, FD-LEX with 11,000 words (row 4) rises to 3.6 million words (row 5).\nAs we see in the last column of the table, the original dictionaries only cover 74% (childLex) or 88% (FD-LEX) of the word forms present in the test set. Including the generated spelling errors, the coverage increases by 7-8 percentage points. However, even if FD-LEX and childLex and the spelling errors are combined (row 6 in Table 3), not all word forms are covered (90%).\n10We mark misspellings with an asterisk (*) in this paper. 11Under the levels PGI and PGII (‘Phoneme-Grapheme Correspondence Level’), SL (‘Syllabic Level’), and MO (‘Morphematic Level’)\nA manual inspection showed that one reason that not all vocabulary was covered, is that words may be capitalized at sentence beginnings in the texts, but the dictionaries do not contain capitalized variants of all words. However, including upperand lowercase variants for all words would nearly double the size of the vocabulary, which is computationally not feasible for WBS. However, it shall be mentioned that the inclusion of both letter cases increases the coverage rate to approximately 94% (row 7 in Table 3).\nWe further investigated the last 6% of missing coverage, which is 88 words. 30 of these were caused by incorrect word separation (14 words that were incorrectly written together; 9 interrupted words due to line-breaks; 5 separated words due to strict transcription (e.g. huge gap after the first character); and 2 miscellaneous cases). Another 24 words were not covered due to a missing letter and 3 times two letters were swapped. These are ‘unsystematic’ errors that were not generated. For 19 words, the errors were not covered by the generator but they appeared systematic in a sense that one may think of further rules to generate them in the future, e.g. if ‘i’ follows ‘l’ the learner tends to write ‘di’ instead of ‘li’. The few words left were not covered for various reasons, e.g. interference with transcription rules, more than 2 errors in the word, and 2 non-words (number plate of a car).",
|
| 21 |
+
"6.2 Influence of the Advanced Dictionaries": "In the following, we include the dictionaries (with and without generated spelling errors) in the decoding process of the HWR system with WBS to see if the recognition performance can be improved.\nThe results are shown in Table 2. We see in rows ‘WBS childLex’ and ‘WBS FD-LEX’ that including a dictionary (without spelling errors) already improves the recognition performance compared\nto the Baseline by 3-4 percentage points in terms of WER.\nHowever, adding spelling errors into the dictionary did not necessarily improve the performance. For childLex, the WER increases by 0.5 percentage points when spelling errors are added to the dictionary (compare rows 2 and 3). As discussed in Section 6.1, by adding spelling errors, the number of word forms included in the dictionary is increased extremely. Hence, chances are high that a wrong spelling variant or a spelling variant of another word is chosen. In contrast, the FD-LEX dictionary is more restricted to the vocabulary of the learners and thus could benefit from adding spelling variants: The recognition performance is increased by 1 percentage point when compared to the dictionary without spelling errors (see rows 4 and 5).\nThe best result was achieved by combining both dictionaries and their spelling errors. This way, the WER decreases to 25.9% and is thus within 1 percentage point of the Lower Bound.",
|
| 22 |
+
"7 Conclusion and Further Work": "In this paper we tackled the issue of retaining orthographic errors when automatically recognizing learner handwriting. This is a prerequisite for giving automated feedback on spelling performance based on handwritten texts.\nWe created a handwriting recognition dataset of German learner texts based on the FD-LEX dataset by transcribing 1,350 pages using new transcription guidelines. The utilization of a dictionary to restrict the output resulted in an improvement of our baseline. Furthermore, our results indicate that incorporating generated spelling errors leads to an improvement in recognition performance at the word level, with the error rate decreasing from 35% to 25%, representing a decrease of 10 percentage points.\nAlthough we were able to cover 94% of the originally used words using a spelling error generator, the huge number of words in the dictionary raises questions about its practicality. Therefore, one of the next goals should be to allow more probable errors while avoiding overwhelming the dictionary. Therefore, further analysis is necessary to determine which errors were made by learners in FDLEX and which ones were addressed by the generated errors. This information can be used to reduce the size of the error set by eliminating unnecessary or rare errors. Additionally, an analysis of com-\nmon error combinations can aid in generating more targeted errors while avoiding redundant ones.\nFurthermore, the focus of this study was not on improving the recognition model itself. However, recognition improvements could be made by implementing a more sophisticated model like full page recognition as introduced by Bluche et al. (2017).",
|
| 23 |
+
"Acknowledgments": "This work was partially conducted at “CATALPA - Center of Advanced Technology for Assisted Learning and Predictive Analytics” of the FernUniversität in Hagen, Germany."
|
| 24 |
+
}
|
ACL_23_no_limitation/ACL23_1235.json
ADDED
|
@@ -0,0 +1,16 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1235",
|
| 3 |
+
"Title": "Automated Evaluation of Written Discourse Coherence Using GPT-4",
|
| 4 |
+
"abstractText": "The popularization of large language models (LLMs) such as OpenAI’s GPT-3 and GPT-4 have led to numerous innovations in the field of AI in education. With respect to automated writing evaluation (AWE), LLMs have reduced challenges associated with assessing writing quality characteristics that are difficult to identify automatically, such as discourse coherence. In addition, LLMs can provide rationales for their evaluations (ratings) which increases score interpretability and transparency. This paper investigates one approach to producing ratings by training GPT-4 to assess discourse coherence in a manner consistent with expert human raters. The findings of the study suggest that GPT-4 has strong potential to produce discourse coherence ratings that are comparable to human ratings, accompanied by clear rationales. Furthermore, the GPT-4 ratings outperform traditional NLP coherence metrics with respect to agreement with human ratings. These results have implications for advancing AWE technology for learning and assessment.",
|
| 5 |
+
"1 Introduction": "Recent advances in large language models (LLMs; Brown et al., 2020), and in particular OpenAI’s GPT-4 model (Eloundo et al., 2023; OpenAI, 2023), have led to a paradigm shift with regard to what machines can generate, such as coherent writing. We are now witnessing the potential power and exponential growth of AI in education, though the impact of LLMs used for educational purposes is still largely unexplored. For instance, applications not intended for educational purposes, such as ChatGPT, are being used in educational contexts – everyone with access to the internet can now ask ChatGPT to complete writing tasks, from generating outlines and ideas, to summarizing documents, to essay writing. With these novel capabilities, we can see immediate advantages, such as leveraging GPT-4 for instructional purposes (e.g., automatic\nitem generation, see Attali et al., 2022), and disadvantages (e.g., increased plagiarism, see Eliot, 2022). In addition, we are learning about current potential shortcomings of LLMs (e.g., hallucinations or low-quality content generation) due to miscalibrated expectations of what LLMs can do or the pitfalls of non-optimized prompt engineering.\nTo further our understanding of one innovative application of AI in education, this paper presents an exploratory evaluation of LLMs for automated writing evaluation (AWE). Specifically, it is the first study to our knowledge to examine GPT-4’s ability to provide a rating (score) and rationale for one aspect of writing quality – discourse coherence quality – in test-taker written responses to an online, high-stakes writing assessment item. Discourse coherence is notoriously challenging to satisfactorily assess using AWE, and as such, there is great value in determining whether state-of-the-art AI can be used to improve upon prior options. We believe that the method described in the paper should be generalizable to similar datasets that are publicly available. However, caution in the use of GPT-4 ratings is warranted due to limited reproducibility, the possibility of bias, and limited insight into the underlying processes that determine the ratings.",
|
| 6 |
+
"2 Background": "In the field of AI in education, AWE is one of the most widely researched and mature areas. AWE systems evaluate written text quality (Shermis and Burstein, 2003, 2013; Attali and Burstein, 2006) and are widely used for high-stakes writing assessment and instruction. These systems are informed by theoretical writing subconstructs (i.e., factors contributing to writing quality) described in human scoring rubric criteria such as grammatical accuracy, lexical sophistication, relevance, and discourse coherence. These rubric criteria are developed and used by educational testing organizations for scoring purposes and are often informed by\n394\neducation policy (e.g., Common Core Standards, 2010 and Council of Europe, 2020). AWE systems typically provide a holistic score that indicates the overall quality of writing, given a set of rubric criteria. The performance of these scores (accuracy) is then reported through human-system agreement, a well-studied evaluation measure that is typically quite high on modern systems (e.g., Bridgeman, 2013).\nIn recent years, large language models (and earlier models pretrained on unlabeled text) have been leveraged to good effect in various ways to improve AWE performance through the use of “transformers”, a type of deep learning neural network. For example, Lagakis and Demetriadis (2021) found that the best AWE performance was achieved through a model incorporating linguistic features with the BERT language model (Devlin et al., 2019). More recently, Mizumoto and Eguchi (2023) explored the capabilities of GPT-3 to holistically rate testtaker essays in the TOEFL11 corpus (Blanchard et al., 2013). The researchers showed Human-GPT3 agreement rates to be reasonable (exact agreement 54.33%, adjacent agreement 89.15%). The model’s performance was then further improved by combining GPT ratings and a range of lexical, syntactic, and cohesion features, resulting in substantial Quadratic Weighted Kappa (QWK) of 0.61. Methodologically, it is important note that in their study, the same prompt was used in all conditions, and this prompt did not include examples or ask for rationales for the ratings. To our knowledge, there have been no similar studies with the newer GPT-4 or with comparing different prompt configurations to elicit ratings.\nWhile AWE systems show strong performance for holistic scoring, scores for discourse coherence quality alone have been a challenging area of NLP research (Hearst, 1997; Barzilay and Lapata, 2008; Burstein et al., 2013; Somasundaran et al., 2014; Lai and Tetreault, 2018). Although some discourse features can be considered “surface-based,” for example, pronoun referents and transition terms used in a text, operationalizing aspects of coherence such as the relationship between ideas is less straightforward and involves labor-intensive annotations or less easily interpretable LLM-derived features. In particular, it may be difficult to tell whether LLM-generated “analyses” of a text actually reflect the same aspects of writing that superficially similar human-written analyses describe.\nFurther complicating coherence assessment is the fact that different disciplines, from linguistics (Halliday and Hasan, 1976) to cognitive psychology (Graesser et al., 2004), to education research (Van den Broek et al., 2009), share slightly different views about how coherence is constructed by readers of a text. 
However, a common thread is that discourse coherence pertains to the textual continuity or flow of a text, that is, the overall sense of unity and meaning that is conveyed by a text. Within the construct of discourse coherence, assessment rubrics often directly or indirectly refer to subconstructs such as clarity (how easy to understand ideas and purpose; readability; and impact of lexis/grammar on coherence); flow (sequence/progression of ideas; use of linking words; and referencing); structure (appropriacy of paragraphing; introducing/concluding; and connection between topics); and effect on reader (naturalness of cohesion; appropriacy of cohesive features; repetitiveness; and helpfulness to reader for understanding the response).",
|
| 7 |
+
"3 Methods": "In this section we describe the dataset of test-taker responses and the processes for evaluating them through human and automated means.",
|
| 8 |
+
"3.1 DET coherence (DET-Coh) dataset": "The DET coherence (DET-Coh) dataset contains test-taker written responses from the operational Duolingo English Test (DET). The DET is a highstakes English language test whose primary use is for higher-education admissions. One of the writing tasks, Writing Sample, is an independent writing task in which test takers respond to a prompt requiring them to produce a persuasive or narrative extended piece of writing in five minutes (see Cardwell et al., 2023, for further details). Writing Sample is scored using AWE; the scoring model includes features to assess the writing subconstructs of Content, Discourse coherence, Grammar, and Vocabulary.\nIn total, there are 500 written responses in the DET-Coh dataset, sampled from the operational DET during a 7-month span in 2022. DET-Coh was deliberately constructed and stratified so that it contains an equal distribution of males and females, as well as an equal distribution of the seven most common first-language groups in the DET test-taker population (Chinese, Arabic, Spanish, Telugu, En-\nglish, Bengali, Gujarati). An approximately even distribution of proficiency levels was also ensured based on DET automated scoring models. These levels align with the levels of the Common European Framework of Reference (CEFR; Council of Europe, 2001, 2020), an international standard for describing language ability, ranging from level A1 (basic) to C2 (proficient) on a six-point ordinal scale.",
|
| 9 |
+
"3.2 Human scoring": "Test-taker writing responses were scored by four expert raters, each with second language (L2) teaching qualifications, extensive L2 teaching experience, and L2 assessment experience with international proficiency exams. Of the original 500 responses, 20 were double rated collaboratively for standardization, and 80 were rated independently by pairs of raters to assess interrater agreement. The interrater agreement for these 80 items was 0.72 exact agreement and 0.93 QWK, indicating excellent agreement. Having established rater reliability, the remaining 420 responses were rated by a single rater each.\nAll ratings were based on writing coherence task rubrics created for this study (see Appendix A, Table 2, for full rubric text). The rubric was developed using a 6-point, holistic scale that was based on the six levels/descriptors from the CEFR, other coherence research studies, and publicly-available rubrics from testing organizations. A rating of 0 was also given to blank or bad-faith responses in which the test taker did not attempt to respond to the prompt. In addition, one rater produced paragraph-long rationales for 12 of the ratings (two at each scale point) for the purposes of few-shot prompting (6 responses) and qualitative analysis (6 responses).",
|
| 10 |
+
"3.3 GPT-4 ratings and rationales": "To elicit GPT-4 coherence ratings and rationales, we used the OpenAI Python API. The full prompt given to GPT-4 for each student response consisted of the following ordered elements:\n• Task – a short paragraph explaining the task of rating the coherence of a written text written by a language learner in response to a prompt\n• Rubrics – see Section 3.2 for description\n• Guidelines – bullet point guidelines relating to expected terminology and style\n• Examples – six training items removed from the dataset (one from each scale point), accompanied by expert ratings and/or rationales (depending on the condition) for the ratings based on the rubrics\n• Prompt – the prompt the test taker responded to\n• Response – the test taker’s response\nBased on these elements, GPT-4 was called to complete three different conditions: 1) rating then rationale (rating-first), 2) rationale then rating (rationale-first), and 3) rating only (rating-only).",
|
| 11 |
+
"3.4 NLP coherence metrics": "As a baseline, coherence ratings were predicted using a set of simple NLP features based on CohMetrix (Graesser et al., 2004):\n• Binary overlap between sentence pairs: overlap of arguments, nouns, or word stems between two sentences\n• Proportional overlap between sentence pairs: overlap of content words as a proportion of all content words in a sentence pair\n• Coreference overlap: number of coreferent mentions between two sentences found using a neural coreference model (Lee et al., 2018)\n• LSA similarity: measure of the similarity between two sentences calculated using an LSA model trained on a large sample of writing responses\nTwo versions of each feature were computed, one considering only adjacent sentence pairs (“local”), and one considering all pairs of sentences in a response (“global”). For each response, we fit a linear regression model using the features and human ratings for all other responses, then predicted the rating for the held-out response.",
|
| 12 |
+
"4.1 Rating comparison": "Ratings from GPT-4 and the baseline model are compared to the human ratings on all items not included in the prompt (Table 1); for double-rated items the second rating was used. The findings show that the baseline linear regression model is moderately predictive of the human ratings, reaching an adjacent agreement score of 0.82 and Spearman correlation (ρ) of 0.47 despite its simplicity.\nAll GPT-4 conditions significantly outperform this baseline model, obtaining a correlation of 0.82 with the human rating in the rationale conditions.\nInspired by Mizumoto and Eguchi (2023), we also experimented with a linear regression model that includes the GPT-4 rating as an additional feature along with the baseline features, potentially combining the strengths of the two models. However, unlike that work, we found that the combined model performs almost identically to the GPT-4 ratings on their own and so do not analyze it further.\nThe rationale-first condition could be interpreted as a form of chain-of-thought (CoT) prompting (Wei et al., 2022) which has been shown to improve performance on reasoning tasks. That work also hypothesized that showing examples with the reasoning after the answer in the prompt could improve performance, by drawing attention to relevant aspects of the tasks, but found it performed similarly to the baseline and worse than CoT prompting. By contrast, we find that GPT-4’s agreement is slightly improved by the use of rationales, regardless of their position. However, there are no significant differences between the agreement rates of any of the GPT-4 configurations, with all versions showing overlapping confidence intervals. These findings suggest that there is not a CoT effect for this task.\nWe focus on the rating-first condition for error analysis. GPT-4’s ratings have less variance than human ratings (0.37 vs 0.42), especially producing fewer 1, 5, and 6 ratings (most samples rated 1 by\nhumans are rated 2 by GPT-4). This behavior is actually in-line with a well-documented tendency of human raters, the central tendency effect, in which raters avoid the extremes of rating scales (McNamara et al., 2019). One hypothesis to account for this pattern is that GPT-4 is imitating trends found in its pre-training data. When GPT4’s ratings differ from human ratings (n=143), they are also slightly but significantly lower on average (µ = 3.17 for GPT-4 in the rating-first condition vs µ = 3.41 for the comparable human rating, p=0.04 with Welch’s t-test). In the rating-first condition, GPT-4 mentions “spelling” in 43% of rationales where its rating differs from the human rating, versus only 30% of equally rated rationales. Speculatively, this may indicate an oversensitivity to spelling errors; human raters may be better able to discern the intended word while GPT-4’s tokenbased representation may prevent such recognition.",
|
| 13 |
+
"4.2 Rationale comparison": "The six human-generated rationales were compared to GPT-4 rationales in terms of their content and style. Figure 1 provides an example of a response with a 3 rating (CEFR B1; human and GPT-4 rating in agreement), answering a prompt about the advantages and disadvantages of using books, movies, and TV shows to learn about different cultures. Figure 2 shows the accompanying human and GPT-4 rationales. Of note, the trends exemplified in this set of examples hold true for all six pairs of human-\nGPT4 rationales we analyzed.\nComparing the content of the two rationales, there is a great deal of consistency, with both addressing the clarity, flow, structure, and effect on the reader. For example, both rationales describe how the writer’s position is initially presented and provide a specific example. The two rationales also note the same main weakness relating to the lack of development of the second point. The two rationales then move on to describe how discourse markers are used to achieve local coherence, even highlighting the same two examples of Firstly and Secondly. Examples of coherence negatively affected by language inaccuracies are then given, though different examples are used to exemplify this point in the two rationales. Finally, both rationales summarize the reason for the overall satisfactory effect on the reader.\nLikewise, in terms of style, the GPT-4 rationale has clearly adopted the examples and followed the guidelines from the prompt. The rationales use\nterminology such as the writer (rather than the author/student/learner), are written in the 3rd person, and are within the desired length range. The overall format of the rationale is also consistent, starting with an overall statement of coherence, moving to discuss each of the coherence subconstructs in turn, then closing with an overall description of the effect on the reader.\nTo further illustrate how GPT-4 rationales discuss and incorporate key concepts from the rubrics, we conducted a simple corpus analysis of key words. First, a frequency list was compiled of the most common words (tokens) in the rationales. We restricted this list to content words (nouns, verbs, adjectives, and adverbs) and only counted the first occurrence of each word in each rationale. Of interest, we noted commonly used words related to discourse coherence including ideas (n=509), developed (n=406), impact (n=297), inaccuracies (n=278), and [discourse] markers (n=264). Figure 3 presents a concordance of the first ten oc-\ncurrences of the most frequent of these key words, ideas, to provide the context in which this term is being used. Here we see that ideas are described in a number of ways, for example, relevant, appropriate, basic, and incoherent, all of which are descriptors used in the rubrics. As importantly, these ideas are discussed in terms of how they are presented and arranged in the response, and specific examples of test-taker ideas are listed, that is, there is a focus on content and meaning, not just mechanical use of linguistic features.",
|
| 14 |
+
"5 Discussion and conclusions": "This study examined the effectiveness of using GPT-4 for assessing written discourse coherence of test-taker responses on a high-stakes English proficiency test. We found that GPT-4 is able to rate the coherence of writing samples with a good degree of accuracy in terms of agreement with the goldstandard human ratings; regardless of the exact order of the prompt (rating-first or rationale-first), the exact agreement rates were >0.5 and the QWK >0.8. Prompts eliciting rating-only performed slightly worse, though not significantly so. Importantly, all permutations of the GPT-4 prompt greatly outperformed a baseline NLP model composed of traditional coherence features. Human-GPT-4 agreement rates could likely be improved with further tailoring of the prompt; for example, based on the qualitative analysis, we might suggest additional guidelines to lower the weighting that GPT-4 assigns to spelling errors as it may be overvaluing their importance.\nStudies such as this one have important implications for the field of AWE. There is often a tension between designing features that are easily interpretable but provide limited signal (e.g., the number of discourse markers) versus features which are less clearly aligned with human rubrics but which may provide more predictive power (e.g., perplexity of\nthe response under a language model). The promise of ratings based on GPT-4 is that they may bridge this gap by providing quantitative features which seemingly are based on aspects of language of importance to the language assessment community. In the future we therefore expect to see research in a similar vein which looks at further optimizing prompts to elicit ratings and clear, interpretable rationales, especially for subconstructs of writing which have historically been a challenge to measure through automated means. In using LLMs in this manner, we could reduce the “epistemic opacity” of AWE processes (Ferrara and Qunbar, 2022), that is, modern automated assessment could become less of a black box, thereby improving stakeholder confidence in the results. Nevertheless, although these results are encouraging, it is important to recognize that the interpretability promised by generated rationales is limited: GPT-4’s rationales may not accurately reflect the process used to assign the ratings. In particular, rationales may present rationalizations for decisions actually grounded in biasing features, as was found to be true of CoT explanations in Turpin et al. (2023). Rationales should therefore not be treated as offering insight into the process of generating ratings, even when they provide true and relevant information about the response.\nThe fact that rationales do not reflect a “thought process” by GPT does not, however, reduce their value in all contexts. As suggested in Mizumoto and Eguchi (2023), rationales can support language learning by providing instantaneous feedback. In the context of test takers of the DET, rationales such as the ones in this report are particularly useful because they are based on task- and constructspecific rubrics. For example, test takers completing a practice test would greatly benefit from feedback tailored to the writing subconstructs, such as discourse coherence, that will be assessed under\noperational test conditions. 
GPT-4 could also then be further beneficially exploited by querying it to produce an improved version of the test taker’s own response; in other words, a personalized model answer.\nFigure 4 is an example of one such model answer, revising the response from Figure 1. The same prompt as before was used for generating this revision, with the following amendment:\nNow, write a revised version of the following response with improved coherence according to the rubric. Stick closely to the original in content, and do not rewrite too extensively; simply improve the organization and complete unfinished ideas.\nIn this revision, we see that the test taker’s ideas are maintained, for example, the benefits of learning about how other cultures eat and dance. In addition, the appropriate use of some discourse markers from the original are left intact. In contrast, key coherence weaknesses from the original are addressed, most notably the lack of development of disadvantages and the language inaccuracies which impacted clarity. There remains some repetitiveness in the revision of language from the task prompt, but this issue did not prevent the revised response from being independently rated a 5 (CEFR C1) by both GPT-4 and a human rater. As such, this revision would seem a reasonable goal for this particular test taker.\nOn a broader level, the focus of our study, including the importance of transparency, is in line with the larger field of educational AI application development where responsible AI is a key focus (ATP, 2021; Dignum, 2021; ITC-ATP, 2021; Burstein, 2023; Department for Science, Technology & Innovation, 2023). As novel ideas, applications, and research questions emerge around the use of LLMs for educational purposes, it is essential that research communities investigating the use and impact of AI for education build a research agenda. In light of the need to ensure responsible use of AI in education, researchers need to anticipate and pressure test possible uses of AI for education to ensure fairness.",
|
| 15 |
+
"Acknowledgements": "We thank the raters for their contribution to the DET-Coh dataset."
|
| 16 |
+
}
|
ACL_23_no_limitation/ACL23_1239.json
ADDED
|
@@ -0,0 +1,12 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1239",
|
| 3 |
+
"Title": "ACTA: Short-Answer Grading in High-Stakes Medical Exams",
|
| 4 |
+
"abstractText": "This paper presents the ACTA system, which performs automated short-answer grading in the domain of high-stakes medical exams. The system builds upon previous work on neural similarity-based grading approaches by applying these to the medical domain and utilizing contrastive learning as a means to optimize the similarity metric. ACTA is evaluated against three strong baselines and is developed in alignment with operational needs, where low-confidence responses are flagged for human review. Learning curves are explored to understand the effects of training data on performance. The results demonstrate that ACTA leads to substantially lower number of responses being flagged for human review, while maintaining high classification accuracy.",
|
| 5 |
+
"1 Introduction": "Automated Short Answer Grading (ASAG) has been a longstanding educational application of NLP. The task of classifying the free-text responses to short-answer questions (SAQs) as correct or incorrect is made challenging by the fact that the same concept may be expressed in a myriad of different ways. The problem has received considerable attention, with several competitions organized on the topic such as a SemEval shared task by Dzikovska et al. (2013) or the ASAP 2 Kaggle competition1.\nMost broadly, the ASAG literature defines two scoring approaches: an instance-based approach, where a system is trained on a portion of the data and outputs a predicted score for a given new response, and a similarity-based approach, where each new response assumes the label of an annotated response it is matched to using some similarity metric (Bexte et al., 2022). In early work, pre-neural similarity-based approaches were shown to lag behind the less interpretable instance-based approaches (Sakaguchi et al., 2015). Since then,\n1https://www.kaggle.com/c/asap-sas\nneural similarity-based approaches have shown increasing promise by learning response (or questionresponse) embeddings and matching the pairs using cosine similarity (e.g. Schneider et al. (2022)). Bexte et al. (2022) proposed that the similaritybased approach can be further improved if the similarity metric is appropriately optimized. In their work, a pretrained Sentence-BERT model (Reimers and Gurevych, 2019) is fine-tuned on answer pairs and then a k-nearest neighbors classifier is used to match a new response based on its similarity to the labeled ones. These advances have led to a considerable improvement over the instance-based approach not only in terms of accuracy, but also in terms of interpretability and the need for less annotated data for training.\nIn this study, we present the ACTA system (Analysis of Clinical Text for Assessment), where we build upon the work of Bexte et al. (2022) by exploring the use of contrastive learning (Chopra et al., 2005) as a way to optimize the performance of similarity-based approaches and by applying the approach to the clinical domain. The contributions of this paper are as follows:\n• Exploration of the similarity-based ASAG approach in the clinical domain, which is characterized by a number of challenging idiosyncrasies such as complex terminology, extensive use of abbreviations, misspellings, etc.\n• Comparison of the results to three baselines: majority class, a similarity-based approach without finetuning, and a previous scoring system designed for the clinical domain.\n• System and evaluation design constructed in alignment with operational needs, where responses that do not satisfy a given confidence threshold are flagged for human review.\n• Exploration of learning curves with various training set sizes, as well as experimentation with various confidence thresholds.\n443",
|
| 6 |
+
"2 Data": "We perform experiments on two datasets containing short free-text responses to clinical test items.\nSet 1 consists of SHARP items (Short Answer Rationale Provision items) – an item format where examinees see a patient chart and are asked to provide a free-text response regarding the most likely diagnosis (e.g., “plantar fasciitis\", “dermatomyositis\" ), most appropriate next steps (e.g., “Administer corticosteroids then do arterial biopsys\"), causes (e.g., “Homocysteine and MMA levels in blood\"), etc.2 A total of 44 items were administered in a pilot involving 177 4th-year US medical students. Each student saw each item, resulting in a total of 7,788 responses (of which 2,807 were unique).\nSet 2 consists of short-answer questions, which present a vignette3 describing a clinical case. Similar to Set 1, the Set 2 responses included diagnoses, causes, and treatments, among other categories of responses. These items were administered to 8,162 US medical students as part of their Internal Medicine school subject exam. There were 71 Set 2 items, where each item was seen by an average of 176 examinees (SD = 12.620), resulting in a total of 12,508 free-text responses (5,696 unique).\nResponses from both sets were scored as correct or incorrect by content experts (physicians and nurse practitioners) using a scoring rubric for each item. For Set 1, two subject matter experts scored the items together as part of developing scoring guidelines for future pilots (hence agreement statistics for independent scoring cannot be reported). Another group of physicians reviewed the scores and confirmed agreement with the scoring procedure. For Set 2, four judges scored the items. Kappa coefficients (based on unique responses) for the six possible pairs of judges ranged from 0.89 to 0.92, indicating strong agreement. Scoring resulted in 5,201 correct responses (66.78%) for Set 1 and 8,086 (64.64%) for Set 2.",
|
| 7 |
+
"3 Method": "We use contrastive representation learning (Chopra et al., 2005) to encode responses into embedding vectors such that responses with the same score have similar embeddings and responses with dif-\n2Other aspects of the SHARP item format that refer to subsequent steps for measuring clinical reasoning are not described here.\n3See Ha et al. (2020) for a detailed description of the use of vignette-based SAQs in medicine.\nferent scores have very different ones. For any given two responses, the degree to which they are matched can then be measured by the cosine similiarty between their embedding vectors. Similar to Bexte et al. (2022), we use Sentence-BERT (a.k.a. SBERT) to derive the embeddings for each response, since the model introduces a modification of the pretrained BERT network that “reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds\" (Reimers and Gurevych, 2019).\nFirst, we pair up every response with every other response for the same item. Each pair is assigned a label of 1 if both responses have the same score (both correct or both incorrect), 0 otherwise. For each pair, the two responses are passed to SBERT independently, producing two sentence embedding vectors (one for each response).\nThe contrastive loss encourages the model to minimize the embedding distance when responses have the same score, and maximize the distance otherwise. To do that, the cosine similarity and the cosine distance between the sentence embedding of the first response e1 and the sentence embedding of the second response e2 are defined as:\nsimilarity(e1, e2) = eT1 · e2\n||e1||||e2||\ndistance(e1, e2) = 1− similarity(e1, e2)\nThen, the contrastive loss is defined as\nL(e1, e2, label) = label · (distance(e1, e2))2+\n(1− label) ·max(0,margin−distance(e1, e2))2\nwhere margin is a hyperparameter, defining the lower bound distance between responses with different scores. One advantage of contrastive loss over cosine similarity loss is that it goes to 0 for negative pairs when the distance is farther than the margin. When dissimilar inputs are sufficiently distant there is no more pressure on the model to keep pushing them apart, which could allow the model to focus on improving the most erroneous cases.\nDuring inference, the trained model is used to compute the cosine similarity between the sentence embedding of the new response and the sentence embedding of every annotation (i.e., responses of the same item in the training set). If the highest\ncosine similarity is less than a given threshold, the new response is labeled as unmatched and flagged for human rater review. Otherwise, the new response assumes the score of the annotation that it has the highest cosine similarity with. For detailed training parameters, see Appenidx A.",
|
| 8 |
+
"4 Experimental setup": "Baselines: We compare the approach proposed in ACTA to three baselines: a majority class baseline (always predicting a correct response); ACTA No finetuning – a similarity-based approach using SBERT, where the model was not trained to optimize the similarity metric. We use all-MiniLML6-v24, which has been pretrained on 1B sentence pairs, as our backbone model for both SBERT-notraining and SBERT. Finally, the INCITE system (Sarker et al., 2019), which is specifically developed to score clinical text by capturing a variety of ways clinical concepts can be expressed. INCITE is a rule-based modular pipeline utilizing custombuilt lexicons, which contain observed misspellings for medical concepts and non-standard expressions, as well as common concepts and abbreviations from online resources. The tool performs direct and fuzzy matching between a new response and an annotated response (or a lexicon variant of it) using a fixed or dynamic Levenshtein ratio threshold (in our case - .95). Full details about the INCITE system are available in Sarker et al. (2019).\nLearning curves: We compare the approaches by experimenting with different training set sizes and evaluating on the same test set of 20% held-out data (1,5K responses for Set 1 and 2,5K for Set 2). This provides insight on an important practical consideration - how much training data is enough\n4https://huggingface.co/sentence-transformers/allMiniLM-L6-v2\nto train a reliable and accurate model (Heilman and Madnani, 2015). To emulate an operational scenario, the division of training and test sets (and the increase in training data) are based on the chronological order in which the responses were received.\nEvaluation metrics Another practical consideration is to directly answer two questions of operational significance: \"How accurate is the system for responses that it is able to score?\" and \"How many responses do human raters still need to score manually?\". To address these, we present two separate metrics – F1 for matched responses and total number of unmatched responses – as opposed to capturing the number of unmatched responses through the measure of Recall. This setup allows the selection of more strict or liberal thresholds depending on the intended use, e.g., high-stakes summative assessment where high precision is paramount vs. formative assessment, where there can be a trade off between precision and wider response coverage.\nThresholds: A conservative similarity threshold of .95 is selected apriori to ensure high confidence that the matched responses are scored correctly. All items below that threshold are considered unmatched and are sent for human scoring. We first present detailed results for this threshold. Next, we experiment with a variety of other thresholds and compare their effect on the two evaluation metrics.",
|
| 9 |
+
"5 Results": "The majority class baseline was .79 for Set 1 and .794 for Set 2. The remaining results for a threshold of .95 are presented in Table 1. As can be seen, all three systems (INCITE, ACTA No finetuning, and ACTA Finetuned) achieve very high F1 scores for the responses they were able to match for Set 1 (lowest F1 was .977 for ACTA Finetuned and .984 for INCITE). For the much larger Set 2, we see a higher F1 score range of .97 - .99 for ACTA\ncompared to .88 - .90 for INCITE. The F1 score remains high when evaluation is performed using 5-fold cross validation (not shown in the tables): the average ACTA Finetuned F1 across folds for Set 1 is .985 with an average number of unmatched responses across folds = 49.8. For Set 2 the F1 score is .98 with an average number of unmatched responses across folds = 88.8. Overall, the results suggest a consistently high level of confidence in ACTA’s output for all matched responses.\nWhen looking at the unmatched responses, we see dramatic differences between the three systems. When training on more than 40 examinees, INCITE and ACTA No finetuning have significantly more responses that require human review and increasing the amount of training data leads to small improvements. ACTA Finetuned leaves fewer unmatched responses and continuously improves with the addition of more training data. These results show the when finetuned using contrastive loss, ACTA can ultimately save more human effort than INCITE and that the gains increase with data size.\nNext, we experiment with different matching thresholds by replacing the .95 value with a range of values: .98, .90, .85, .80, .75, .70, and .65. F1 remains high even with lower thresholds: For Set 1, the lowest F1 is .937 (threshold = .65 when training\non data from 20 examinees). For Set 2 it is .95 for the same configuration (for detailed F1 results for each threshold, see figures 1 and 2). The number of unmatched responses, however, decreases significantly (see Figures 3 and 4) – there are either 0 or 1 unmatched responses in both sets across all training configurations for threshold .65. This shows that with more liberal thresholds, the need for human scoring almost disappears (except the need for continuous quality verification). Selecting the right trade-off between F1 and number of responses that need to undergo human review remains an operational decision.",
|
| 10 |
+
"6 Conclusion": "This study showed that a similarity-based clinical ASAG system finetuned using contrastive loss outperforms the INCITE and ACTA No Finetuning baselines. Lowering the similarity threshold value significantly decreases the number of unmatched responses, while – contrary to expectation – the F1 score remains high at > .93 across conditions. The condition of weakest supervision – training on 20 examinees from Set 1 with a similarity threshold of .65 – shows that 880 annotated responses are\nsufficient to score all 1.5K test set responses with F1 = .93. Similarly, when training on 20% of the data from Set 2 with threshold of .65, all 2.5K test set responses are scored with F1 = .95.\nThe evaluation setup allows operational experts to balance the confidence threshold with a minimum necessary F1 score, where items with more errors can have more stringent similarity thresholds and vice-versa. The threshold may also vary depending on intended use: formative exams may tolerate a lower F1 to gain wider coverage, while summative assessments may have stricter criteria.\nIn addition to its accuracy and wider coverage of responses, the interpretability of ACTA as a similarity-based system is an important advancement in clinical assessment compared to instancebased ASAG systems (e.g., Ha et al. (2020)). Interpretability holds special significance in the realm of automated scoring, as the value of the scores depends on the trust placed by various stakeholders (such as faculty, students, and residency selection programs, among others) in their fairness, reliability, and validity.\nLike many other products, automated scoring tools are complex systems that have a significant impact not only because of their technical capabilities but also due to how they are used and the way their results are interpreted. Misusing these tools or interpreting their outputs incorrectly can lead to serious ethical issues. In a summative context, the models described in this article are intended to be used as hybrid systems, where human raters always review borderline cases. In a formative context, it is crucial to carefully examine the relationship between the use of the system and its impact on learning outcomes, as this is essential evidence for validity.\nNext steps include exploration of the effects of different \"gaming\" strategies (e.g., intentionally providing generic instead of specific answers) and potential differential functioning across demographic groups. Notably, ACTA is intended as a hybrid system, where cases of examinees who perform near or below the passing standard are reviewed by human experts.",
|
| 11 |
+
"A Appendix": "batch_size = 32; log_every_n_step = 100; lr = 0.00002; margin = 0.5; max_length = 512; model_name_or_path = \"sentencetransformers/all-MiniLM-L6-v2\"; num_epochs = 1; num_training_participants = 142; num_workers = 8; threshold = 0.95; warmup_ratio = 0.1; weight_decay = 0.01"
|
| 12 |
+
}
|
ACL_23_no_limitation/ACL23_1241.json
ADDED
|
@@ -0,0 +1,18 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1241",
|
| 3 |
+
"Title": "Training for Grammatical Error Correction Without Human-Annotated L2 Learners’ Corpora",
|
| 4 |
+
"abstractText": "Grammatical error correction (GEC) is a challenging task for non-native second language (L2) learners and learning machines. Datadriven GEC learning requires as much humanannotated genuine training data as possible. However, it is difficult to produce larger-scale human-annotated data, and synthetically generated large-scale parallel training data is valuable for GEC systems. In this paper, we propose a method for rebuilding a corpus of synthetic parallel data using target sentences predicted by a GEC model to improve performance. Experimental results show that our proposed pre-training outperforms that on the original synthetic datasets. Moreover, it is also shown that our proposed training without human-annotated L2 learners’ corpora is as practical as conventional full pipeline training with both synthetic datasets and L2 learners’ corpora in terms of accuracy.",
|
| 5 |
+
"1 Introduction": "Grammatical error correction (GEC) is one of the essential processes needed to produce sentences in a grammar-based language, and it is a challenging task for non-native second language (L2) learners and learning machines as well. Each language has its own grammar, however, data-driven language learning by a machine does not use the grammar, but corpora, more preferably, large-scale corpora. While classifiers that predict some token from candidates for a certain position in a sentence have been developed in the past (Li et al., 2019), sequence-to-sequence models have become more popular for GEC because the task is regarded as a sequence-to-sequence one and the models are flexible in editing sentences and covering various error types.\nIn sequence-to-sequence models, Felice et al. (2014) and Junczys-Dowmunt and Grundkiewicz (2014) treat the task as a statistical machine translation (SMT) problem and produce state-of-the-art\nperformance on the CoNLL2014 shared task. Neural machine translation models (Sutskever et al., 2014), which consist of an encoder and a decoder, also have been investigated to improve their capabilities. In particular, the Transformer (Vaswani et al., 2017), which is an encoder-decoder model incorporating a self-attention mechanism, has become popular and various improved versions have been investigated. One of its alternative architectures is the Copy-Augmented Transformer, which has become popular for GEC (Hotate et al., 2020).\nAnother modification to the Transformer architecture is altering the encoder-decoder attention mechanism in the decoder to accept and make use of additional context. For example, Kaneko et al. (2019) use the BERT representation of the input sentence as additional context for GEC. GECToR (Omelianchuk et al., 2020) employs a BERT-like pre-trained encoder stacked with a linear layer with the softmax activation function, and treats the GEC task as a token labeling problem. Addressing training data for GEC models, Kiyono et al. (2019), Grundkiewicz et al. (2019) and Choe et al. (2019) employ synthetically generated pseudo data for pre-training of GEC systems prior to fine-tuning on human-annotated corpora for the Building Educational Applications (BEA) 2019 shared task (Bryant et al., 2019).\nThis paper addresses the effectiveness of synthetic parallel data, which is generally used as a consequence of the insufficiency of humanannotated L2 learners’ corpora. We propose a method of substituting target sentences in synthetic parallel data with alternatives and rebuilding synthetic datasets to boost GEC training. Experiments demonstrate that pre-training on synthetic datasets rebuilt by the proposed method outperforms pretraining on the original synthetic datasets. Moreover, our synthetic datasets can be effectively employed not only to pre-train, but also to fine-tune GEC models, that is, training on synthetic data only\n455\nall through the pipeline. The GEC model’s training without L2 learners’ corpora is as practical as conventional training with both synthetic datasets and L2 learners’ corpora in terms of accuracy.",
|
| 6 |
+
"2.1 Generating synthetic training data": "Supervised machine learning requires as much genuine training data as possible, and the same is true for GEC. Training data or corpora for GEC may be created with annotations by trained native speakers of the language or by grammarians. This fact makes it difficult for us to produce larger-scale genuine data, so researchers are compelled to use limited resources to train their learning models (Bryant et al., 2019). Therefore, synthetically generated large-scale parallel training data contributes to GEC systems along with the human-annotated data.\nSynthetic parallel training data consist of erroneous sentences generated by corruption models from error-free sentences. In general, the corruption models can generate unlimited versions of erroneous sentences from a given error-free one, with the ability to vary the versions in the number of errors, error types, etc. Back-translation (Sennrich et al., 2016) provides monolingual training data with synthetic source sentences that are obtained from automatically translating the target sentence into the source language for NMT. Kiyono et al. (2019) apply back-translation to GEC and achieves state-of-the-art performance on the CoNLL2014 and BEA2019 test datasets.\nPIE synthetic data (Awasthi et al., 2019) is often used in state-of-the-art GEC models proposed by Omelianchuk et al. (2020); Sorokin (2022), etc. Seq2Edits (Stahlberg and Kumar, 2020) is a sequence-to-sequence transducer which consists of a Transformer encoder and decoders, and can predict span-based edit operation probabilities for GEC. Stahlberg and Kumar (2021), furthermore, propose tagged corruption models using both Seq2Edits and a finite state transducer to match the observed error type distribution of the BEA2019 dev dataset, and generate synthetic data for pretraining GEC models.",
|
| 7 |
+
"2.2 Problems in synthetic training data": "Given some noise to an error-free (grammatically correct) sentence, a system can generate a different version of the sentence which is generally regarded\nas a grammatically incorrect sentence. However, it does not always become an incorrect sentence. Table 1 shows some examples of inappropriate edits on the PIE-9M1 and the C4-200M2 synthetic datasets. The PIE model (Awasthi et al., 2019) and the tagged corruption model (Stahlberg and Kumar, 2021) each applies deletion to the source sentence, removing an adverb. In the PIE-9M synthetic dataset, the system removes the word also from the source sentence y1 to generate the erroneous sentence (Corrupted), and the edit to correct the sentence is missing also to recover from the error. However, the removed word is not necessarily required for the sentence x1 because it is an additive adverb, so the corrupted sentence x1 itself is an error-free sentence whose edit should be nooperation. The table also shows the same case in the C4-200M synthetic dataset. Note that Source is a target sentence to be outputted from a GEC model and Corrupted is a source sentence inputted to the model. The examples are cases where the original error-free sentences (Source) are inappropriate for the target sentences.\nLarge-scale synthetic parallel training datasets are often used to pre-train a GEC model prior to its fine-tuning on small-scale genuine datasets. The genuine datasets for the fine-tuning are annotated by trained native speakers of the language with respect to L2 learners’ mistakes because the GEC model is expected to correct L2 learners’ mistakes in text. Synthetic data for pre-training, therefore, should also match the data characteristics of L2 learners’ grammatical mistakes as shown in humanannotated datasets to be employed in the final training. The corruption mechanism produces unexpected inappropriate edits on synthetic data that differ from human errors. Finally, synthetic data, itself, is one of the key resources for building better GEC systems.",
|
| 8 |
+
"3 Erroneous synthetic data rebuilt by GEC models": "In this section, we further examine the problem described in the previous section and propose to rebuild conventional synthetic datasets, which are often employed by researchers, in order to create effective synthetic parallel training datasets for pre-training. A trained GEC model can be\n1https://github.com/awasthiabhijeet/PIE/ 2https://github.com/google-research-datasets/C4_200M-\nsynthetic-dataset-for-grammatical-error-correction/\nrepresented by g(xi), where xi(= (xi1, · · · , xin)) is the ith erroneous input sentence with tokens xij(1 ≤ j ≤ n), g(xi) is the ith predicted output sentence: ỹi = (ỹi1, · · · , ỹim). We train the model g with given datasets of incorrect and correct sentence pairs: D = {(xi,yi)|i = 1, · · · , N}, where the size of D is N , so as to decrease the difference (loss) of yi between ỹi.",
|
| 9 |
+
"3.1 Process of generating synthetic data": "Fig.1 shows a general process for generating synthetic parallel data consisting of an incorrect and correct sentence pair. The sentence yi is an errorfree sentence from a large-scale corpus such as Wikipedia, BookCorpus (Zhu et al., 2015) and the Colossal Clean Crawled Corpus (C4) (Raffel et al., 2020), and a corruption model produces some grammatical errors in the sentence yi resulting in an erroneous sentence xi. The sentences xi and yi are the input sentence to a GEC model and the sentence that should be inferred by the model, respectively. The arrow from yi to xi is a noising process to add the errors, and the reverse dotted arrow is a de-noising process to restore the erroneous sentence to the correct form. In some cases, however, the target sentence of the noisy or erroneous sentence xi should not be the unedited sentence yi, but another sentence ŷi.\nThe noising and de-noising processes of the corruption models, therefore, often have irreversibility, and the hypothetically correct sentence ŷi does not always match the unedited error-free sentence yi. On the other hand, the process of generating a correct sentence ŷi from the erroneous sentence xi by human annotators on genuine parallel data matches the correction process, and can create a\ndataset D̂ = {(xi, ŷi)|i = 1, · · · , N} which is significantly reliable as long as the annotators do not make mistakes. Even in human-annotated data, there can be plural candidates for the correct sentence ŷi, but, the dataset D̂ is still reliable (Bryant et al., 2019).",
|
| 10 |
+
"3.2 Proposed method for rebuilding synthetic data": "We address synthetic data for GEC models and propose a modification where hypothetical target sentences are not original unedited sentences yi, but sentences predicted from corrupted ones by a conventional GEC model. In other words, we rebuild the synthetic data D̃ = {(xi, ỹi)|i = 1, · · · , N} from D = {(xi,yi)|i = 1, · · · , N} which are usually used in pre-training of GEC models. This idea is similar to Rothe et al. (2021).\nWe employ a conventional GEC model g(xi) to generate hypothetical target sentences ỹi. One would expect that the predicted sentences ỹi from corrupted sentences xi by a GEC model would match the corrected sentences ŷi: ỹi ≃ ŷi for xi,\nand build an appropriate synthetic dataset : D̃ ≃ D̂. The conventional GEC model we employ in this paper is GECToR (Omelianchuk et al., 2020), where the number of labels is 5, 004. GECToR has achieved state-of-the-art results on GEC, however, the version of the model we employ achieves F0.5 scores of 64.0 and 71.8 on the CoNLL 2014 and BEA 2019 test datasets, respectively. As the GEC systems, of course, are still under development by researchers, we have to compromise on the quality of synthetic data rebuilt by our proposed method. Table 1 also shows examples of hypothetical target sentences ỹi, which contain grammatical errors, generated by the GEC model.",
|
| 11 |
+
"3.3 Synthetic data rebuilt by the GEC model": "To predict ỹi from xi we employ a newer version of the trained GECToR model3 which has a RoBERTa encoder based on the results of Omelianchuk et al. (2020) and the inference hyperparameters, confidence bias and minimum probability threshold, are set to 0.2 and 0.5, respectively. As synthetic data to be examined, we use the above-mentioned PIE9M and C4-200M in the experiments; the former is widely used for pre-training GEC models and the latter is generated by attempting to match the error type frequency distribution to the development dataset. Note that the C4-200M dataset is downsized to 9M sentences to match the size of the PIE-9M in the experiments.\nTable 2 shows the fundamental statistics of the synthetic datasets rebuilt by the proposed method, compared to the original ones. The average numbers of tokens per sentence in the rebuilt datasets D̃s are not significantly different from those of the original datasets Ds. To compare statistical relationships between sentences xi and yi, we generate m2 formatted information using the ERRor ANnotation Toolkit (ERRANT)4(Bryant et al., 2017) and calculate the average number of edits per sentence. Applying the proposed method to the PIE-9M and C4-200M train datasets, the procedure reduces the average number of edits (corruptions) per sentence, resulting in about 0.8 and 2.8 fewer than the original datasets, respectively. We also indicate the dataset Ď, which has a comparable average number of edits with the dataset D̃. The erroneous sentences x̌i are generated from the corrupted sentences xi in the PIE-9M dataset by recovering edits\n3https://github.com/grammarly/gector/ 4https://github.com/chrisjbryant/errant/\npartly to adjust its average of edits to that of the dataset D̃. The dataset Ď is used in the experiments in the next section to prove that the effectiveness of our method does not depend on the number of edits per sentence empirically.\nStahlberg and Kumar (2021) have tried to match their synthetic data characteristics to L2 learners’ error characteristics with respect to the frequency of occurrence of the error types for the reason that the trained model is mainly expected to correct L2 learners’ sentences. We further examine whether our method can regulate the frequency of occurrence with respect to grammatical error types in the synthetic datasets to match the L2 learners’. Fig.2 shows the frequency distribution of occurrence with respect to grammatical error types in our rebuilt synthetic datasets D̃s, comparing the original synthetic datasets Ds, PIE-9M and C4-200M, and L2 learners’ corpus, the Cambridge English Write & Improve (W&I+LOCNESS) v2.15(Bryant et al., 2019; Granger, 1998). The proposed method changes the frequency of error occurrence, and we expect that the frequency distribution of D could approach that of the L2 learners’ corpus by the proposed method. Note that the L2 learners’ corpus for comparison is employed in stage III training of GEC models, which is the final fine-tuning stage in the experiments, and the corpus for the final stage of training is of utmost importance.\nTo investigate the similarity between two frequency distributions, we calculate KullbackLeibler (KL) divergence, which is a measure of how different two probability distributions are from each other, defined as\nDKL(P ||Q) = ∑\nx∈χ P (x) log\n( P (x)\nQ(x)\n) , (1)\nwhere P and Q are discrete probability distributions and χ is the sample space. 
We consider the frequency distributions as the probability distributions, and the sample space χ is 24 error types defined by ERRANT. Table 2 also shows the average level of information, i.e., entropy. The entropy measures uncertainty of the types of grammatical errors that will occur in a sentence.\nComparing each entropy value of the proposed synthetic datasets D̃s with that of their original ones Ds, the proposed method approaches the entropy of the PIE-9M synthetic data and that of the W&I LOCNESS dataset DWI , while there is no\n5https://www.cl.cam.ac.uk/research/nl/bea2019st/\nsignificant difference from the C4-200M dataset. In the PIE-9M synthetic dataset, the proposed method also approaches the frequency distribution of the types of grammatical errors to that of DWI . Regarding the C4-200M dataset, on the other hand, the proposed method moves the frequency distribution away from that of DWI , however, the two datasets rebuilt by the proposed method, D̃s, have almost the same value of KL divergence from DWI . The table also refers to the values of KL divergence from the CoNLL2014 dataset for evaluating the GEC models. Note that the CoNLL2014 dataset is small-sized and consists of 1,312 sentences.",
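A small sketch of Eq. (1) over ERRANT error-type counts; the dict-based representation and the skipping of zero-count types (rather than smoothing them) are simplifying assumptions of this sketch.

import math

def kl_divergence(p_counts, q_counts):
    # p_counts, q_counts: error-type -> frequency count (e.g., over the 24 ERRANT types)
    types = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values())
    q_total = sum(q_counts.values())
    kl = 0.0
    for t in types:
        p = p_counts.get(t, 0) / p_total
        q = q_counts.get(t, 0) / q_total
        if p > 0 and q > 0:  # zero-probability types are skipped in this sketch
            kl += p * math.log(p / q)
    return kl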
|
| 12 |
+
"4 Experiments": "To empirically investigate the effectiveness of the proposed method and the capabilities of a GEC model trained on synthetic data rebuilt by the method, we train the GEC model choosing the hyperparameters described below. The GEC model is fundamentally trained through the three stage pipeline adopted in Choe et al. (2019), Omelianchuk et al. (2020), Stahlberg and Kumar (2021), etc.: stage I is a pre-training stage on a synthetic dataset, stage II is a training stage on a human-annotated dataset and stage III is a finetuning stage on a smaller human-annotated dataset more consistent with the target domain of GEC.",
|
| 13 |
+
"4.1 Training model and datasets": "In the experiments, we employ RoBERTa (Liu et al., 2019)(roberta-base6) and train the model on the datasets indicated below. Hyperparameters in the training stage are set to the same values as on the website7 (Omelianchuk et al., 2020), and choosing a set of labels to be predicted by the model is done in the same manner as described there. We also employ three different PIE-9M and three different C-200M datasets.\nStage I (Pre-training) Either the PIE-9M or the C-200M is used in stage I as a conventional method. Each dataset consists of 9M sentence pairs, which we randomly split into two sets: 95% train and 5% dev datasets. The data splitting creates 8.42M sentence-pair synthetic parallel datasets Ds. We apply the proposed method to the above datasets Ds to create the proposed synthetic parallel datasets D̃s. We also create the dataset Ď which has a similar average number of edits per sentence by recovering some edits randomly and partially to adjust to the statistics of the proposed datasets. The statistical information for all the synthetic parallel datasets is shown in Table 2. Note that all text in the C-200M dataset is tokenized using spaCy and the en_core_web_sm model8.\n6https://huggingface.co/models/ 7https://github.com/grammarly/gector/ 8https://spacy.io/\nStage II (Training) We employ L2 learners’ human-annotated corpora used in the BEA2019 shared task. The corpora consist of W&I+LOCNESS v2.1, the First Certificate in English (FCE) v.2.1 (Yannakoudakis et al., 2011), the National University of Singapore Corpus of Learner English (NUCLE) (Dahlmeier et al., 2013) and the Lang-8 Corpus of Learner English (Lang-8) (Mizumoto et al., 2011; Tajiri et al., 2012) shown in Table 3. We split the corpora into 98% train and 2% dev datasets because they are small-sized and train data of a larger size is preferable. Table 3 shows the characteristics of each corpus and the overall corpus for stages II and III.\nStage III (Fine-tuning) We choose W&I+LOCNESS, one of the corpora in stage II, as an L2 learners’ corpus consistent with the target domain of GEC. This selection is based on Choe et al. (2019) for the restricted track and Omelianchuk et al. (2020). In addition to the L2 learners’ corpus, the synthetic dataset rebuilt by the proposed method is downsized to 34K sentence pairs for fine-tuning of the models pre-trained on the same synthetic data. The sentence pairs of the downsized synthetic dataset are chosen randomly from the 9M sentence pairs.",
|
| 14 |
+
"4.2 Results": "We trained the GEC models on either of the original PIE-9M, C4-200M or our rebuilt synthetic datasets in stage I followed by training in a combination of stages II and III. Both stages II and III use the\nL2 learners’ corpora or our rebuilt 34K synthetic datasets described in Sec. 4.1. To evaluate the performance of the trained models, we let each model correct grammatical errors in the sentences of the CoNLL2014 and BEA2019 test datasets. Note that we set the confidence bias and the minimum probability threshold to zeros for inference after stages I and II as on the website. We evaluated the performance of the models for the CoNLL2014 and BEA2019 test datasets using M2scorer9 and by submitting the corrected sentences to the server\n9https://github.com/nusnlp/m2scorer/\nreferred to by the BEA2019 shared task website10, respectively.\nTable 4 shows comparisons of GEC performance with metrics, precision (P), recall (R) and F0.5 scores for the test datasets, indicating train datasets each model used in stages I, II and III. The results for the PIE-9M synthetic dataset are summarized as follows. The baselines are the underlined results of the model trained on the conventional datasets, that is, Original+BEA2019 through stages I, II and III, resulting in F0.5 = 62.9 and F0.5 = 70.5 for the CoNLL2014 and BEA2019 test datasets, respectively. While the pre-trained Original performs F0.5 = 51.2 and F0.5 = 51.1, the pre-trained Proposed performs F0.5 = 61.2 and F0.5 = 66.7, respectively. For the partial pipeline training of stages I and III, the Original+BEA2019 performs F0.5 = 62.4 and F0.5 = 70.3, and the Proposed+BEA2019 performs F0.5 = 62.8 and F0.5 = 70.1, respectively. Proposed+PIE-34K, which was pre-trained and fune-tuned only on the rebuilt PIE-9M and PIE-34K synthetic datasets, performs F0.5 = 62.9 and F0.5 = 71.5, respectively. Proposed+C4-34K was pre-trained and finetuned only on the synthetic datasets as well, however, the training employed two different synthetic datasets, PIE and C4. For the full pipeline training of stages I, II and III, the Original+BEA2019 performs F0.5 = 62.9 and F0.5 = 70.5, and the Proposed performs F0.5 = 63.6 and F0.5 = 70.6, respectively. The results regarding the C4-200M synthetic dataset are also shown in the same manner in the figure.",
|
| 15 |
+
"5 Discussion and related work": "This paper addresses the quality of synthetic parallel data due to the insufficiency of human-annotated L2 learners’ corpora and the effectiveness of training only on synthetic data. Note that the quality does not address grammatical correctness, but the validity of source-target sentence pairs for training and how well the data fits the characteristics of L2 learners’ mistakes. The overall results indicate that our method is more effective for the PIE-9M dataset than the C4-200M dataset, and it implies that the C4-200M dataset is of better quality.\nHere, we discuss the experiments on the PIE9M dataset, which more likely needs the technique. The stage-I training by the proposed method outperforms the conventional training by 10.0 and\n10https://www.cl.cam.ac.uk/research/nl/bea2019st/\n15.6 with regard to F0.5 for the CoNLL2014 and BEA2019 test datasets, respectively. It results in only 1.7 and 3.8 less than the baselines, which were trained through the full pipeline, stages I, II, and III.\nFurthermore, the stage-II training reduces the performance of the pre-trained models on the proposed method’s synthetic dataset. This suggests that the proposed method’s synthetic datasets could be of higher quality than the overall L2 learners’ corpora while each synthetic dataset itself could be inferior to the L2 learners’ corpora.\nUnfortunately, the baseline of the training replaced with the rebuilt synthetic dataset does not improve its performance. Our synthetic datasets can be employed all through the pipeline of training, that is, training without L2 learners’ corpora. The results show that GEC model training without L2 learners’ corpora is as practical as conventional training with both L2 learners’ corpora and synthetic datasets in terms of accuracy. Note that the version of the model employed to rebuild synthetic data in the experiments achieves the scores of 64.0 and 71.8 on F0.5 for the CoNLL 2014 and BEA 2019 test datasets, respectively.\nTo summarize the achievements, the proposed method :\n1. outperforms pre-training on the original synthetic datasets.\n2. provides notably good training performance without human-annotated L2 learners’ corpora.\nTrained GEC models can be used not only for predicting correct sentences but also for generating better synthetic data, and systems incorporating the proposed method are not limited to the synthetic data and model used in this paper.\nAddressing training data for GEC models, Grundkiewicz and Junczys-Dowmunt (2014) introduce the WikEd Error Corpus generated from Wikipedia revision histories, corpus content and format. The corpus consists of more than 12 million sentences with a total of 14 million edits of various types. Kiyono et al. (2019), Grundkiewicz et al. (2019) and Choe et al. (2019) employ synthetically generated pseudo data for pre-training of GEC systems prior to fine-tuning on human-annotated corpora for the BEA2019 shared task(Bryant et al., 2019).\nMita et al. (2020) focus on human annotators’ errors in official datasets when they rewrite incorrect sentences to remove grammatical mistakes and denoise the target sentences of the official datasets using some trained GEC models with a perplexity criterion. Rothe et al. 
(2021) also apply the similar technique to the LANG-8 corpus, which is a large corpus of texts written by L2 learners with userannotated corrections, and correct human errors by the GEC models.\nOur proposed method is effective not only for correcting human annotators’ errors, but also for adjusting source-target disparity to match the domain. Stahlberg and Kumar (2021) build a large synthetic pre-training dataset with error tag frequency distributions matching Seq2Edits (Stahlberg and Kumar, 2020). Parnow et al. (2021) trained a generator to generate increasingly realistic errors (in the form of token-based edit labels) and a discrimina-\ntor to differentiate between artificially-generated edits and human-annotated edits. Stahlberg and Kumar (2021) propose tagged corruption models using both the Seq2Edits and a finite state transducer to match the observed error type distribution of the BEA2019 dev dataset, and generate synthetic data for pre-training GEC models. Yasunaga et al. (2021) apply BIFI algorithm (Yasunaga and Liang, 2021) and LM-Critic to synthetic data to generate better datasets for GEC. LM-Critic chooses the most likely grammatical sentence from multiple sentence candidates based on the sentence occurrence probabilities generated by a language model.",
|
| 16 |
+
"6 Conclusion": "In this paper, we have addressed the effectiveness of synthetic parallel data and have proposed a method for rebuilding a corpus of synthetic parallel data using target sentences predicted by a GEC\nmodel. While the original target sentences in synthetic parallel data are guaranteed to be error-free, the target sentences predicted by a GEC model contain grammatical errors because the GEC model has been developed through research and is not perfect in its performance. However, pre-training on our proposed synthetic data outperforms that on the original synthetic data, and our pre-trained GEC model showed performance only slightly lower than the conventional fine-tuned GEC model. In addition, our proposed method can provide notably good training performance without humanannotated L2 learners’ corpora.\nThe proposed method’s target sentences by an imperfect GEC model work better than the original error-free target sentences although the former may contain grammatical errors. The reason why this paradoxical result happens needs to be determined. In future work, we plan to investigate further reconfiguration and modification of synthetic parallel data, and fine-tune training using such data to improve the performance of GEC. Investigation of the source-target relationships on training data mentioned above should also be carried out to clarify the effectiveness of the proposed method.",
|
| 17 |
+
"Acknowledgements": "We gratefully thank Martin Chodorow at CUNY Hunter College for his valuable suggestions and feedback. Furthermore, we would like to thank the reviewers for their insightful comments. This work was supported by JSPS KAKENHI (Grant Numbers JP18K00904 and JP21K00806)."
|
| 18 |
+
}
|
ACL_23_no_limitation/ACL23_1245.json
ADDED
|
@@ -0,0 +1,24 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1245",
|
| 3 |
+
"Title": "“Geen makkie”: Interpretable Classification and Simplification of Dutch Text Complexity",
|
| 4 |
+
"abstractText": "An inclusive society needs to facilitate access to information for all of its members, including citizens with low literacy and with non-native language skills. We present an approach to assess Dutch text complexity on the sentence level and conduct an interpretability analysis to explore the link between neural models and linguistic complexity features.1 Building on these findings, we develop the first contextual lexical simplification model for Dutch and publish a pilot dataset for evaluation. We go beyond previous work which primarily targeted lexical substitution and propose strategies for adjusting the model’s linguistic register to generate simpler candidates. Our results indicate that continual pre-training and multi-task learning with conceptually related tasks are promising directions for ensuring the simplicity of the generated substitutions. Our code repository and the simplification dataset are available on GitHub.2",
|
| 5 |
+
"1 Introduction": "Reading is a foundational skill for acquiring new information. Many sources of information are only available in written form, including educational material, newspaper articles, and letters from municipalities. Although many people learn how to read as a child, not everyone becomes equally skilled at it. In the Netherlands alone, more than 2.5 out of 14 million people over 16 years old are low-literate, meaning that they experience challenges with reading or writing.3 As a result, they face obstacles in achieving academic success, seeking employment\n*Equal contribution. +The experiments were conducted when all authors were affiliated with Vrije Universiteit Amsterdam. 1The colloquial Dutch expression \"Geen makkie\" in the title can be translated as \"not easy\" or \"not a walk in the park\". 2https://github.com/clap-lab/makkie/ 3https://www.lezenenschrijven.nl/ reading-and-writing-foundation\nopportunities, and keeping up-to-date with current events.\nOne way to address this problem is to reduce text complexity. Texts that contain many infrequent words and complex sentence structures are difficult to read, especially for readers with low literacy and language learners. Automated natural language processing tools for text complexity assessment can help both in assisting editors in the selection of adequate texts and by signaling potential comprehension problems to copywriters. By estimating text complexity, we can select texts that are sufficiently easy for a particular target audience or simplify texts that are too difficult.\nRecent neural models for text complexity assessment have obtained good results in classifying texts into discrete categories of complexity (Deutsch et al., 2020; Martinc et al., 2021). The global classification label can be a first indicator but it does not point to specific parts of the input that are complex, leaving it to the human editor to identify the necessary simplifications. In this work, we first explore Dutch complexity prediction on the sentence level (as opposed to full-text classification in previous work) and then zoom in even further.\nThe complexity of a text is affected by an interplay of various factors, including its structural characteristics, domain, and layout. A crucial component is the choice of the lexical units and their complexity. A system for lexical simplification can support humans in detecting lexical complexity and suggest simpler alternatives. In the sentence children bear the future, and our resolution to support them determines the world they inherit, a lexical simplification model could propose to substitute bear with simpler words such as carry, hold, or shape. These suggestions can assist human writers in revising and simplifying their text.\nPrevious approaches to Dutch lexical simplification generated substitution candidates by naively substituting words according to a static alignment\n503\nof synonyms without considering the context of the sentence. This approach does not account for ambiguous words and synonyms that only maintain semantic coherence in a subset of contexts. In the example above, resolution can be interpreted as intention, but in the context of TV screens, it refers to sharpness. 
In order to ensure meaning preservation, lexical simplification needs to be context-sensitive.\nContributions We fine-tune BERTje (de Vries et al., 2019), a Dutch pre-trained transformer model, to predict sentence-level complexity and use interpretability methods to show that it captures relevant linguistic cues. We visualize the local attribution values of the model’s predictions in a demo to point end users to complex parts of the sentence. In order to facilitate the simplification process, we introduce LSBertje, the first contextual model for lexical simplification in Dutch. We explore three approaches to adapt the linguistic register of the model, to re-enforce a preference for simplicity in the generated substitutions.",
|
| 6 |
+
"2 Related Work": "We discuss complexity assessment and lexical simplification as separate consecutive stages in line with related work.",
|
| 7 |
+
"2.1 Complexity Assessment": "Text complexity is affected by the words we choose and the way we combine them into meaning. The complexity of individual words is determined by features such as length, frequency, morphological complexity, abstractness, and age of acquisition. At the sentence level, syntactic features such as parse tree depth, syntactic ambiguity, and the number of subordinate clauses affect complexity. Features that indicate lexical variety, such as the type-token ratio, can also serve as a proxy for complexity (Schwarm and Ostendorf, 2005; Feng et al., 2009; Vajjala and Meurers, 2012).\nTraditional surface-based metrics such as the Flesch-Kincaid score are widely used to automatically assess text complexity, but they only consider length characteristics and do not take into account the various intricate factors that influence text complexity. In contrast, featurebased machine learning models leverage numerous features to predict complexity labels, surpassing the capabilities of surface-based metrics (CollinsThompson and Callan, 2005). Nevertheless, handengineering effective features is an expensive and\ntime-consuming process (Filighera et al., 2019). Neural models for classifying complexity do not rely on hand-engineered features and show marginal improvements over feature-based models (Deutsch et al., 2020; Martinc et al., 2021), but they lack interpretability. In this study, we analyze if neural models leverage relevant linguistic cues when predicting binary complexity labels for Dutch sentences and can therefore reliably detect sentences that qualify for a simplification procedure.",
|
| 8 |
+
"2.2 Lexical Simplification": "Lexical simplification characterizes a substitution operation on the lexical level with the goal of reducing the complexity of a sentence and making the text accessible to a wider audience. Lexical simplification of a sentence is typically performed as a pipeline of four consecutive stages: complex word identification, substitution generation, substitution selection and substitution ranking (Sikka and Mago, 2020; Thomas and Anderson, 2012; Paetzold and Specia, 2017b). In this work, we focus on the first two stages.\nComplex Word Identification In the initial stage, words with simplification potential need to be identified. Traditional approaches for this subtask use curated lists of complex words (Lee and Yeung, 2018) or word frequency resources to flag words below a certain frequency threshold as complex (Sikka and Mago, 2020). In the most recent shared task for complex word identification (Yimam et al., 2018), feature-based machine learning techniques using length and frequency features obtained the best results. More recent approaches express lexical complexity on a continuous scale (Shardlow et al., 2021) as a binary classification is too simplistic for most educational scenarios. We explore the applicability of gradient-based interpretability techniques for complex word identification (Danilevsky et al., 2020; Sundararajan et al., 2017).\nSubstitution Generation The generation of substitution candidates has traditionally been performed with lexical resources such as WordNet (Miller, 1995; Carroll et al., 1998). In a more datadriven approach, simple-complex word pairs have been extracted from a parallel corpus that aligns sentences in Wikipedia with their counterparts in Simple Wikipedia (Kauchak, 2013; Paetzold and Specia, 2017a). These static approaches are unable\nto generate substitution candidates for words that do not occur in the resources or that are spelled differently. In addition, they are prone to generate semantically incoherent candidates since the substitutions are not context-sensitive.\nContext-Aware Substitution Generation For meaning-preserving simplification, it is important to consider the context of the complex word. Paetzold and Specia (2016b) propose to use the part of speech of a word to narrow down its meaning. Their approach relies on proximity in a static embedding space to find simplifications, which are then disambiguated with respect to their part of speech. As a result, the relatively simple noun bear is represented by a different vector than the rather complex verb bear. This syntactically informed approach leads to improvements over noncontextualized models, but it still falls short in capturing more fine-grained differences in meaning; even the verb bear can be used in a semantic spectrum ranging from bearing/delivering a child to bearing/having a resemblance.\nTo capture such subtle distinctions, recent approaches use contextualized language models such as BERT (Devlin et al., 2019) to generate substitutions tailored to the specific context. Alarcón et al. (2021) search the contextual embedding space of a complex word to find context-aware simplification candidates. They find antonyms of the complex word among the generated candidates, which is detrimental to the goal of preserving the meaning of the complex sentence. Qiang et al. (2020) introduce LSBert, which uses a prompting strategy based on BERT’s masked language modeling objective to generate context-aware lexical simplification candidates for English sentences. 
They generate simplifications by masking the complex word. In order to enforce semantic coherence of the masked word, Qiang et al. (2020) feed the input sentences as a duplicated pair and apply the masking operation only on the second sentence. In the recent shared task on multi-lingual lexical simplification (Saggion et al., 2022), approaches that use pre-trained language models produced very competitive results. In all three languages covered in the shared task, English, Spanish, and Portuguese, state-of-the-art results were obtained. In this work, we evaluate the LSBert lexical simplification approach and adapt it to Dutch.",
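The duplicated-sentence masking strategy described above can be sketched as follows with HuggingFace transformers; the BERTje checkpoint name and the top-k cutoff are assumptions of this sketch, and LSBertje additionally filters and ranks these raw candidates.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")  # BERTje (assumed checkpoint)
model = AutoModelForMaskedLM.from_pretrained("GroNLP/bert-base-dutch-cased")

def generate_candidates(sentence, complex_word, k=5):
    # Feed the sentence twice: the first copy supplies the semantic context,
    # and the complex word is masked only in the second copy.
    masked = sentence.replace(complex_word, tokenizer.mask_token, 1)
    inputs = tokenizer(sentence, masked, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    top_ids = logits[0, mask_pos].topk(k).indices
    return [tokenizer.decode([int(i)]).strip() for i in top_ids]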
|
| 9 |
+
"2.3 Complexity Assessment and Simplification for Dutch": "Work on complexity and simplification for Dutch is sparse. Vandeghinste and Bulte (2019) analyze complexity classification at the document level using feature-based classifiers, but there is currently no known work on neural sentence-level complexity classification for Dutch. Regarding lexical simplification, Bulté et al. (2018) develop a pipeline using various resources. However, systematically evaluating the pipeline is challenging as there is no existing benchmark dataset for lexical simplification in Dutch.",
|
| 10 |
+
"3 Complexity Classification": "We train a neural classifier for determining binary labels of Dutch sentence complexity and compare its performance to several feature-based classifiers. We then analyze if the neural model captures relevant complexity cues.",
|
| 11 |
+
"3.1 Experimental Setup": "Data We contrast articles from the Dutch newspapers De Standaard and Wablieft in line with Vandeghinste and Bulte (2019). The two newspapers cover similar topics and events. As Wablieft targets an audience that prefers simpler language, the articles are significantly shorter (on average, there are 164 words in Wablieft articles vs 383 words in De Standaard articles). The source of an article (Wablieft vs De Standaard) can therefore be easily determined by its length.4 However, identifying the source is just a proxy for identifying the linguistic characteristics that determine complexity. To go beyond this superficial approach, we instead train our models to predict the complexity of individual sentences.\nThe corpus contains 12,683 articles from Wablieft and 31,140 articles from De Standaard.5 We create a balanced dataset by randomly selecting 12,000 articles from each newspaper and preprocessing them using the same steps as Vandeghinste and Bulte (2019). We split the articles into individual sentences and only keep the first sentence of each article to keep the dataset balanced. We label all sentences from Wablieft articles as easy\n4Our BERTje model could distinguish the two types of articles with 99% accuracy when fine-tuned to predict complexity labels for the entire articles.\n5The data does not include any meta information such as author names and time stamps of publication, which could reveal the source of the article.\nand all sentences from De Standaard as complex. We use 80% of the data for training, 10% for validation, and 10% for testing. The validation set was used for checking model accuracy at each epoch. Statistics regarding the length and frequency of the words in both types of sentences are shown in Table 1.\nModels We fine-tune a pre-trained transformer model for Dutch sequence classification (BERTje, de Vries et al. (2019)) available from Huggingface and add a linear output layer with ReLU activation and dropout (0.5). The model is optimized using ADAM with a learning rate of 1e-6 and crossentropy loss.\nWe use Support Vector Machines (SVM) as our feature-based classification models. We employ the scikit-learn implementation with all default parameters (Pedregosa et al., 2011).\nComplexity Features Our complexity features can be grouped into three categories: length characteristics, frequency effects, and morpho-syntactic properties. Word frequencies are obtained as standardized Zipf frequencies using the Python package wordfreq (Speer et al., 2018). The package combines several frequency resources, including SUBTLEX lists, e.g. Brysbaert and New (2009), and OpenSubtitles (Lison and Tiedemann, 2016). The morpho-syntactic features are computed using the Profiling-UD tool (Brunato et al., 2020). We calculate all features on the sentence level and train our feature-based models on different combinations of these features. An overview of the features is given in Table 3.",
|
| 12 |
+
"3.2 Results": "Table 2 shows the prediction accuracy of the finetuned BERTje model and several feature-based SVM classifiers for sentence-level complexity classification. We see that the neural model outperforms all feature-based models by 10 percent or\nmore. For the feature-based classifiers, the best results can be obtained by all types of features (frequency + length + morpho-syntactic), but the morpho-syntactic features only improve the frequency and length-based classifiers with 1 percent accuracy. This might be caused by the fact that the morpho-syntactic features are correlated with length (e.g., parse tree depth naturally increases as the sentence length increases). We conclude that frequency and length are the most predictive features for Dutch sentence-level complexity classification, which is in line with previous work for English (Vajjala Balakrishna, 2015).\nPrediction Confidence To gain more insight in the linguistic cues that the neural model relies on, we analyze model confidence with respect to the complexity features that our feature-based models were trained on. Table 3 shows the Spearman correlation between complexity features and model confidence for the complex class. We see that the model allocates higher probability values to the complex class when word length, sentence length, dependency link length, or the number of lowfrequency words increases. As the classification is binary, the inverse relationship can be observed for the easy class.\nSince the correlation values in Table 3 are relatively low, we analyze the corresponding scatter plots. Figure 1 depicts the correlation between model confidence for the complex class and the maximum dependency link of the input sentences. We see that low to medium values for the maximum dependency link length do not clearly affect model confidence, but that high dependency link values always lead to high confidence. We observe the same pattern for the other complexity features. This suggests that the model considers relevant complexity features when making its predictions, but that the evidence needs to be strong enough\n(i.e., the sentence should be sufficiently complex).",
|
| 13 |
+
"3.3 Complex Word Identification": "Our results indicate that the fine-tuned BERTje model is a reliable tool for sentence-level complexity classification. It can show an editor which sentences qualify for simplification. Nevertheless, binary complexity classification is an overly simplified operationalization that lacks educational usability. We go one step further and combine the model with feature attribution methods and analyze its utility for the first component of the lexical simplification pipeline: complex word identification.\nWe implement a demo that explains the predictions of our neural complexity classifier. Users can type Dutch input sentences, which are classified as either easy or complex. Words that contributed positively or negatively to the model’s prediction are highlighted, as shown in Figure 2. We use Captum (Kokhlikyan et al., 2020) for extracting token-level attributions. Additionally, the sentence-level com-\nplexity features from Table 3 are calculated and shown to the user, which give a more fine-grained perspective on the complexity of the input sentence (see Appendix Figure 4).\nAttribution Methods Selecting the right attribution method is not straightforward. Different attribution methods produce varying, sometimes even contrasting explanations for model predictions (Bastings et al., 2022). Atanasova et al. (2020) find that gradient-based techniques produce the best explanations across different model architectures and text classification tasks. We therefore include three gradient-based attribution methods in our demo: Gradient, InputXGradient, and Integrated Gradients. The vanilla Gradient method estimates feature importance by calculating the gradient (i.e. the rate of change) of a model’s output with respect to a given input feature (Danilevsky et al., 2020). InputXGradient additionally multiplies the gradients with the input, and Integrated Gradients integrates the gradient of the model’s output with respect to the input features along a chosen path between a feature x and a baseline x’ (Sundararajan et al., 2017). We use the [PAD] token as our baseline.\nLinguistic Plausibility of Attributions Explanations of the complexity predictions are most useful for end-users of the demo (e.g. teachers) if the attribution scores are linguistically plausible. This means that the scores should match our expectations of what makes a sentence complex or easy to understand. Given the intended use of the demo for complex word identification, we analyze the linguistic plausibility of the attributions with respect to lexical complexity. We expect short and frequent words to receive high attributions when the model predicts that a sentence is easy to understand, while longer and less frequent words should receive high attributions when the model predicts that the sentence is complex.\nTo better understand the differences between our selected attribution methods and to analyze the linguistic plausibility of the observed patterns, we calculate the Spearman correlation between lexical complexity features and attribution scores. Since our model uses subword tokenization, both attribution scores and complexity features are calculated on the subword level. We exclude the special tokens [CLS] and [SEP] from our analyses.\nTable 4 shows that Integrated Gradients is the only method for which the correlations have the ex-\npected directionality, i.e. 
when the model predicts the easy class, high attributions are assigned to short/frequent words, and when the model predicts the complex class, high attributions are assigned to long/infrequent words. For InputXGradient, we see the opposite pattern, and for Gradient, the directionality of the correlations is the same for both the easy and complex class. The inconsistency of the three attribution methods is surprising but in line with previous findings (Bastings et al., 2022). More user-centered analyses are required to identify their practical benefits.\nTo further explore the linguistic plausibility of the attribution scores, we calculate average attribution scores with respect to part-of-speech tags. We again find that the most plausible attributions are generated by the Integrated Gradients approach. In Figure 3, we see that nouns, adverbs, and adjectives are assigned relatively high importance scores when the model predicts the easy class. Prepositions, conjunctions, and complementizers receive higher importance when the model predicts the complex class. This is plausible since function words often signal a complex sentence structure, while easier sentences typically contain more content words. Additionally, we observe that subwords, which indicate the presence of compound words,\nreceive higher scores when the model predicts the complex class. This is helpful for lexical simplification, as compound words are often challenging to read. Finally, we observe that determiners receive high scores when the model predicts the easy class, which aligns with lexical complexity since determiners are short and frequent.",
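The token-level attribution extraction with Captum can be sketched as follows, reusing the model and tokenizer from the classifier sketch above; attaching LayerIntegratedGradients to the embedding layer and using an all-[PAD] baseline are simplifying assumptions (in practice one would typically keep [CLS] and [SEP] in the baseline).

import torch
from captum.attr import LayerIntegratedGradients

def forward_fn(input_ids, attention_mask):
    return model(input_ids, attention_mask)  # logits over {easy, complex}

lig = LayerIntegratedGradients(forward_fn, model.encoder.embeddings)

def token_attributions(sentence, target_class):
    enc = tokenizer(sentence, return_tensors="pt")
    baseline = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)
    attrs = lig.attribute(enc["input_ids"],
                          baselines=baseline,
                          additional_forward_args=(enc["attention_mask"],),
                          target=target_class)
    return attrs.sum(dim=-1).squeeze(0)  # one attribution score per (sub)token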
|
| 14 |
+
"4 Context-Aware Simplification": "In the second step of the simplification pipeline, we generate context-aware simplifications for Dutch.\nLSBertje We present LSBertje, the first model for contextualized lexical simplification in Dutch. We base LSBertje on LSBert (Qiang et al., 2019, 2020) by altering its language-specific components to Dutch. We replace the language model that generates simplifications with the Dutch BERT model, BERTje. We also replace the stemmer used in filtering with the snowball stemmer.6\n6nltk.org/api/nltk.stem.snowball.html",
"4.1 Dutch Evaluation Data": "Dutch evaluation data for lexical simplification does not yet exist. To evaluate our approach, we develop a pilot benchmark dataset using authentic municipal data. We select sentences from a collection of 15,334 sentences from 48 municipal documents based on the presence of a complex word from a list curated by domain experts and based on their word count (less than 20 words). We exclude incomplete sentences such as headers, sentences without verbs, or with less than four words. From the remaining 6,084 sentences, we randomly sample 250 of complex words from the list and find a sentence for the dataset for 108 of the complex words. Eight sentences where simplification was not possible were removed because: 1) they were part of a named entity, 2) the sentence was incomplete or 3) a simple sense of the word was used. This resulted in 100 sentences.\nThe sentences were simplified by 23 native speakers of Dutch who pursued or obtained an academic degree. They were shown a sentence with the highlighted complex word and five simplification options that LSBertje generated. The annotators could select from these options and propose additional simplifications. For five sentences, no annotator could come up with a lexical simplification candidate. The remaining 95 sentences contained an average of 2.9 simplification candidates, with a maximum of 7.",
"4.2 Results and Analysis": "Table 5 shows that the LSBertje model yields good simplification performance for our dataset. The potential metric shows that the model was able to predict at least one correct simplification candidate in 85% of the sentences. It should be noted that the English benchmark datasets come with a greater variety. In our dataset, a sentence is annotated with 2.9 simplifications on average, whereas BenchLS lists 7.4 substitutions. These size differences can explain the slightly lower potential score and the higher recall for Dutch.\nTo evaluate the simplicity of the generated substitutions, we assess their frequency using the SUBTLEX-NL corpus (Keuleers et al., 2010) and find that 517 out of 650 generated words occur with higher frequency than the original word. This indicates that the generated simplifications are indeed simpler.",
"5 Register Adaptation Techniques": "LSBertje relies on a base model that was pretrained for masked language modeling and captures aspects of text complexity only as an incidental byproduct. It uses a masked language modeling mechanism that induces semantic preservation by repeating the input sentence. The goal of generating simpler substitutions is only implicitly targeted by restricting the generation to tokens consisting of a single subtoken. This effectively prevents the model from generating infrequent or morphologically more complex words, but the model is not explicitly optimized for capturing different levels of text complexity. We explore three strategies to adapt the linguistic register of the model so that it generates simpler substitutions: conceptual finetuning, continual pre-training, and multi-task learning.\nConceptual Fine-tuning We aim at adapting the linguistic register of the model by fine-tuning LSBert to predict the linguistic complexity of sentences before applying it for generating substitution candidates. The model is fed a pair of sentences and is trained to predict whether the first sentence is simpler or more complex than the second example. We use sentence pairs from the sentence-aligned simple-complex Wikipedia corpus (Kauchak, 2013). The sentences are balanced with respect to the simplification order condition, and we experiment with the number of sentences.7\nContinual Pre-Training For the second strategy, we adapt the linguistic register by exposing the model to simpler texts using continual pre-training. We continue the pre-training combination of masked language modeling and nextsentence prediction using only sentences from simple Wikipedia.8 We pair each sentence either with the directly following sentence or with a randomly selected sentence from another Wikipedia article.\nMulti-Task Learning We then combine the two ideas and train a model on two tasks simultaneously. We use the same training method but replace nextsentence prediction with complexity prediction.",
"5.1 Experimental Setup": "As the Dutch dataset is too small for representative evaluation, we first explore the register adaptation\n7cs.pomona.edu/ dkauchak/simplification/ 8github.com/LGDoor/Dump-of-Simple-English-Wiki\nstrategies using English evaluation data and the English LSBert model.\nEvaluation Data We evaluate the models on three commonly used benchmarking datasets. They consist of sentences from Wikipedia with the complex word highlighted and a list of humangenerated simplifications. LexMTurk (Horn et al., 2014), BenchLS (Paetzold and Specia, 2016a) and NNSEval (Paetzold and Specia, 2016b) contain respectively 500, 929, and 239 sentences.\nImplementation Details We base our implementation on the Huggingface documentation Bert.for_Pretraining and the same model as LSBert.9 10 For the masked language modeling components, we mask 15% of the tokens in the input sentences. Optimization is performed using an ADAM optimizer and a batch size of two. The continual pre-training is run for two epochs, the multi-task learning for four epochs. We varied the learning rate (5e-5, 5e-6, 5e-7) and the number of sentences (1000, 10.000, 50.000).",
"5.2 Results": "We find that the model adapted with conceptual fine-tuning lost its ability to perform masked language modeling. Its predictions for bear in children bear the future were: swallowed, if, knicks, cats, nichol. These predictions clearly indicate a case of catastrophic forgetting (Liu et al., 2020). In learning a new task, the model forgot its original capabilities.\nBoth continual pre-training and multi-task learning lead to improved performance on the simplification task in two and three configurations respectively. We find that the configuration of LR 5e-6 and 10.000 sentences is the best for both fine-tuning methods as shown in Table 5. See the Appendix for all scores.\nThe multi-task learning strategy seems to be the most promising approach. We test the robustness of our findings by training the model using 26 different random seeds. The model outperforms LSBert in 20 cases, see Table 8 of the Appendix for a detailed overview. Overall, we see an increase in precision, recall, and F1-score. While the model’s performance is highly sensitive to taskspecific components (the learning rate and the num-\n9bert-large-uncased-whole-word-masking 10https://huggingface.co/transformers/\nv3.0.2/model_doc/bert.html# bertforpretraining\nber of sentences), the performance remains robust for variation in the task-independent random seed. The results indicate that multi-task learning is a promising strategy for adapting the model’s linguistic register.",
"5.3 Analysis": "We analyze the effect of the register adaptation techniques by comparing the frequency of the generated substitutions using the same resources as Qiang et al. (2019) that contains word frequency counts for Wikipedia articles and a children’s book corpus. We see that the fine-tuned model generates simplifications that occur more frequently compared to the substitutions generated by LSBert (13,030 vs 20,000 occurrences on average). When we zoom in on the generations, we find that the fine-tuned model correctly generates 356 words that were not captured by LSBert and that these words have a high average frequency of 27,000. These findings indicate that the fine-tuning process indeed leads to the generation of simpler words.",
"5.4 Register Adaptation Results for Dutch": "Due to the absence of a sentence-aligned simplification corpus for Dutch, we only test the continual pre-training strategy on the Dutch data. The results show that the improvements obtained for English cannot yet be observed for Dutch. In the future, we plan to extend our experiments to a larger dataset and to the multi-task learning strategy.",
"6 Conclusion": "In this work, we have introduced two state-of-theart components for complexity prediction and simplification in Dutch. It can support teachers and text editors in making texts more accessible for people who face reading challenges.\nWe developed a demo that predicts binary complexity labels for Dutch sentences and highlights words that contributed positively or negatively to the prediction. Additionally, the demo interface provides scales for different aspects of sentencelevel complexity to enable a more fine-grained interpretation by the user.\nWe introduced LSBertje, which is the first model for contextualized lexical simplification in Dutch (to the best of our knowledge). We show that the model can generate adequate simplifications without additional fine-tuning. This base setup can serve as a reasonable starting scenario for context-\naware simplification generation for resource-poor languages. We developed a pilot evaluation dataset for Dutch that allowed us to perform initial comparisons. For a more elaborate analysis, a larger Dutch dataset needs to be curated in future work.\nWe explored strategies to adapt the linguistic register of the model to ensure the simplicity of the generated substitutions and find that both multi-task learning and continual pre-training show considerable potential. We further analyzed the model’s robustness and discovered a strong sensitivity to task-specific hyperparameters but little variation across random seeds.",
"Acknowledgements": "Eliza Hobo’s simplification experiments were initiated during an internship at the Gemeente of Amsterdam. Iva Gornishka has been a valuable source of insight and support in this process. Charlotte Pouw’s experiments on readability were initiated in a joint project with Florian Kunneman and Bruna Guedes supported by the Network Institute (VU Amsterdam) through the Academy Assistants Program. Lisa Beinborn’s work was supported by the Dutch National Science Organisation (NWO) through the projects CLARIAHPLUS (CP-W6-19005) and VENI (Vl.Veni.211C.039)."
}
ACL_23_no_limitation/ACL23_1248.json
ADDED
@@ -0,0 +1,26 @@
{
"File Number": "1248",
"Title": "Auto-req: Automatic detection of pre-requisite dependencies between academic videos",
"abstractText": "Online learning platforms offer a wealth of educational material, but as the amount of content on these platforms grows, students may struggle to determine the most efficient order in which to cover the material to achieve a particular learning objective. In this paper, we propose a feature-based method for identifying pre-requisite dependencies between academic videos. Our approach involves using a transcript engine with a language model to transcribe domain-specific terms and then extracting novel similarity-based features to determine pre-requisite dependencies between video transcripts. This approach succeeds due to the development of a novel corpus of K-12 academic text, which was created using a proposed feature-based document parser. We evaluate our method on hand-annotated datasets for transcript extraction, video pre-requisites determination, and textbook parsing, which we have released. Our method for pre-requisite edge determination shows significant improvement (+4.7%-10.24% F1-score) compared to existing methods.",
"1 Introduction": "In many online learning platforms, academic videos that cover specific concepts are included in the curriculum. These videos cover certain \"academic concepts,\" which are key ideas that are conveyed in the learning material. These fine-grained concepts aid students in understanding the learning content more effectively and achieving their core learning objectives. The prerequisite dependencies between these concepts, which pertain to the order in which they should be covered, are crucial for both educators and learners. They assist educators in curriculum planning and creating better learning pathways for students. With the increasing reliance on online learning platforms, there is a vast amount of academic content that requires proper organization into dependency graphs to aid in indexing\nfor smart search capabilities and providing defined learning paths for students. Research has shown that organizing content in this manner has significant benefits for learning, even in offline settings. A meta-analysis of 55 studies involving over 5,000 participants found that students who use concept maps for their daily studies were able to learn more in the same amount of time (Nesbit and Adesope, 2006).\nAlthough learning content is organized in textbooks and MOOCs, the creation of dependency graphs for academic videos serves to extend this organization, enabling us to identify only the relevant and required content for a specific learning objective based on prerequisite relationships. Such a system allows us to recommend personalized learning pathways to users, fostering efficient and effective coverage of specific academic concepts. This tailored approach enhances students’ educational experiences and promotes better understanding of the subject matter. Moreover, it saves time for the student by ensuring that all required concepts or skills are covered before viewing content related to the desired academic concept. In this study, we propose a two-stage methodology for identifying prerequisite relationships among academic videos. The process begins with transcribing videos utilizing a speech-to-text model, combined with a language model specifically trained on a K-12 domain corpus. Subsequently, we extract innovative similarity-based features from these transcripts to determine the prerequisite connections.\nThe features employed in our study have been meticulously designed with the guidance of expert educators in the respective domain. These features utilize several similarity-based factors between two videos to identify pre-requisite dependencies. These factors include similarities between titles, content, and taxonomy. We also use keyphrase extraction algorithms to identify the topics covered in the transcripts and then compare the similarity\n539\nbetween them. Our work introduces the use of extracted keyphrase-based similarity for this task, contributing a novel approach to this research domain. Once the features are extracted we use models such as LGBM (Ke et al., 2017), Random Forrest (Breiman, 2001), and ExtraTrees (Geurts et al., 2006) to predict prerequisite dependencies. Our approach for identifying prerequisite relationships among educational videos demonstrates superior performance compared to existing benchmarks.\nTo evaluate our pipeline, we used a hand-labeled dataset of K-12 academic videos with annotated pre-requisite edges. 
We introduced a novel feature-based PDF document parser that extracts a K-12 text corpus, which ensures correct transcription of domain-specific terminologies and extraction of accurate semantic similarity-based features that take into account the contextual meaning of such terms. This tool extracts a hierarchical and well-organized corpus of K-12 academic text from core curriculum textbooks, strengthening the resilience and effectiveness of both pipeline stages when addressing technical vocabulary.\nThe primary contributions of our research can be enumerated as follows:\n• A method to extract transcripts from academic videos by using a speech-to-text model such as Wav2Vec2 (Baevski et al., 2020) along with a language model built from a corpus of K-12 academic content.\n• A novel set of similarity-based features that can predict prerequisite edges between academic videos.\n• A method to parse academic PDF textbooks using novel layout-based features to extract hierarchical learning taxonomies and content.\n• We introduce the following datasets:\n– A hand-labeled dataset of 2,797 prerequisite edges between academic videos annotated by domain expert teachers.\n– Extracted transcripts using various methods and ground truth transcripts for a randomly selected subset of videos available in the public domain.\n– Hand-labeled textbooks parsed with all section headers, text body, and chapter names, as well as an object detection textbook page image dataset, with bounding boxes labeled on all instances of section headers.\nThe datasets are released at https://bit.ly/412WkQp and a demo for the generated prerequisite edges can be found at https://bit.ly/3VrzMYL.",
"2 Current work": "Our end-to-end pipeline to identify prerequisite dependencies between academic videos is novel. However, the sub-problems, such as transcript extraction, prerequisite edge detection, and parsing textbook PDFs have been well-studied in the literature.",
"2.1 Transcript extraction": "Speech-to-Text Recognition (STR) technology is widely used in the online learning domain. Previous studies have shown that students, especially those with learning disabilities, can greatly benefit from transcripts of learning content (Leibold and Buss, 2019). With an increase in the availability of large-scale datasets and newer deep-learning algorithms, many different methods have shown great performance on this task. End-to-end sequenceto-sequence (S2S) modeling using RNN-based, Transformer-based, and Conformer based models are often used for this task (Wang et al., 2020). Newer methods such as Wav2Vec2 (Baevski et al., 2020) have achieved great performance by masking speech input in the latent space and solving a contrastive task defined over a quantization of the latent representations which are jointly learned. This model trained on the librispeech automatic speech recognition (ASR) dataset (Panayotov et al., 2015) has found wide adoption for speech-to-text tasks. We augment the Wav2Vec2 speech model with a 5-gram n-gram language model trained on a corpus of K-12 academic textbooks.",
"2.2 Pre-req edge identification": "Identification of prerequisite relations between academic concepts has been a subject of study for decades. Teachers and curriculum planners have extensively utilized this knowledge to determine the order in which chapters are organized in conventional learning textbooks and to guide students in covering the syllabus efficiently (Novak, 1990). However, recent data-driven approaches have facilitated the automated identification of prerequisites, resulting in enhanced performance and the emergence of new research avenues. One example is the information-theoretic approach proposed by\n(Gordon et al., 2016). External knowledge bases, such as Wikipedia, have also been extensively employed. Liang et al. (2019) utilizes active learning on hand-crafted features (Liang et al., 2018b), while Sayyadiharikandeh et al. (2019) leverages Wiki click-stream-based features for prerequisite detection. Additionally, incorporating features similar to those employed in (Liang et al., 2018a), along with Long Short-Term Memory (LSTM) networks, has demonstrated strong performance as reported in (Miaschi et al., 2019). However, finding exact Wikipedia articles for domain-specific academic concepts is an error-prone process with poor results from direct search. Therefore, in our method, we avoid this mapping and find relevant features from the videos themselves. Recently, some methods have been developed to explore the determination of prerequisites between any two textual documents from different domains, including video transcripts, Wikipedia, etc. One such method, leverages aggregated fast-text word embeddings (Bojanowski et al., 2017) for effective prediction of prerequisites (Gasparetti, 2022). Furthermore, graph-based deep learning methods have also been explored (Li et al., 2019), but these methods tend to require a large amount of training data and may have limited real-world performance.",
"2.3 Parsing Academic Textbook PDFs": "PDF parsing is a well-researched issue, historically addressed using rule-based techniques to extract data from documents’ layouts (Mao et al., 2003). Many recent tools use Conditional Random Fields (CRFs) which are undirected graphical models trained to maximize a conditional probability that can be used to segment and label sequence data (Singh et al., 2016).\nAdditionally, it is possible to treat PDFs as im-\nages and perform text detection and extraction to extract the content. Deep learning computer vision methods have been found to be useful in this regard. For example, Siegel et al. (2018) utilized a modified version of the ResNet101 network to extract figures and captions from scientific documents. Architectures such as U-net (Ronneberger et al., 2015) has also been utilized for performing text body identification (Stahl et al., 2018). Deep learning methods are also effective for finding tables, headers, or citations in PDF files, treating it as an object detection problem. Huang et al. (2019) uses Yolo (Redmon et al., 2016) architecture to find tables in PDF files. However, it is important to note that most current work focuses on parsing research papers, and work on academic textbooks is limited.",
"3 Methodology": "In this section, we present a comprehensive explanation of the two-stage pipeline used for identifying prerequisite edges between academic videos as shown in Figure 1. The pipeline comprises a transcript extraction stage, followed by a feature extraction and classification stage for prerequisite edge detection. Additionally, the pipeline requires a corpus of academic text obtained from academic textbooks. To fulfill this requirement, we have developed our own academic textbook parser.",
"3.1 Transcript Extraction": "The first step in this process is to create a language model that can be used alongside the Wav2Vec2 speech model to improve the transcription of domain-specific terminologies. In order to create this language model, we use our corpus of academic K-12 text. This corpus contains parsed data from classes 9th to 12th for science and math subjects. To create a generic academic video tran-\nscriber, we use all textual data from this corpus. However, for a specific class and subject video transcription, it is possible to query data for only that use case and train the language model accordingly. We create a 5-gram n-gram language model using the KenLM method (Heafield, 2011). KenLM performs interpolated modified Kneser Ney Smoothing for estimating the n-gram probabilities (Kneser and Ney, 1995). This model is used to form the decoder, which is combined with the processor’s tokenizer and feature extractor to form the Wav2Vec2 processor with language model. We use this processor on the output of the Wav2Vec2 Large 960h model trained on the librispeech ASR dataset (Panayotov et al., 2015) to extract transcripts. The fine tuned language model aids the decoding process in Wav2Vec2 by providing context, which adjusts the prediction of the next token in the sequence based on the sequence of previously predicted tokens, thereby enhancing the linguistic coherence of the transcriptions.\nHowever, in order to process MP4 videos through this pipeline, we must first extract audio in the required format. Audio is extracted and saved as an MP3 file. Then, this MP3 file is re-sampled at 16 kHz (the frequency used by the Wav2Vec2 model). Also, as the model only works well with mono-audio, we check if the audio is in stereo format and convert it into mono-audio if required. We use FFmpeg tool (Tomar, 2006) to perform this processing. Finally, the processed audio is saved as WAV files that can be passed into the model to extract transcripts.",
"3.2 Pre-requisite Edge Detection": "The problem of finding prerequisites between academic videos is formulated as follows. An academic video corpus of an online learning platform can be represented by n videos, denoted as C = {V1, · · · , Vi, · · · , Vn} (1), where each Vi is one academic learning video. Each video Vi can be further represented as Vi = {Transcript, Title, Taxonomy, Extracted Phrases} (2).\nTranscript is the document of video text of the form Transcript = (s1 . . . si . . . s|V |) (3) , where si is the ith sentence of the video text.\nTitle is the heading of the video, which is typically the academic concept that the video covers.\nTaxonomy is a tuple associated with each video of the form: (su, cl, ch, to, st) (4) where su ∈ Su, cl ∈ Cl, ch ∈ Ch, to ∈ To, st ∈ St where\nthe set of all subjects is represented as Su, the set of all classes as Cl, the set of all chapters as Ch, the set of all topics as To, and the set of all subtopics as St. All sets, Su, Cl, Ch, To, and St pertain to the K12 curriculum. Furthermore, in this paper, we use a subset of Su and Cl as follows: Su = {Science, Mathematics, Physics, Biology, Chemistry} and Cl = {x | x ∈ Z, 6 ≤ x ≤ 12}.\nExtracted Phrases is an ordered set, denoted as {pi|i ∈ N, 1 ≤ i ≤ m} (5), comprising of phrases extracted from the Transcript of Vi (3) using Textrank (Mihalcea and Tarau, 2004). Here, m represents the total number of extracted phrases, and pi denotes the ith phrase. pi is ranked higher than pj if i < j. We opted for TextRank for keyword extraction due to its unsupervised, graph-based nature, which enables it to effectively capture contextual and semantic relationships within the diverse and complex language used in academic video transcripts. Its simplicity and versatility across domains also ensured it could efficiently handle our broad range of data.\nBased on these definitions, the problem of finding prerequisites between academic videos in corpus C (1) can be represented by a function F : C2 → {0, 1}, where :\nF (⟨a, b⟩) = { 1 if a is prerequisite of b\n0 if a is not prerequisite of b (6)\nand where ⟨a, b⟩ is a video pair (7) , a, b ∈ C (1). Given this video pair ⟨a, b⟩, we can extract a set of similarity-based features from their content (2). Let (Tra, T ia, Taa, Ea), (Trb, T ib, Tab, Eb) (8) be the transcripts, titles, taxonomies and extracted phrases of videos a and b, respectively. In order to find similarity-based features between these, we define a set:\ncontent pair = { (x, y) |x ∈ Tra, T ia, Taa, Ea\ny ∈ Trb, T ib, Tab, Eb (9)\nWe prune the set content pair manually to remove repeated and unnecessary pairs, and then define a function S : content pair → R (10) that computes the similarity between each pair of corresponding elements of the two videos. Let fi be one possible value generated by S, we take all these possible values together to form the final feature vector k = (f1, f2, · · · , fn). These features can then be used to learn the function F : C2 → {0, 1} (6) using a supervised learning algorithm.",
"3.2.1 Calculating Similarity": "For calculating the similarity as part of the function S (10) described above, we use the following approach: We employ two fine-tuned models, Word2Vec Skip-Gram (Mikolov et al., 2013), pre-trained on 100B Google News words and finetuned with a lock-factor of 0.2 for 5 epochs on our K-12 corpus, and FastText (FT) (Bojanowski et al., 2017), also fine-tuned on the same corpus. Word2Vec is utilized for phrases with less than 5 words; FT for longer phrases. For Word2Vec, embeddings are averaged to obtain a 300-dimensional vector, while FT directly generates sentence-level embeddings. Cosine similarity is computed between the 300-dimensional vectors to determine similarity scores, with -1 indicating complete dissimilarity and 1 representing identical inputs.\nWe opted for Word2Vec and FT, over transformer models, for their computational efficiency and simplicity, given our large transcript dataset. Word2Vec was chosen due to its strength in handling common words, while FT was selected for its speed and reduced out-of-vocabulary issue, which is particularly useful for longer phrases. Despite the embeddings being in different spaces, the similarity computation remains consistent as we use Word2Vec for shorter phrases and FT for longer ones, ensuring comparable similarity scores across phrase lengths.",
"3.2.2 Features Extracted": "The following features are extracted for each video pair < a, b > (7):\n• Title similarity: the similarity between the titles of the two videos Tia, T ib (8), is expected to be higher if the videos occur in a linked context in the K-12 corpus, suggesting that they have pre-requisite dependencies. • Taxonomy Similarity: Chapter- and subjectbased information is vital for determining the prerequisite order of videos. Hence, we calculate the similarity as described above between the taxonomies of two videos Taa, Tab (8). • Title and Transcript similarity: The title of a video appearing in the transcript of another video can be utilized to find dependencies. Therefore, we find similarity between the Title and Transcript Tia, T rb and Tib, T ra (8):\n– Simple count of Title and its subsentences in the Transcript.\n– Sum of similarities between Title and all phrases in the Transcript i.e for Tia, T rb we compute\n∑|Vb| i=1 ∑phrases(si) j S(Tia, j) (10)\nwhere, phrases(si) represents the word phrases in the sentence si and not the extracted phrases using textrank.\n– Cosine similarity between the TF-IDF vectors of Title and Transcript.\nAdditionally, we apply this process to the first 500 characters of the Transcript, as these initial sentences often contain crucial information that indicates prerequisite relationships (Liang et al., 2018a).\n• Title and extracted phrases similarity: The title of one video occurring as an important topic in another video can indicate that it is a prerequisite. Thus, we calculate the similarity between Tia, Eb and Tib, Ea (8):\n– ∑|Eb|\ni=1 S(Tia, pi) where pi ∈ Eb and∑|Ea| i=1 S(Tib, qi) where qi ∈ Ea. – List of instances where the similarity exceeds specific thresholds: {pi ∈ Eb|S(Tia, pi) > t} and\n{qi ∈ Ea|S(Tib, qi) > t}, where t ∈ {0.1, 0.2, . . . , 0.9} (11)\n• Title and taxonomy similarity: We compute S(Tia, j) where j ∈ Tab and S(Tib, l) where l ∈ Taa (4) to take into account the relatedness of the video title Ti with the subject, chapter, topic or sub-topics in the taxonomy Ta of the other video.\n• Similarity between extracted phrases: For each phrase pi ∈ E′a, where E′a denotes the top 10 extracted phrases in Ea, we find the similarity with the extracted phrases in Eb (5), and then sum these similarities while multiplying with the weight wi:\nwi ∑\npj∈Eb S(pi, pj) where wi =\n1\nλi\nand i ∈ N : 1 ≤ i ≤ 10. We obtained the best results when λ = 1.1. The motivation behind the weighting parameter arises from the notion that higher-ranked phrases tend to be of greater importance or relevance for prerequisite determination. By incorporating this weighting scheme, we assign more weight to the phrases that are ranked higher, hence magnifying their influence on the similarity score.\n• Similarity between video content: To calculate the overall similarity between the two transcripts, we utilize cosine similarity between their TF-IDF vectors, treating them as two independent textual documents. For calculating similarity between two large video transcripts, we use TF-IDF due to its computational efficiency and its capacity to detect recurring themes. TF-IDF, when combined with cosine similarity, enables us to compute the overall resemblance between transcripts, irrespective of their length. This makes it a practical solution for identifying textual similarities in extensive video transcripts.\nThe aforementioned features result in a feature vector of size 316. 
Additionally, we append a 665-length Bag of Words (BOW) vector, representing the combined titles of the two videos in the format \"<Title of Video A> <Space> <Title of Video B>\". This yields a combined feature vector of size 981, which is used to train our models in a supervised setting. We evaluated the performance of 36 widely-used machine learning models for all supervised tasks in this study, and present the results of the models that demonstrated superior performance.",
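To make the rank-weighted phrase feature above concrete, the following sketch computes the weighted sum over the top-10 extracted phrases with weights wi = 1/λ^i (λ = 1.1, as reported above); the similarity function S is assumed to be the one defined in Section 3.2.1.

def weighted_phrase_similarity(phrases_a, phrases_b, S, lam=1.1):
    """Rank-decayed sum of phrase similarities between two phrase lists.

    phrases_a/phrases_b: TextRank phrases ordered by rank (best first).
    S: a phrase-level similarity function, e.g. the one from Section 3.2.1.
    """
    total = 0.0
    for i, p in enumerate(phrases_a[:10], start=1):
        w = 1.0 / lam ** i                     # higher-ranked phrases weigh more
        total += w * sum(S(p, q) for q in phrases_b)
    return total

# Illustrative usage with toy phrase lists and a trivial similarity function.
toy_S = lambda x, y: 1.0 if x == y else 0.0
print(weighted_phrase_similarity(["osmosis", "diffusion"], ["diffusion"], toy_S))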
"3.3 Parsing Academic Textbook PDFs": "Previously, it was demonstrated that a hierarchically organized and clean K-12 academic corpus is essential for both transcript extraction and prerequisite edge determination. To accomplish this, we have created a collection of academic textbook PDFs that are publicly available1. We have selected PDF textbooks in the science, physics, chemistry, biology, and mathematics domains for classes 9th through 12th. Initially, these PDFs are converted to XML using PDF2XML (Peng and Zhang, 2004). Following this, we classify each font into one of three text classes: chapter names, section or subsection headers, and text body, based on the following features:\n• Font frequency and size: Chapter names and section headers use fonts that are larger and occur less frequently than the general text, making their font occurrence frequency and size distinct from the general text. • Font location and page occurrence: Chapter names and section headers are positioned at\n1NCERT website\nthe top of the page, and chapter names occur earlier in the overall text. This allows the use of statistical measures of font average location and page number, to distinguish between different text classes. • Color: Section headers and chapter names frequently use distinct colors. We calculate Euclidean color distance (12) between font color and black and white colors to quantify the font color’s uniqueness compared to the page’s most common colors. dist(C1, C2) = √ (r1 − r2)2 + (g1 − g2)2 + (b1 − b2)2\n(12) where C1 and C2 represent RGB color values [r1, g1, b1] and [r2, g2, b2] respectively.\n• Line width and section numbers: Section numbers (13) can distinguish section headers from other text classes. Additionally, chapter names tend to have a narrower average line width. Sectionno. = x.y.zorx.y, wherex, y, zϵN\n(13) Upon extracting the features, a machine learning model classifies each font into three text classes, assigning a class to each text line based on its font. Following the extraction of academic content, section and chapter names, section numbers in headers are utilized to derive the taxonomy. The extracted textual data and its hierarchical structure are included in the released datasets.",
"4.1 Transcript dataset": "To showcase the efficacy of our proposed Wav2Vec2 speech model combined with the language model trained on our K-12 corpus, we assembled a dataset comprising five random academic videos in the science and math domains from YouTube. We provide ground truth subtitles for these videos, alongside subtitles extracted by our algorithm and other benchmarks for comparison.",
"4.2 VID-REQ pre-requisite dataset": "To assess our approach, we introduce Vid-Req, a large-scale video prerequisite edge dataset. We initially gathered over 1,500 animated academic videos covering science, mathematics, chemistry, physics, and biology for grades 6 through 12 from Extramarks a leading EdTech company. On average, each video encompasses 418 words. However, these videos resulted in 1,124,250 distinct\nvideo pairs (1500C2), which was an overwhelming amount for labeling. Consequently, we selectively choose videos based on a specific criterion to reduce the dataset to a more manageable size. For this purpose, we firstly find chapter-level prerequisites and formulate the set CP = {(ch1, ch2)|ch1 is a prerequisite of ch2} where ch1,ch2 are chapters. Using CP , we form the potential video prerequisites set PV P = {(a, b)|a, b ∈ C, (cha, chb) ∈ CP, cha ∈ Taa, chb ∈ Tab} (1,4,9). Then, we prune the set PV P to form PV P ′ = {(a, b)|S(Tia, T ib) > 0.7, (a, b) ∈ PV P}. This set comprises 2,797 edges that we have hand-labeled, of which 1,684 are labeled as 0 (non-prerequisite edges) and 1,113 as 1 (prerequisite edges).\nFigure 2 displays the pre-requisite edge statistics for the entire dataset, including label 0 (not pre-requisites) and label 1 (pre-requisite edges) on the left, and only label 1 on the right. The figure shows that science-to-science edges are most frequent in the total dataset (n=1167), but in the label=1 set (n=455), mathematics-to-mathematics edges prevail (n=470). While mathematics appears as a pre-requisite for all subjects in the full edge set, it only acts as an actual pre-requisite for itself and science. Science remains a pre-requisite for other subjects, with most pre-requisite edges leading to physics, biology and chemistry (n=61,23,20).",
"4.2.1 Annotation Process": "Multiple experienced teachers were invited and assigned to their preferred subjects, with at least three teachers per subject. These domain experts annotated video pairs, determining if video \"B\" had a prerequisite video \"A\" by assigning binary labels (1: A is a prerequisite of B, 0: A is not a prerequisite of B) and also assigned a unique taxonomy from the set of taxonomies extracted from K12 text-\nbooks parsed using our PDF parser to each video. Teachers viewed the videos thoroughly before annotating and provided well-informed judgments and reasons. The relationship is non-symmetric. After annotating 2797 video edges, Cohen’s Kappa coefficient (0.64) confirmed substantial agreement among annotators. These final annotations served as ground truth labels for model training.",
"4.3 Academic textbooks dataset": "We generated a training dataset for PDF parsing by downloading 26 textbooks from2 and converting them to XML using PDF2XML. These textbooks span various subjects and classes, covering 662 unique fonts for chapter names (n=53), text body (n=563), and section name (n=46) text classes, hand-labeled by expert academicians. The model trained on this dataset was used to parse 189 PDFs for subjects like science, math, chemistry, biology, and physics for classes 9 to 12. Intermediary XML files and extracted text with taxonomical hierarchy and page numbers have been released.\nAdditionally, we created a dataset of 731 handlabeled textbook pages to test our method with object detection baselines, using an 80:10:10 train, validation, and test split. Pages were converted to 416x416 pixel JPEG images, and three augmentations (horizontal flip, vertical flip, and random crop) were applied which led to the final 1755 images with 1901 total objects.",
"5.1 Transcript extraction": "We evaluated the performance of Wav2Vec2 Large 960h (Baevski et al., 2020) trained on the Librispeech ASR dataset (Panayotov et al., 2015), with and without our language model (Wav2Vec2 and Wav2Vec2-LM), using Word Error Rate (WER), Match Error Rate (MER), and Word Information Lost (WIL) metrics. We compared it to the Deepspeech ASR method (Amodei et al., 2016), with Wav2Vec2 outperforming Deepspeech in speed and accuracy. Both models ran on CPU, reporting average run-time per video in seconds. Our language model’s inclusion improved domain-specific word transcription and reduced error rates, as shown in Table 1.\n2NCERT Textbooks Webpage",
"5.2.1 Performance on VID-REQ dataset": "Upon evaluation, three models emerge as the topperforming models on our released dataset of 2,797 prerequisite video pairs (VID-REQ). These models—Extra Trees (Geurts et al., 2006), LightGBM (LGBM) (Ke et al., 2017), and Random Forest classifiers with linear SVC feature selection (RFSVC) (Breiman, 2001)—are assessed using 5-fold cross-validation, reporting mean accuracy, precision, recall, and F1-score as shown in Table 2. Hyperparameters for each model were fine-tuned via grid-search from Scikit Learn (Pedregosa et al., 2011). Extra Trees emerged as the best-performing model with an F1-score of 79.08%. Although both Extra Trees and Random Forest employ multiple decision trees, the difference in performance can be attributed to their responses to various feature characteristics. The unique splitting mechanism of Extra Trees, which involves more randomness, lends robustness when dealing with potentially noisy or complex data. This resilience to the inherent complexities of the feature set likely contributed to Extra Trees’ superior performance over the LGBM and RF-SVC classifiers in our study. We employed the F1-score as a reliable metric given its simultaneous consideration of both precision and recall. This is crucial from a learner’s perspective, as it is vital to prevent mislabeling non-prerequisite videos as prerequisites while accurately identifying all essential prerequisite videos. Moreover, the F1 metric effectively addresses the slight class imbalance present in the dataset.\nFurthermore, we replicate the approach outlined in Gasparetti (2022) on our dataset as a baseline comparison. This technique utilizes aggregated fast-text word-embeddings input into SVC and RF classifiers to predict prerequisite dependencies between pairs of textual documents. As demonstrated in Table 2, our method surpasses the baseline in all metrics, with an F1-score exceeding by more than 10%.",
"5.2.2 Performance on AL-CPL dataset": "We also compared our features with those of (Liang et al., 2018b, 2019). The dataset released in Wang et al. (2016) is the most widely used Wikipedia prerequisite dataset, which covers data mining, geometry, physics, and pre-calculus subjects. The authors of Liang et al. (2018b, 2019) have pre-processed this data which is released as the AL-CPL dataset. We extract our features from this dataset and quote F1-score performance using 5 fold cross validation of the best performing model i.e., Random Forest with linear SVC feature selection in Table 3. We also compare the results of this model with those of Miaschi et al. (2019) who have used a multimodal architecture that uses LSTM and global features similar to Liang et al. (2018b, 2019) to predict pre-requisites. Both the above mentioned methods quote mean 5-fold cross validation results for the F1 metric. However, Miaschi et al. (2019) has showcased performanced on in-domain and crossdomain prerequisite relationships separately, on 3 variants of their proposed architecture (M1,M2,M3). Thererfore, in order to facilitate direct comparison we choose best results for the F1-score across the models and then take average of the in-domain and cross-domain results. As evident in Table 3 our method surpasses Liang et al. (2018b, 2019) for all subjects and Miaschi et al. (2019) for 3 out of 4 subjects. The average F1-score across subjects of our methods also surpasses that of Miaschi et al. (2019).",
"5.2.3 Performance on Meta-Academy dataset": "We further showcase performance of our method on another Wikipedia pre-requisite dateset that includes pre-requisites extracted from MetaAcademy (Sayyadiharikandeh et al., 2019). Metacademy is a free, open-source platform encompassing 487 machine-learning concepts connected by 7,947 prerequisite pairs. Our top-performing model, RF-SVC, trained on our novel features, demonstrates superior performance compared to the AdaBoost model trained on Wiki-clicks-based features (user navigation patterns on Wikipedia) on this dataset. As exhibited in Table 2, our model surpasses the AdaBoost model across all metrics, with an F1-score exceeding by over 5%.\nThese experiments showcase the robustness of our features, exceeding benchmarks for Wikipedia prerequisites tasks, even though they were designed for videos. This success can be attributed to our in-depth collaboration with domain expert teach-\ners during feature creation, leading to enhanced effectiveness and performance of our algorithm.",
"5.3 PDF Parsing": "To evaluate performance on the dataset described in Section 4.3, we use an 80:20 train-test split. The LightGBM classifier (Ke et al., 2017) achieves the best classification results as shown Table 4 and is used in the PDF parser to generate our K-12 corpus.\nTo compare our PDF parsing methods with recent deep learning-based approaches, we treat the extraction of text-classes as an object detection problem, focusing on the crucial section name text class. We use a random subset of textbooks (46 section headers) and extract section headers using both methods. Headers are considered correctly matched if they have distance D (14) less than 0.6 (Doucet et al., 2011; Wu et al., 2013).\nD = LevenshteinDist(A,B) ∗ 10\nMin(Len(A), Len(B)) (14)\nFor this experiment, we use the YOLOv5 model (Jocher, 2021) for object detection and EASYOCR (AI, 2021) to extract text from cropped header images. Our font-based classification method outper-\nforms the YOLO + OCR approach in both performance and average per-page time as shown in Table 5. The deep learning method’s low precision stems from its reliance on visual features alone, which are inadequate for detecting text-classes. In contrast, our method utilizes text, color, and occurrencebased features for accurate classification, and by labeling only the fonts in PDF textbooks, it achieves faster and more precise performance.",
"6 Conclusion": "In this paper, we present a pipeline for detecting prerequisite dependencies among academic videos using novel similarity-based features. Our approach outperforms existing methods, even surpassing prerequisite detection in domains like Wikipedia. We introduce hand-labeled datasets to discover prerequisite relations across diverse subjects, fostering future research in this area.\nFuture work will explore additional features and methods, extending our approach to a broader range of educational content such as podcasts, slides, and lecture notes. We also aim to integrate collaborative filtering and recommender systems for personalized learning paths, enhancing students’ educational experience and learning outcomes."
}
ACL_23_no_limitation/ACL23_1249.json
ADDED
@@ -0,0 +1,16 @@
{
"File Number": "1249",
"Title": "Transformer-based Hebrew NLP models for Short Answer Scoring in Biology",
"abstractText": "Pre-trained large language models (PLMs) are adaptable to a wide range of downstream tasks by fine-tuning their rich contextual embeddings to the task, often without requiring much taskspecific data. In this paper, we explore the use of a recently developed Hebrew PLM – alephBERT – for automated short answer grading of high school biology items. We show that the alephBERT-based system outperforms a strong CNN-based baseline, and that it generalizes unexpectedly well in a zero-shot paradigm to items on an unseen topic that address the same underlying biological concepts, opening up the possibility of automatically assessing new items without item-specific fine-tuning.",
"1 Introduction": "Advances in NLP offer transformative technology to support educational practice, including scoring of constructed (free text) responses in both holistic and analytic fashion. In particular, pre-trained large language models (PLMs) hold great promise for applications that require sophisticated context-rich analysis of student responses.\nHowever, progress in PLMs and their applications in English outstrips that in other languages. Recent research in Hebrew NLP made available a new Hebrew PLM – alephBERT (Seker et al., 2022); while it has been shown to be effective for NLP tasks such as POS tagging and NER, its effectiveness for a downstream automated scoring application is an open question.\nWe evaluate alephBERT-based classifiers for the task of analytic content-scoring of short answers in biology in a formative high school setting, comparing it to a strong CNN-based baseline.\nWe contribute new knowledge about the effectiveness of BERT-based classifiers in languages other than English for a content-scoring task. Our two key findings are that the alephBERT-based classifiers i) provide a significant improvement over\nthe CNN-based baseline; and ii) generalize surprisingly well to unseen items that deal with the same underlying scientific concepts but in the context of a different topic. We briefly discuss implications of the findings and directions for future work.",
"2 Related Work": "An especially promising application area of NLP is automated analysis of responses to open-ended questions, either in the form of a full essay, where the goal is typically a demonstration of proficiency in writing in a particular genre (Beigman Klebanov and Madnani, 2021), or in the form of short responses, where the goal is typically to demonstrate content knowledge. In this paper, we consider the latter application, often termed Automated Short Answer Grading (ASAG).\nTo date, most of the scientific development on ASAG has been done in English (see Haller et al. (2022) for a survey), including ASAG using PLMs (Bexte et al., 2022; Li et al., 2021; Condor, 2020; Sung et al., 2019a,b), although work on PLMS for ASAG in other languages does exist, e.g., Japanese (Oka et al., 2022), Arabic (Nael et al., 2022).\nRecently researchers also used multi-lingual PLMs for ASAG: Schneider et al. (2023) used the LaBSE multilingual transformer model (Feng et al., 2022) for scoring very short responses (the bulk of the responses are 5 words or shorter) in a variety of subjects and in 14 languages. Unfortunately, the authors did not provide a detailed breakdown of performance by language or by subject area, although they did show that numeric responses tended to be easier to score than textual or mixed ones, across multiple languages. Interestingly, while there were relatively few responses in English (1.7K), the system’s error on scoring textual responses in English was lower than for Ukranian, which had more than two orders of magnitude more responses than English (500K), which could suggest that languages with smaller digital footprints and therefore less\n550\ndata for pre-training the PLMs would still be at a disadvantage even if there are a lot of responses in those languages for the specific task.\nThe ASAG task for Hebrew was addressed by Ariely et al. (2023). The authors built CNN-based classifiers that used word2vec embeddings; these models will serve as baselines for the current work. Hebrew, like Arabic, is a semitic language where vowels are generally omitted in writing, resulting in substantial ambiguity where the same sequence of written letters can have many meanings depending on context. Therefore, a PLM that implements the latest contextualization advancements holds great promise for ASAG in Hebrew. AlephBERT, the recently introduced Hebrew PLM (Seker et al., 2022), shows SOTA performance on multiple tasks, including morphological and POS tagging and NER. Our goal is to evaluate alephBERT for the ASAG task in Hebrew.",
"3.1 Data": "The data consists of responses to open-ended questions on three biology items from 669 students in grades 10-12 from about 25 high schools across Israel. There are thus 669 labeled responses for each of the three items (henceforth, q1, q2, q3), scored by a team of content and pedagogy experts with a binary score per category; that is, for every response, there are 10-13 binary labels according to the analytic rubric for the given item.\nThe items present questions about the effect of smoking (q1), anemia (q2), and travel in high altitude (q3) on physical activity. A very similar analytic rubric is used for all three items to assess students’ ability to write causal explanations in biology. The rubric consists of a causal reasoning chain built from 13 categories, each of which evaluates whether a specific scientific fact or causal relation is addressed correctly in a response. Table 1 shows the mapping between the items and the binary analytic categories. Table 2 shows brief definitions of the categories. Figure 1 shows the score distributions per item per category. We observe that item q3 is harder than items q1 and q2 on most categories shared by the three items.\nThe rubric evaluates the ability to explain stepby-step the causal chain leading to the phenomenon. For example, q1 asks students to explain how high levels of CO make it difficult for smokers to exercise. Two responses are shown below, trans-\nlated into English. Response 1 was given credit for mentioning the changes in oxygen levels after CO binding to hemoglobin (category 1), for stating the connection between the decreased cellular respiration rates and the reduction in the generation of energy which is necessary for physical activity (8-12). However, the reasoning chain is not articulated fully, since the transfer of oxygen to the cells by red blood cells and the role of oxygen in cellular respiration are not stated (no credit for categories 3-7). Conversely, Response 2 does mention the impairment of oxygen transfer to the body and cells (4 and 5), but does not include the parts of the explanation that connect oxygen to cellular respiration and cellular respiration to production of energy for the physical activity, hence no credit is given on categories 6-12.\nResponse 1 A cigarette contains several harmful substances, including CO. CO has a strong tendency to bind to hemoglobin found in red blood cells. As a result, less oxygen binds to hemoglobin, which affects the rate of cellular respiration. Because the rate of cellular respiration slows down, less energy is generated in the cells of the body, so the cells do not have enough energy to perform physical activity and difficulty is created. Scores: [-, 1, -, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]\nResponse 2 Because those carbon dioxide molecules bind to hemoglobin, the transfer of oxygen to the body’s cells is impaired.\nLack of hemoglobin and oxygen explains the difficulty of people who smoke to exercise. Scores: [-, 1, -,0 ,1 ,1 ,0 ,0 ,0 ,0 ,0 ,0 ,0]\nThis rubric was developed in consultation with teachers to support in-class formative assessment, for example by assigning students to small study groups based on reasoning types revealed in their response patterns.\nThe items are typical open-ended questions commonly used (or versions of them) in teaching materials in biology and in the Israeli high school matriculation exam (‘Bagrut’). The three items were presented to students in a randomized order. 
The average response lengths are 55, 48, and 70 words, with standard deviations of 34.5, 27.4, and 48, for q1, q2, and q3, respectively. The data collection was approved by an IRB and includes permission to use the data for research. The data was collected prior to and independently of this study and was previously used in the computational experiments of Ariely et al. (2023).",
"3.2 Experiment design": "In this study, we investigate how well an alephBERT classifier performs on analytic ASAG, compared to the CNN-based system of Ariely et al. (2023). We conduct evaluations in two scenarios: (a) within-item, where train and test data come from the same item, and (b) cross-items, where the system is trained on two items and tested on the third. The main goal of the latter evaluation is to address cases where a new item is created that deals with a different application area of the same scientific concept, that is, a new item that would address cellular respiration mechanism in a different real-life application. This is a common pedagogical strategy for creating teaching, practice, and multiple forms of assessment materials.\nWe partition the students into train, development, and test groups in the 60/20/20 proportions respectively; their responses comprise the q1-train, q1dev, q1-test sets, and the same for q2 and q3. This is done in order to ensure that responses from the same student do not appear in both train and test data in the evaluations. We build a classifier for each category (13 classifiers in total); while the student responses are the same across categories (we are using the full text of the response), the labels may differ across categories. That is, a given response can have the score of 0 on category 3 and the score of 1 on category 8, as in Response 1 shown in section 3.1.\nFor within-item experiments, we train on q1train and test on q1-test; same for q2 and q3. For cross-item experiments, we train on the combination of q1-train and q2-train and test on q3-test; the same for the other two permutations of the items. In this design, in addition to benchmarking against prior work, we also compare performance between within-item and cross-item scenarios, e.g., results on q3-test when trained on q3-train vs trained on the combination of q1-train and q2-train.\nFor evaluation, we use Cohen’s κ, per item per category. We also report proportion of categories with κ > 0.6, to get a sense of the extent to which the rubric as a whole can be automatically scored with reasonable reliability for a formative context. Ariely et al. (2023) reported average performance over 50 iterations of cross-validation for each item and each category; in our context, it is prohibitively time-consuming to run such a large number of evaluations. We report evaluations on q1, q2, and q3 test sets for the alephBERT models; thus, performance estimates for alephBERT are somewhat noisier than for the CNN baseline.",
"4.1 Baseline": "For the baseline, we use published results for CNNbased classifiers reported in Ariely et al. (2023), where each classifier predicts whether a certain category is addressed in the response. Pre-processing included tokenizing the input text and performing a morphological and syntactic analysis using Hebrew NLP tools. Word embeddings over a vocabulary of frequently-used morphemes and their part of speech were constructed using Gensim’s word2vec CBOW algorithm. The embeddings were fed forward into two consecutive convolutional layers, followed by a fully connected layer and a sigmoid activation function. The embeddings (of size 100)\nwere trained on the entire Hebrew Wikipedia.",
"4.2 AlephBERT based models": "AlephBERT PLM (Seker et al., 2022) is based on the same architecture as the English BERT PLM (Devlin et al., 2018). AlephBERT was designed to handle Hebrew morphology; see Seker et al. (2022) for a detailed description. AlephBERT was trained on a larger corpus than any Hebrew language model before it, including Twitter, Hebrew wiki and the Hebrew portion of the Oscar dataset (Ortiz Suárez et al., 2020). It was not specifically trained on biology or science data beyond the occurrence of these topics in the general corpora. It includes 12 layers, i.e., transformer blocks (768 units per layer), 12 attention heads, the total of 110M parameters and vocabulary size of 52K.\nFor every category, we built a classifier that uses the alephBERT PLM pre-trained embeddings and an additional classification layer, with sigmoid activation. We fine-tune the models on our training data using cross-entropy loss; all layers of the model are tuned. The learning rate and number of epochs hyperparameters were tuned on dev sets.",
"5 Results": "Table 3 shows the performance of the alephBERTbased system on all <category, item, case> combinations, where case refers to ‘within-item’ or ‘cross-item’. The performance of the CNN baseline is shown as published in Ariely et al. (2023).",
"5.1 Comparison to CNN baseline": "AlephBERT-based models perform significantly better than the baseline, p = 0.016, using the one-sided Wilcoxon signed-rank test (paired) with n = 44 (all <item,category,case> cells in Table 3 that have results for both the models), α = 0.05. The largest gain is on category 9 within-item: from κ = .06-.73 (baseline) to κ > .90 (alephBERT). Category 9 looks for a specific phrase (‘cellular respiration’). We hypothesize that this improvement is driven by the improved ability of alephBERT to capture the rich token-internal structure of the Hebrew language reported by Seker et al. (2022) based on morpheme-level evaluations.",
"5.2 Comparison between within-item and cross-item performance": "We compare the alephBERT-based within-item models with the cross-items (i.e., zero-shot) models\non all <category, item> combinations where both models can be run (see Table 3). The cross-item performance is not significantly worse than withinitem, p = 0.9 using the one-sided Wilcoxon signedrank test (paired), n = 32, α = 0.05.\nThis is a remarkable result, since one would expect a degradation in performance for models that saw no data coming from the test item at train time. In fact, an unseen item on the same biology concept can be scored with a common analytic rubric with κ > 0.6 on average across categories for each item, which may be sufficient for formative uses and may allow teachers to create and score new items based on a similar rubric on the fly.\nWe observe a complete failure of cross-item generalization on category 1. This category occurs only in q1 and q3; the cross-item generalization is thus based on one training item. This could compromise the system’s ability to zero in on those meaning elements that are common to the two training items and instead overly rely on the specifics of the training item’s topic. Category 1 is also more difficult to address well in q3 than in q1 (30% correct vs 78% correct, see Figure 1), further complicating cross-item transfer. Understanding the necessary conditions for transfer is a topic for future research.",
"6 Conclusions": "Pre-trained large language models can be adapted to downstream tasks by fine-tuning their rich contextual embeddings to the task. We explored the recent Hebrew PLM – alephBERT – for short answer grading in high school biology. We found that the alephBERT-based system outperformed a strong baseline and that it generalized unexpectedly well to items on an unseen topic addressing the same biology concepts. The second finding provides evidence in support of the viability of the modular design of the rubric – not only is it the case that human raters were able to reliably assess different items with subsets of the same analytic categories, but an automated model was likewise able to zero in on the commonalities in the way categories are manifested in student responses across multiple topics.\nThe cross-item generalization has exciting implications for educational practice, as this may allow teachers to create and automatically score new items based on a similar rubric on the fly. A study of this possibility with teachers and an improvement of our understanding of the conditions neces-\nsary for the successful transfer to occur are two of the directions of our future work, as well as further enhancement of the scoring system.",
"Acknowledgements": "This research was partially supported by the Israeli Council for Higher Education (CHE) via the Weizmann Data Science Research Center."
}
ACL_23_no_limitation/ACL23_1252.json
ADDED
@@ -0,0 +1,22 @@
{
"File Number": "1252",
"Title": "Rating Short L2 Essays on the CEFR Scale with GPT-4",
"abstractText": "Essay scoring is a critical task used to evaluate second-language (L2) writing proficiency on high-stakes language assessments. While automated scoring approaches are mature and have been around for decades, human scoring is still considered the gold standard, despite its high costs and well-known issues such as human rater fatigue and bias. The recent introduction of large language models (LLMs) brings new opportunities for automated scoring. In this paper, we evaluate how well GPT-3.5 and GPT-4 can rate short essay responses written by L2 English learners on a high-stakes language assessment, computing inter-rater agreement with human ratings. Results show that when calibration examples are provided, GPT-4 can perform almost as well as modern Automatic Writing Evaluation (AWE) methods, but agreement with human ratings can vary depending on the test-taker’s first language (L1).",
"1 Introduction": "Automated writing evaluation (AWE) systems are commonly used to evaluate test-taker writing. AWE systems are deployed on large-scale, highstakes writing assessments used for admissions to higher education institutions, and for lower-stakes US state writing assessments that provide information about K-12 students’ academic writing performance. These systems typically use featureengineering approaches that include rule-based and statistical natural language processing (NLP) methods. NLP is used to extract features from essay writing responses that are characteristic of writing quality. Features may include errors in grammar and spelling, discourse structure, discourse coherence, vocabulary usage, and sentence variety. Features may be rule-based or statistically derived. Statistical model methods, such as straightforward linear regression, are used to train (build) AWE scoring models for high-stakes scoring of writing assessments. Detailed descriptions of systems are avail-\nable for major systems, including e-rater®, Intelligent Essay Assessor™, Intellimetric®, and PEG (Shermis and Burstein, 2013), and Cambium’s automated essay scoring system (Lottridge, in press).\nRecent advances in language modeling with neural transformer architectures (OpenAI, 2023; Brown et al., 2020) have the potential to revolutionize AWE. These large language models (LLMs) demonstrate an incredible potential to analyze and evaluate text which has implications for the future of AWE. In addition, GPT’s intuitive, text-based interface lowers barriers for use, potentially increasing accessibility and adoption of these tools for AWE. The assumptions about how LLMs – specifically GPT-4 – can be used for AWE tasks, such as automated scoring and feedback need to be evaluated to determine how we can use them beneficially, and particularly to ensure that they can be used in a fair and ethical manner (Burstein, 2023).\nPrevious research evaluated GPT-3.5 for essay scoring tasks in an L2 context (Mizumoto and Eguchi, 2023). In this paper, we evaluate GPT4 for a similar task, comparing it to GPT-3.5, human judgement, and a strong baseline using current AWE methods. We also explore various aspects that affect the accuracy of GPT’s ratings, and its fairness across gender and L1.",
"2 Data": "For our experiments, we used a human-rated dataset consisting of short essay responses collected as part of the Duolingo English Test, a highstakes test of English for L2 learners. For this essay task, test-takers are given a short written prompt randomly selected from an item bank of about 700 items. Test-takers have 5 minutes to provide their essay response to the prompt. Two human raters used a scoring rubric aligned with the Common European Framework of Reference (CEFR) (Council of Europe, 2001).\nWe started by sampling 10,000 responses from\n576\ntest sessions that took place over a 10-month period, controlling for L1 and gender. For L1, we limited responses to 7 of the most common L1 languages for the test, which also captures a broad range of language families: Arabic (ara), Mandarin Chinese (cmn), Telugu (tel), English (eng)1, Spanish (spa), Gujarati (guj), and Bengali (ben). To ensure all CEFR levels were well represented in the final dataset2, we used a simple CEFR classifier that uses logistic regression and NLP features to roughly estimate the CEFR level of each response. For the final dataset, we randomly sampled an equal number of responses for each combination of L1, gender, and estimated CEFR level from the 10,000 test sessions.\nThe scoring rubric was aligned to the CEFR scale and assessed each response based on its content, coherence, vocabulary, and grammar. The rubric instructed raters to assign each essay one of eight rating categories: six based on the CEFR scale, and two “unscorable” categories for minimal responses (e.g., provides no response or says they can’t answer the question) and bad-faith responses (e.g., off-topic or nonsensical). The full rubric is provided in Appendix B.\nBased on this rubric, two assessment researchers developed a set of calibration examples by collectively rating 676 essays, 180 of which were rated by both. The rubric and calibration examples were provided to two new human raters, who collectively rated 1,961 new essays, including a random sub-sample of 389 essays that were rated by both. Both new human raters were trained by one of the original assessment researchers and inter-rater agreement was routinely checked. Raters were provided feedback to help with calibration when necessary. The final Quadratic Weighted Kappa (QWK) between the two raters was 0.87. Ratings were roughly normally distributed (see Figure 1), with ∼53 % of essays receiving a rating of B1 or B2 and only ∼12 % getting a rating of A1 or C2.\n1Test-takers who identify their L1 as English may come from countries where English is an official language, such as India.These test-takers are required to take an English language proficiency test to attend an English-medium institution abroad.\n2In particular, the DET test-taker population’s proficiencies follow a unimodal distribution around the B1/B2 CEFR levels (Cardwell et al., 2022), and so uniform random sampling would have resulted in too few A1 and C2 essay responses being included in the dataset.",
"2.1 Methodology": "In our experiments, we used the ChatGPT API to rate these short essay responses, comparing them to human judgements using the same rubrics.\nIn the system message, we instructed GPT to rate each provided essay in one of eight rating categories: one of the six CEFR levels or one of the two unscorable categories, [No-Response] and [Nonsense/Off-Topic]. In the default setting, we provided specific criteria the two unscorable categories, but not for CEFR levels3. See Appendix C for details.\nIn addition to the system message, we also provided GPT with varying numbers of calibration examples. These examples were randomly sampled from the set of 180 essays that were double-rated by assessment researchers where both researchers agreed on the same rating. The same number of examples were provided for each of the eight rating categories. We tested providing up to the maximum number of calibration examples that would fit into each model’s token limit (generally two per category for GPT-3.5 and four per category for GPT-4)4. To avoid any possible interaction between essays, we used a fresh GPT conversation to rate each essay.\n3Querying GPT-4 easily shows that it already has some built-in knowledge of CEFR, presumambly from its massive training corpora, and can even provide CEFR descriptors for various language skills verbatim, if prompted. So, it was reasonable to evaluate GPT’s ability to apply CEFR rating categories accurately without a rubric. The same is not true for the unscorable rating categories, and preliminary experiments showed that GPT applied the unscorable labels much too broadly if their criteria weren’t elaborated in the instructions to GPT.\n4Note that this token limit applies to the entire GPT conversation, not just a single turn within the conversation, and thus this puts a hard limit on the number of calibration examples that can be provided.\nOnce all ratings were collected, we tabulated them on a scale of 0 – 6: assigning a 0 for both unscorable categories, and a score 1 – 6 for the CEFR levels. We then computed the inter-annotator agreement between GPT and rater 1 (n=1,175), computing 90% confidence intervals using bootstrapping and comparing this to the agreement between the two human raters. We also compared our results to two baselines: a machine learning (ML) classifier using only the response’s character length, and a strong baseline representative of current AWE methods that use feature engineering and statistical modeling (Attali and Burstein, 2006; Foltz et al., 1999). The strong AWE baseline, which is used to score writing responses on the Duolingo English Test, uses XGBoost (Chen and Guestrin, 2016) and is trained on hundreds of thousands of short essay responses using 85 research-based linguistic features covering a wide range of writing sub-skills, including cohesion, grammatical complexity, lexical sophistication, grammatical and lexical accuracy, length, and relevance. A more detailed breakdown of these features are provided in Appendix A.",
"3 Experiments": "We conducted three experiments. The first evaluates both GPT-3.5 and GPT-4 with a minimal rubric and up to the maximum number of calibration examples that fit within the GPT model’s token limit. The second experiment evaluates various prompt engineering strategies for improving performance. The third experiment explores GPT-4’s fairness properties across gender and L1.",
"3.1 Experiment 1: Calibration Only": "In this first experiment, we evaluated GPT’s ability to rate essay responses on the CEFR scale when provided only a minimal rubric (as described in Appendix C) and varying numbers of calibration examples.\nFigure 2 shows the QWK between GPT and the first human rater, depending upon the model used and the number of calibration examples provided. When no calibration examples were provided, neither GPT-3.5 nor GPT-4 even outperform the baseline classifier using character length only. However, by providing just one calibration example for each rating category, GPT-4 almost matches the performance of the AWE baseline (QWK 0.81 vs 0.84, p < 0.1). Providing additional examples did not result in significant improvement. GPT-3.5, on the\nother hand, did not improve much when provided calibration examples, and only outperformed the length-only baseline when provided two calibration examples per rating category (i.e., the maximum possible with GPT-3.5’s limit of 4,096 tokens).\nThe confusion matrices in Figure 3 provide more insight. We see that when no examples were provided, both versions of GPT were generally able to identify unscorable responses, and did tend to assign slightly higher ratings to better essays, but mainly rated essays in the B1 – B2 range. When provided calibration examples, GPT-4 learned to use the full range of CEFR levels, but struggled to distinguish between adjacent CEFR levels compared to humans, especially for CEFR level B2. GPT-3.5, on the other hand, improves only slightly when provided calibration examples.",
"3.2 Experiment 2: Prompt Engineering": "In our second experiment, we tested two strategies for improving the performance of GPT-4:\nDetailed Rubric - In the system message, we replaced the minimal rubric used in the previous experiment with a detailed rubric that described the criteria for each CEFR level (see Appendix C).\nRequire Rationale - In the system message, we asked GPT to provide a rationale before providing its rating in order to elicit a chain of reasoning, which has been shown to improve the the ability of LLMs to perform complex tasks (Wei et al., 2022). This also meant providing rationales for the calibration examples, which could help GPT-4 better understand the reason for each example’s rating.\nBoth of these techniques required significantly more token-space for the input prompt and thus lim-\nited the number of calibration examples that could be provided. Only up to two per rating category could be provided when using a detailed rubric, and only up to one per rating category when requiring rationales.\nAs seen in Figure 4, these strategies contributed substantial lift in performance when not providing calibration examples, but when at least one calibration example per rating category was provided, these techniques contributed negligible benefit.",
"3.3 Experiment 3: Fairness": "Ensuring that raters do not show systematic bias that can affect scoring accuracy due to background characteristics of test-takers, such as gender or L1, is an important step in rater analysis with human raters (Jin and Eckes, 2022). This is also a needed step in developing AWE systems. To investigate the extent to which GPT-4’s ratings are fair, we evaluated its performance for each gender and each of the L1 languages in the dataset.\nTo maximize statistical power and ensure that the analysis is not biased by a single human rater, we used all essays rated by any one of the raters or researchers in our dataset, except the 180 essays that were double-rated by the two researchers, which were reserved for calibration examples. The resulting dataset included 2,457 essays, roughly equally distributed among both genders and all L1s.\nWe found no significant differences in performance by gender, and while GPT-4’s ratings were slightly positively biased compared to human ratings overall (by about +0.15 CEFR levels), this bias did not vary significantly by any gender or L1 (p > 0.10).\nHowever, we did find that GPT-4 had less agreement with human ratings for essays written by L1\nspeakers of some languages compared to others: QWK was lowest for L1 speakers of Telugu (tel) at 0.66 and highest for L1 speakers of Spanish (spa) at 0.89. A more detailed analysis showed that some of the differences in agreement by L1 was explained by differences in the distribution of human ratings for those L1s. The standard deviation of human ratings by L1 ranged from 1.04 for Telugu (tel) to 1.56 for Arabic (ara). Those L1s with narrower distributions of human ratings had a greater proportion of essays rated in categories for which GPT-4 had lower rates of agreement overall, such as B2, and thus brought down the QWK for those L1s.\nWe assume that the differences in the distribution of human ratings by L1 reflect systematic errors in the CEFR classifier used in sampling (see Section 2) and possibly differences in our underlying test-taker population. Thus we controlled for these distribution differences by recomputing QWK for each L1 using importance sampling so that all L1s would have the same effective distribution of human ratings. The results are shown in Figure 5. Even after the importance sampling correction is applied, GPT-4’s ratings agreed less with human ratings for responses written by L1 speakers of Mandarin Chinese (cmn), Telugu (tel), and Bengali (ben) compared to those written by L1 speakers of Spanish (spa). It is possible that essays of some L1s are harder to distinguish and thus have less reliable human ratings, but our dataset does not consist of a sufficient number of double-rated essays to investigate this hypothesis, so we leave this for a future work.",
"4 Conclusion": "We showed that unlike GPT-3.5, GPT-4 is able to attain performance similar to conventional Automated Writing Evaluation (AWE) models when rating short L2 essays. GPT-4 only required one calibration example per rating category to achieve\nnear optimal performance, but other prompt engineering techniques we tried were not very helpful. Furthermore, when assessing fairness with respect to the test-taker’s gender or L1, we found that while GPT-4 did not show bias in favor of any one group, it showed significantly less agreement with human ratings for some L1s. It is unclear whether this is due to the reliability of GPT-4 or that of the human ratings themselves. More research is needed to understand this discrepancy and its implications for fairness. Future research may also explore other prompt engineering strategies for improving GPT4’s performance at this task, or potentially finetuning GPT-3.5, enabling one to leverage dramatically more training data than what can be provided in a prompt. Perhaps most excitingly, future work may explore GPT-4’s potential for providing feedback aligned to essay scoring: a task for which GPT-4 seems particularly well suited.",
"Acknowledgements": "We thank the researchers and raters who contributed to building the dataset, and the reviewers who reviewed our paper and provided valuable feedback, particularly JR Lockwood, Ben Naismith, Klinton Blicknell, and Alina von Davier.",
"A AWE Baseline Model Features": "Here we provide a more detailed breakdown of the features used in our AWE baseline:\n• 13 cohesion features, including overlap features and coreference counts (McNamara and Graesser, 2012)\n• 3 grammatical complexity features, including max/mean dependency tree depth and mean sentence length (Schwarm and Ostendorf, 2005)\n• 7 lexical sophistication features measuring the proportion of words at each CEFR level (including an out-of-vocabulary category for words that could not be found in the CEFR dictionary) (Xia et al., 2019)\n• 51 lexical and grammatical accuracy features, measuring the error rates across a wide variety of error types (Bryant et al., 2017)\n• 4 features using n-gram models over wordforms, lemmas, part-of-speech, and dependency tags to measure differential use of vocabulary and grammar across test-takers of different proficiency levels (Attali, 2011)\n• 3 length features, including number of characters, words, and sentences\n• 2 lexical diversity features derived from the Measure of Textual Diversity (MTLD) (McCarthy and Jarvis, 2010)\n• 1 vocabulary control feature using n-gram models to measure idiomatic use of vocabulary\n• 1 relevance feature, computed using IDF weighted word embeddings between the prompt and the response (Rei and Cummins, 2016)",
"B Scoring Rubric": "Below are the criteria for each rating that were used in the rubric provided to human raters, and the system message prompts provided to ChatGPT (where applicable).\nC2 The response fully achieves the task requirements: (1) the response is clear, relevant, fully developed, and is written in an appropriate\nstyle (2) the response is smoothly-flowing, coherent, and cohesive throughout; (3) vocabulary (including collocations and idiomatic language) is accurate, appropriate, and precise; and (4) a wide range of grammatical structures are flexibly used, and there are no grammatical errors other than slips characteristic of expert speakers. Does the response have an excellent effect on the reader, such that the writer communicates their position/describes the image extremely effectively and in detail, there is no strain on the reader, and a very high level of language is used consistently throughout?\nC1 The response achieves the task requirements: (1) the response is clear, relevant, appropriately developed, and is written in an appropriate style (2) the response is well-structured, coherent, and cohesive; (3) vocabulary (including collocations and idiomatic language) is accurate, appropriate, and demonstrates a broad range; and (4) a wide range of grammatical structures are used, and grammatical errors are rare. Does the response have a very good effect on the reader, such that the writer communicates their position/describes the image clearly and effectively at some length, with a high level of language used consistently throughout other than minor lapses which do not impact the communicative effect?\nB2 The response mostly achieves the task requirements: (1) the response is mostly clear, relevant, developed, and written in an appropriate style (2) the response is generally wellstructured, coherent, and cohesive despite occasional lapses; (3) vocabulary (including collocations and idiomatic language) is generally accurate and appropriate to the task; and (4) a range of grammatical structures are used, and grammatical errors usually do not impact communication. Does the response have a good effect on the reader, such that the writer communicates their position/describes the image fairly clearly and with some detail, with a level of language that allows them to successfully complete the task despite inaccuracies?\nB1 The response partially achieves the task requirements: (1) the response is not always clear, relevant, developed, or written in an appropriate style (2) the response is somewhat organized\nbut may lack coherence or cohesion at times; (3) vocabulary (including collocations and idiomatic language) is generally clear but limited; and (4) a limited range of grammatical structures are used with some errors which may impact communication. 
Does the response have a satisfactory effect on the reader, such that the writer communicates their position/describes the image despite lapses, with a level of language that allows them to generally complete the task despite errors?\nA2 The response minimally achieves the task requirements and may be somewhat off-topic or under-length: (1) the response is limited to simple descriptions/personal opinions and topics and may be unclear, irrelevant, or written in an inappropriate style or format; (2) the response uses some simple cohesive devices but may be repetitive or incoherent at times; (3) vocabulary is limited and often inaccurate or unclear; and (4) grammar structures are basic and there are frequent errors which may impact communication. Does the response have a poor effect on the reader, such that the writer communicates only basic impressions or opinions/a basic description, with a level of language that allows them to only minimally complete the task despite numerous errors?\nA1 The response does not achieve the task requirements and may be off-topic or very under-length: (1) the response is limited to simple personal information and does not present a position/describe the image. Ideas are often unclear or irrelevant. (2) the response does not demonstrate organizational features and is composed of isolated phrases and sentences; (3) vocabulary is very limited, inaccurate, and is insufficient for the task; and (4) only basic grammatical structures are produced and errors predominate. Does the response have a very poor effect on the reader, such that the writer does not communicate a relevant position/adequately describe the image, with a level of language that does not allow them to successfully complete the task?\nNo-Response There is no response, it is very minimal, or the test-taker indicates that they cannot answer the question (e.g., “I don’t understand”, “Sorry my English is bad”, etc.).\nNonsense/Off-Topic The test-taker does not respond to the prompt in good faith, repeats the prompt without responding to it, or intentionally goes off-task in an attempt to “trick” the system (e.g., by writing random words, writing in a non-English language, writing random strings of letters, or giving a memorized off-topic response).",
"C GPT Prompts": "The wording and design of the prompts provided to GPT can affect its performance. In this appendix, we provide the exact details of each prompt we used.\nFor our purposes, there are two components to the GPT prompts: the system message and the conversation turns. The system message tells ChatGPT the role it is playing in the conversation, and helps set its behavior during the interaction. For the system messages, we used two different messages, depending on whether the rubric was provided or not.\nWhen providing a minimal rubric to GPT without asking for a rationale, we used the following message:\nYou are a rater for writing responses on a high-stakes English language exam for second language learners. You\nwill be provided with a prompt and the test-taker’s response.\nRatings are based on the CEFR scale. Each rating should be one of the following: [A1], [A2], [B1], [B2], [ C1], [C2], [Nonsense/Off-Topic], or [No-Response].",
"You should assign a [No-Response] rating": "if: - There is no response to assess. - There is no or very minimal response. - The test-taker indicates they cannot\nanswer the question (e.g., I don’t understand, Sorry my English is bad, etc.).\n\nif: - There is no response to assess. - There is no or very minimal response. - The test-taker indicates they cannot\nanswer the question (e.g., I don’t understand, Sorry my English is bad, etc.).",
"You should assign a [Nonsense/Off-Topic]": "rating if: - The test-taker is not responsive to the prompt in good faith: - The test-taker repeats the prompt but does not respond to it. - The test-taker intentionally goes offtask in some way to ’trick’ the system, e.g., by writing random words, writing in a non-English language, writing random strings of letters, or giving a memorized offtopic response.\nYou should reply to each response with just your rating: do not explain or justify it.\nWhen the rubric was provided to GPT, we used the message below, which adds the descriptions for each CEFR level. We used the same descriptions as defined in Appendix B, so we elide them here, replacing them with a comment between angled brackets <>, for brevity. You are a rater for writing responses on\na high-stakes English language exam for second language learners. You\nwill be provided with a prompt and the test-taker’s response.\nRatings are based on the CEFR scale. Each rating should be one of the following: [A1], [A2], [B1], [B2], [ C1], [C2], [Nonsense/Off-Topic], or [No-Response].\n\nrating if: - The test-taker is not responsive to the prompt in good faith: - The test-taker repeats the prompt but does not respond to it. - The test-taker intentionally goes offtask in some way to ’trick’ the system, e.g., by writing random words, writing in a non-English language, writing random strings of letters, or giving a memorized offtopic response.\nYou should reply to each response with just your rating: do not explain or justify it.\nIn both cases, we explicitly instructed GPT not to explain or justify its responses, to ensure that a definitive rating that could be parsed and used in the evaluation would be provided. When we experimented with requesting rationales as described in Experiment 2, we replaced the last line with the following: You should reply to each response with\nyour rationale and rating in the following format:",
"Scoring Criteria:": "For each CEFR rating, there is a description which addresses relevant aspects of language related Content, Discourse, Vocabulary, and Grammar. When assigning a score, the overall holistic impression should be\nconsidered it is not necessary for a test=taker to achieve all of the positive characteristics of a grade as long as overall the descriptor is the best match.",
"Rating: [C2] Description: <See description in": "Appendix A above>\n<Repeated for ratings C1 - A1>",
"Rating: [<<<Your rating here.>>>]": "The conversation turns were used to provide GPT with the essay to be rated, and to elicit a rating. It was also used to provide GPT with calibration examples, when applicable. In both cases, we used the same format.\nThe user message provides the essay prompt and the test-taker’s response. As recommended by OpenAI, both are surrounded in triple-quotes. Prompt: \"\"\" <Essay prompt placed here.> \"\"\"\nResponse: \"\"\" <Essay response placed here.> \"\"\"\nThe assistant response message following each user message would simply contain the rating in square brackets (e.g., [B2] or [Nonsense/Off-Topic]). In most cases, GPT would prefix its response with Rating:, which we simply dropped."
}
ACL_23_no_limitation/ACL23_1253.json
ADDED
@@ -0,0 +1,15 @@
{
"File Number": "1253",
"Title": "Towards automatically extracting morphosyntactical error patterns from L1-L2 parallel dependency treebanks",
"abstractText": "L1-L2 parallel dependency treebanks are UDannotated corpora of learner sentences paired with correction hypotheses. Automatic morphosyntactical annotation has the potential to remove the need for explicit manual error tagging and improve interoperability, but makes it more challenging to locate grammatical errors in the resulting datasets. We therefore propose a novel method for automatically extracting morphosyntactical error patterns and perform a preliminary bilingual evaluation of its first implementation through a similar example retrieval task. The resulting pipeline is also available as a prototype CALL application.",
"1 Introduction": "L1-L2 parallel dependency treebanks are corpora where sentences produced by learners of a second language (L2), paired with native-like (L1) correction hypotheses, are annotated following the Universal Dependencies (UD) standard (Nivre et al., 2020). This data format, proposed by Lee et al. (2017), has interoperability as its main goal: UD provides a uniform annotation layer across different languages and its fine-grained morphosyntactical analysis is meant to make explicit error tagging unnecessary, preventing the incompatibilities that arise from the use of project-specific taxonomies. In addition, the availability of increasingly reliable dependency parsers can significantly speed up, if not completely automate, the annotation process.\nPutting L1-L2 treebanks into use, however, requires effective ways to extract information from them. Errors, explicitly marked in most learner corpora, are for instance not straightforward to identify in such datasets. In this paper, we report on ongoing work on this problem, focusing on morphosyntax. In particular, we propose a novel approach to locate error-correction pairs and convert them into machine-readable error patterns, which can serve as a starting point for a variety of tasks, includ-\ning explainable automatic error classification and controlled feedback comment generation.\nWe put a first implementation of this method to the test through an example retrieval task where patterns extracted from a set of example sentencecorrection pairs are used to find similar errors in an L1-L2 treebank. An interactive version of the resulting system is also made available as a prototype Computer-Assisted Language Learning (CALL) application, similar to Arai et al. (2019)’s corpus search tool for L2 Japanese learners.1",
"2 Related work": "Standardizing and automating the annotation of learner corpora is desirable for a variety of purposes. Notable in this sense is ERRANT (Bryant et al., 2017), an automatic ERRor ANnotation Toolkit for learner English whose principal aim is allowing finer-grained evaluation of Grammatical Error Correction (GEC) and Detection (GED) systems. ERRANT extracts edit operations from learner sentence-correction pairs. Each edit is later labelled following an error taxonomy relying solely on dataset-agnostic information such as the POS (Part Of Speech) tag of the tokens involved.\nWith L1-L2 parallel UD treebanks, there is no explicit error annotation step: the idea is that morphosyntactical annotation should suffice, as error can be described by means of tree patterns pairs, comparing the original learner attempt with its target L1 counterpart (Lee et al., 2017). When it comes to retrieving instances of specific patterns of error, a query engine was developed by Masciolini (2023). Choshen et al. (2020), on the other hand, used UD-annotated parallel data to automatically derive SERCL, a new taxonomy of Syntactic ERrors for automatic CLassification, later combined with ERRANT’s under the name of SERRANT (Choshen et al., 2021). SERCL error types\n1Our software is available for download at github.com/ harisont/L2-UD (accessed 31.05.2023).\n585\nare obtained by concatenating the morphosyntactical features of the head of a problematic text segment before and after correction. The results are labels such as ADJ→ADV (adjective replaced by adverb), applicable for instance to the example in Figure 1. Choshen et al. (2020)’s system, as well as the query tool, has been tested both on manually annotated treebanks and on automatically parsed sentences, with results suggesting the standard parsers’ relative robustness to learner errors.\nQuerying parallel UD treebanks and using them to automatically derive data-driven error taxonomies are two tasks closely related but not identical to what we attempt in this paper. As opposed to searching for specific error types, we try to detect all errors appearing in an L1-L2 treebank, and rather than classifying them according to a flat labelling scheme we aim at obtaining fine-grained descriptions of each, in the form of patterns meant for further processing.",
"3 Methodology": "We see error pattern extraction as a two-stage process. Given a learner sentence and the corresponding correction, the first step, discussed in Section 3.1, is locating its problematic portions to extract error-correction pairs. As per Section 3.2, the latter are then converted into machine-readable patterns.",
"3.1 Locating error-correction pairs": "A simple way to locate errors in a pair of sentences is to phrase- and/or word-align them and consider as erroneous all correspondences presenting any discrepancies between their L1 and L2 components. If the goal is to only select errors belonging to a specific macro-category, the task of deciding whether a discrepant alignment is relevant or not becomes less straightforward. In this case, we are mostly interested in morphosyntax, for which UD annotation is particularly informative. At this stage, however, we assume our data to only contain this type of errors and focus on alignment alone.\nThat of alignment is a problem common to all the works mentioned in Section 2. To extract edits, ERRANT uses a linguistically-enhanced L1-L2\nalgorithm (Felice et al., 2016). While reportedly achieving state-of-the-art results, its implementation is English-specific. Choshen et al. (2020), on the other hand, work in a bilingual setting. The paper leaves the details of the alignment step unspecified, but from a superficial inspection of the source code it appears that the same method, along with an ad-hoc adaptation to Russian, is used.\nSince our aim is to work cross-lingually, we adopt the same approach as Masciolini (2023), consisting in extracting correspondences between UD subtrees using the CONCEPT-ALIGNMENT package (Masciolini and Ranta, 2021). Originally developed for the syntax-based extraction of translation equivalents from multilingual parallel UD treebanks, the library is completely language-agnostic at its core, and its alignment rules can be easily customized to better suit the L1-L2 domain.\nFurthermore, extracting subtrees rather than text spans ensures some degree of flexibility in determining how much context to extract for a given error. Depending on the use case, error-correction pairs can consist either of just the tokens involved in the corresponding edit operation, similarly to what is done in SERRANT, or of larger segments, useful to understand why the edit is required. In Figure 1, for instance, both the adverb slowly and the adjective slow (resp. långsamt and långsam) are acceptable forms, if taken in isolation: adjectives are only marked as incorrect because they modify a verb. For each detected error, our extraction module produces patterns of various sizes. From the perspective of example retrieval, in fact, smaller patterns are more likely to generate hits, but larger ones result in better matches.",
"3.2 From CoNLL-U trees to error patterns": "Alignments, and therefore errors, are internally represented as pairs of rose trees, tree structures with a variable, unbounded number of children per node. While this representation can be easily converted back into CoNNL-U format, which is itself machine-readable, complete UD sentences are too information-rich for most practical purposes and not as easy to manipulate as a recursive data struc-\nture. We therefore describe errors using a UD query language. Among several existing options, we selected the pattern matching language available as part of GF-UD (Kolachina and Ranta, 2016; Ranta and Kolachina, 2017), the easiest to integrate with the rest of the codebase.\nUD patterns GF-UD essentially provides three types of patterns:2\n• single-token patterns, such as POS \"ADJ\", matching subtree roots. With a similar syntax, it is possible to pattern match based on the token’s XPOS, DEPREL, FEATS, FORM or LEMMA, each corresponding a CoNNL-U field3; • tree patterns in the form TREE p [ps], where p is a pattern to be matched by the root of a subtree and [ps] a list of patterns denoting its dependents. TREE (POS \"NOUN\") [DEPREL \"amod\"], for instance, matches nouns modified by an adjective; • sequence patterns like SEQUENCE [DEPREL \"amod\", POS \"NOUN\"], matching nouns preceded by an adjectival modifier.\nIn addition, the language allows combining patterns with the logical operators AND, OR and NOT and provides a TRUE pattern matching any subtree.\nFollowing Masciolini (2023), we use pairs of these UD patterns to describe the discrepancies between L1 and L2 trees. As a consequence, a way to describe the error in Figure 1 on the basis of POS tags is the following:4\n⟨TREE_ (POS \"VERB\") [POS \"ADV\"], TREE_ (POS \"VERB\") [POS \"ADJ\"]⟩\nHere, the first pattern denotes the correct form and the second the erroneous learner attempt. This can be written even more concisely as TREE_ (POS \"VERB\") [POS {\"ADV\"→\"ADJ\"}]\nThis means that, to modify a verb, the learner used an adjective rather than an adverb. If we focus on the edit operation only, we obtain the pattern POS {\"ADV\"→\"ADJ\"}\nequivalent to SERCL/SERRANT’s ADJ→ADV. 2For the full specification of the GF-UD pattern syntax, see github.com/GrammaticalFramework/ gf-ud/blob/master/doc/patterns.md (accessed 19.04.2023).\n3For more information about the UD standard, see universaldependencies.org (accessed 31.05.2023).\n4Underscored TREE_ patterns match even trees having dependents other than those explicitly listed, like Figure 1’s.\nConverting alignments to tree pattern pairs, which have the same recursive structure, is extremely simple. The same can be said of sequence patterns, since GF-UD also provides a list-like data type to represent UD sentences and functions to convert between the latter and rose trees. The most straightforward approach, however, yields “full” UD patterns that are excessively specific. For this reason, we develop various simplification strategies producing more general, yet informative patterns.\nSimplification strategies A first, simple strategy, is to filter patterns by CoNNL-U field. This was already exemplified above when only considering Universal POS tags. A less strict options is to take into account all morphosyntactically relevant fields (FEATS, DEPREL, POS and possibly XPOS). A way to achieve further simplification is to remove fields whose values are identical in both components of the patterns. 
Another approach is to recursively compare the L1 and L2 sides of an error pattern and eliminate identical subpatterns. In addition, it is possible to simplify single (monolingual) patterns in various ways, for instance by transforming sequence patterns of length 1 and tree patterns with empty dependent lists into single-token patterns. Appendix A demonstrates the application of these strategies to the example in Figure 1. With example retrieval in mind, we apply all strategies, in sequence, to each extracted pattern, without discarding the intermediate results. This maximizes the chance of finding relevant examples while laying the foundation for ranking the results.",
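The field-level simplification strategies can be pictured with a small sketch, assuming (contrary to the real recursive GF-UD patterns) that a pattern node is just a dict of CoNLL-U fields for each side.

```python
MORPHOSYNTAX_FIELDS = {"POS", "FEATS", "DEPREL"}  # XPOS could be added too

def filter_fields(l1_node, l2_node, keep=MORPHOSYNTAX_FIELDS):
    """Strategy 1: keep only morphosyntactically relevant fields."""
    return ({f: v for f, v in l1_node.items() if f in keep},
            {f: v for f, v in l2_node.items() if f in keep})

def drop_identical(l1_node, l2_node):
    """Strategy 2: drop fields whose values agree on both sides,
    leaving only the {L1 -> L2} discrepancies."""
    return {f: (l1_node[f], l2_node.get(f))
            for f in l1_node if l1_node[f] != l2_node.get(f)}

# The ADV/ADJ error of Figure 1 then reduces to
# {'POS': ('ADV', 'ADJ'), 'DEPREL': ('advmod', 'amod')}
```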
"4 Preliminary evaluation": "We carry out a first evaluation of our method through an example retrieval task. In particular, we try to find occurrences of errors similar to those extracted from a given sentence-correction pair in an L1-L2 treebank. Implementation-wise, this is done by combining our error extraction module with Masciolini (2023)’s query engine: run on an input pair, the extraction procedure returns one or more patterns, in turn used to query the treebank.\nWe make an interactive version of such error retrieval pipeline also available as a prototype CALL application, analogous to the incorrect example retrieval tool presented in Arai et al. (2019). In this case, input sentences are entered as text and parsed on the fly using UDPipe’s REST API.5\n5lindat.mff.cuni.cz/services/udpipe/ api-reference.php (accessed 31.05.2023).",
"4.1 Data": "While the final iteration of our extraction method will be meant for authentic learner data, we carry out this first evaluation on two datasets for linguistic acceptability judgments composed of minimal correct-incorrect sentence pairs isolating specific linguistic phenomena, i.e. where the incorrect element contains a single grammatical error. In this way, we postpone dealing with the complexities that can arise from the simultaneous presence of several errors involving the same tokens. We simplify the task further by filtering out sentences containing errors beyond mere morphosyntax, such as incorrect lexical choices and spelling mistakes, for which automatic UD annotation is less informative and potentially misleading.\nBLIMP The Benchmark of LInguistic Minimal Pairs (BLIMP) (Warstadt et al., 2020), developed for evaluating the linguistic knowledge of language models, is a dataset consisting of 67 subsets, each containing 1 000 correct-incorrect sentence pairs exemplifying a specific error type or paradigm. Examples are artificially generated based on linguistcrafted templates and subsets are organized in 12 groups on the basis of the linguistic phenomenon they describe. Based on their metadata, we select lexically identical pairs marked as belonging to the fields of morphology or syntax and parse them with UDPipe 2 (Straka, 2018)’s default English model. The result is a parallel treebank of 14 996 sentences, 100 of which we set aside as inputs for the example retrieval pipeline. Specifically, we extract patterns from this 100-sentence subset and match them against the remaining 14 896 pairs to retrieve similar correct-incorrect examples.6\nDALAJ The DAtaset for Linguistic Acceptability Judgments (DALAJ) is, in turn, composed of L2 Swedish sentence-correction minimal pairs derived from the error-annotated SWELL SWEdish Language Learner corpus (Volodina et al., 2019) and therefore arguably closer to the data our system is being built for.7 SWELL uses a two-level error taxonomy: labels, such as M-Adj/adv, are composed of a capital letter, indicating the error’s macro-category (in this case, Morphology), followed by an abbreviation\n6The BLIMP splits used in this paper, as well as the preprocessing scripts, are available at github.com/harisont/ L1-L2-BLiMP/tree/bea (accessed 31.05.2023).\n7An early version of DALAJ, covering only lexical errors, is presented in Volodina et al. (2021).\nspecifying the affected POS and/or morphological features. The M-Adj/adv label, for instance, refers to Adjective forms corrected with the corresponding adverb, such as långsamt → långsam* in the example displayed in Figure 1. We select the 1 198 error-correction pairs belonging to the M and S macro-categories and process them analogously to BLIMP data, the only difference being the usage of a Swedish model.8",
"4.2 Results": "Ideally, quantitatively evaluating the performance of our system on the example retrieval task defined above would involve computing the precision and recall of each query performed with the extracted patterns. In practice, however, this is unfeasible in our current setup, as it would require manually inspecting all matches. While an identity of error labels between the input pair and a match is generally a good indication of a true positive, in fact, it is not at all always the case that different labels correspond to a false positive: the same error can sometimes be interpreted, and therefore labelled, differently. The Swedish word långsamt, for instance, is both an adverb (“slowly”) and the singular neuter form of the adjective långsam (“slow”), meaning that a phrase like ett {långsamt → långsam*} tempo (“a slow tempo”, where {långsamt → långsam*} modifies the neuter noun tempo) could, following the SWELL annotation guidelines (Rudebeck and Sundberg, 2021), be annotated both as M-Adj/adv and M-Gend. For similar reasons, counting actual false negatives is also challenging.\nInstead, for each dataset, we compute the retrieval rate R, i.e. the percentage of sentences for which the system was able to return one or more matches, regardless of their correctness, and compare it with the successful retrieval rate R+, where only sentences with at least one relevant match was found. Since we use search results as a proxy of the usefulness of the extracted patterns rather than to assess the performance of the query engine, we deem this to be sufficient for a first evaluation. Results are summarized in the table below.\nBLIMP DALAJ R 82% 69% R+ 82% 63%\n8The DALAJ splits used in this paper, as well as the preprocessing scripts, are available at github.com/harisont/ L1-L2-DaLAJ/tree/bea (accessed 31.05.2023).\nFigures for BLIMP, whose data is controlled and finely categorized by paradigm, were obtained fully automatically by checking whether one or more of the retrieved examples belonged to the same subset. DALAJ matches, on the other hand, still required manual inspection due to the dataset’s coarser-grained labelling scheme and the scarcer predictability of the sentences. More specifically, we checked the search results of each query looking for relevant matches, defined, for the sake of this evaluation, as examples presenting an error similar to that of the input pair, regardless of the degree of specificity and granularity of the extracted pattern(s). Given the input de blev {utsatta → utsattad*} på två olika sätt (“they were exposed in two different ways”, where the adjective utsattad*, \"exposed\", is incorrectly inflected for number), for instance, this implied considering the sentences {promenader → promenad*} är bra för människors hälsa (“{walks → walk*} are good for people’s health”, where the number inflection error involves the noun) and vi är {glada → glad*} varje dag (“we are happy every day”, where the incorrectly inflected word is again an adjective, glad ) even though only the latter involves the same POS9. While results are encouraging for both datasets, we observe a marked difference between the two in terms of retrieval rate. 
Several different factors might contribute to this: the difference in size between the two corpora, the fact that all pairs we selected from BLIMP, but not from DALAJ, are lexically identical, and some intrinsic characteristics of the BLIMP dataset, such as the template-based method used to generate its sentences.\nIn cases where no or exclusively incorrect matches are found, failures may also be caused by parse errors, issues related to the query engine or, especially when it comes to the smaller Swedish treebank, merely by a lack of similar examples in the corpus. In such instances, we investigate further by inspecting the UD trees and extracted patterns. When it comes to BLIMP data, pairs with no matches belong in all but one case to the island effects group, comprising word order errors related to wh-words, such as Whose {hat should Tonya wear → should Tonya wear hat*}? Unsurprisingly, errors of this kind pose a challenge for the parser and are therefore often incorrectly aligned.\nWord order errors are problematic in Swedish too, but even other syntactical errors, most notably S-Clause (change of basic clause structure), S-MSubj (missing subject) and M-Adj/adv10 (adjective corrected to adverb form, as in Figure 1) appear to cause issues at the parsing stage, especially when corrections involve complex rephrasings and/or lexical changes. Morphological errors involving nonexistent word forms are also often handled incorrectly. An example of that is the Swedish L2 sentence Kommunikationen hade dittills skett via brev, och brevutdelning fick man fem {gånger → gångar*} om dagen (“Communication had until then taken place by mail, and letters were delivered five times a day”), where gångar is an incorrect plural form of the noun gång, corrected to gånger. In such cases, the morphological analysis of the L2 form is identical to that of the L1 form and the only usable patterns are those preserving lexical information, for which finding treebank matches is less likely.\n9See Appendix B for a similar example, where the same sentence matches two patterns of different sizes.\n10Even though SWELL classifies this as a morphological error, it is syntactical from a UD perspective.",
"5 Conclusions and future work": "We presented a novel approach for extracting morphosyntactical error patterns from L1-L2 parallel UD treebanks and put it to the test through an example retrieval task. While performed on datasets for linguistic acceptability judgments rather than authentic learner data, our preliminary evaluation gave promising results and provided helpful insights for the further development of the tool.\nFuture work on the extraction method itself will focus on handling nonexistent word forms and dealing with the complexity of actual L2 data. Realworld L2 texts come with two main challenges: handling non-morphosyntactical errors, such as spelling mistakes and incorrect lexical choices, and isolating each of the grammatical errors occurring in the same sentence. We mentioned that our system extracts patterns of different sizes and at varying degrees of simplification, whose usefulness depends on the use case. This drives us to also investigate pattern selection and ranking. The latter, together with a more user-friendly interface, could contribute to the improvement the example retrieval pipeline to better suit the learners’ needs. Further improvements will require addressing the L2 parsing issues identified through the our preliminary evaluation, for instance by fine-tuning a UDPipe model on L2 data, and possibly intervening on the alignment step.\n10Even though SWELL classifies this as a morphological error, it is syntactical from a UD perspective.",
"A Application of simplification strategies": "Input correct-incorrect sentence pair: ⟨I write slowly, I write slow⟩.\n0. largest complete extracted error pattern: TREE\n(AND [ FORM \"write\", LEMMA \"write\", POS \"VERB\", XPOS \"VBP\", FEATS \"Mood=Ind|Number=Sing|Person=1|Tense=Pres|VerbForm=Fin\", DEPREL \"root\"]) [AND [ FORM \"I\", LEMMA \"I\", POS \"PRON\", XPOS \"PRP\", FEATS \"Case=Nom|Number=Sing|Person=1|PronType=Prs\", DEPREL \"nsubj\"],\nAND [ FORM {\"slowly\" → \"slow\"}, LEMMA {\"slowly\" → \"slow\"}, POS {\"ADV\" → \"ADJ\"}, XPOS {\"RB\" → \"JJ\"}, FEATS \"_\", DEPREL {\"advmod\" → \"amod\"}]]\n1. filtering by CoNNL-U field, keeping only morphosyntax-related fields (UPOS, FEATS and DEPREL): TREE\n(AND [ POS \"VERB\", FEATS \"Mood=Ind|Number=Sing|Person=1|Tense=Pres|VerbForm=Fin\", DEPREL \"root\"]) [AND [ POS \"PRON\", FEATS \"Case=Nom|Number=Sing|Person=1|PronType=Prs\", DEPREL \"nsubj\"],\nAND [ POS {\"ADV\" → \"ADJ\"}, FEATS \"_\", DEPREL {\"advmod\" → \"amod\"}]]\n2. removal of fields whose values are identical everywhere in both the L1 and L2 component: TREE\n(AND [POS \"VERB\", DEPREL \"root\"]) [AND [POS \"PRON\",DEPREL \"nsubj\"], AND [POS {\"ADV\" → \"ADJ\"},DEPREL {\"advmod\" → \"amod\"}]]\n3. elimination of identical subpatterns: TREE (TRUE) [TRUE, AND [POS {\"ADV\" → \"ADJ\"},DEPREL {\"advmod\" → \"amod\"}]]\n4. monolingual single-pattern simplifications: AND [POS {\"ADV\" → \"ADJ\"},DEPREL {\"advmod\" → \"amod\"}]\nB Example program output11\nInput correct-incorrect sentence pair: ⟨jag skriver långsamt, jag skriver långsam⟩. Sentence 391\nL1 sentence L2 sentence\nFör det andra kommer studenterna ibland så tidigt så de måste vänta i en korridor istället för att vänta på ett café och dricka kaffe eller te . För det andra kommer studenterna ibland så tidig så de måste vänta i en korridor istället för att vänta på ett café och dricka kaffe eller .\nSentence 395\nL1 sentence L2 sentence\nNär man inte har någon bil , får man promenera till jobbet eller ta bussen ; Det går inte så snabbt , och man måste planera lite mer , men det är naturligt för oss . När man inte har någon bil , får man promenera till jobbet eller ta bussen ; Det går inte så snabb , och man måste planera lite mer , men det är naturligt för oss .\nSentence 684\nL1 sentence L2 sentence\nOch just nu känns vårt liv jättebra . Och just nu känns våras liv jättebra . Och just nu känns vårt liv jättebra . Och just nu känns våras liv jättebra .\nSentence 459\nL1 sentence L2 sentence\nPå senare år har engelskan kommit att få en allt starkare ställning internationellt och också i Sverige .\nPå senare år har engelskan kommit att få en allt starkare ställning internationell och också i Sverige .\nPå senare år har engelskan kommit att få en allt starkare ställning internationellt och också i Sverige .\nPå senare år har engelskan kommit att få en allt starkare ställning internationell och också i Sverige .\nSentence 436\nL1 sentence L2 sentence\nJag är väldigt glad över det eftersom jag tycker att det finns för många människor , speciellt barn , som ser kläder som en statussymbol och köper dem även om de har inte tillräckligt med pengar . 
Jag är väldigt glad över det eftersom jag tycker att det finns för många människor , speciell barn , som ser kläder som en statussymbol och köper dem även om de har inte tillräckligt med pengar .\n11 Results obtained on the DALAJ treebank with the latest version of the interactive example retrieval pipeline (example command of L2-UD, run with the -markdown option), with commit SHA 9a1ec851313a4c3176826c77aa677e94158c3519. As is to be expected, some sentences match several of the extracted patterns. While seemingly identical matches have been manually removed for the sake of compactness, highlighting clearly shows that sentences like 459 match not only the single-token POS {\"ADV\"→\"ADJ\"} pattern, but also the more specific TREE_ (POS \"VERB\") [POS {\"ADV\"→\"ADJ\"}] pattern and could therefore be ranked higher.\nSentence 1017\nL1 sentence L2 sentence\nMen i Sverige går det bättre för bönderna ! Men i Sverige går det bästa för bönderna !\nSentence 437\nL1 sentence L2 sentence\nOm man skulle välja att gå emot normen så skulle det leda till utanförskap , vilket är någonting jag inte tror att någon vill uppleva , och därför väljer jag att klä mig likadant som de andra på mitt jobb . Om man skulle välja att gå emot normen så skulle det leda till utanförskap , vilket är någonting jag inte tror att någon vill uppleva , och därför väljer jag att klä mig likadan som de andra på mitt jobb . Om man skulle välja att gå emot normen så skulle det leda till utanförskap , vilket är någonting jag inte tror att någon vill uppleva , och därför väljer jag att klä mig likadant som de andra på mitt jobb . Om man skulle välja att gå emot normen så skulle det leda till utanförskap , vilket är någonting jag inte tror att någon vill uppleva , och därför väljer jag att klä mig likadan som de andra på mitt jobb .\nSentence 420\nL1 sentence L2 sentence\nDet finns säkert en del som undrar varför de finska ungdomarna obligatoriskt ska läsa svenska i finska skolor när endast cirka sex procent av befolkningen läser svenska som modersmål . Det finns säker en del som undrar varför de finska ungdomarna obligatoriskt ska läsa svenska i finska skolor när endast cirka sex procent av befolkningen läser svenska som modersmål . Det finns säkert en del som undrar varför de finska ungdomarna obligatoriskt ska läsa svenska i finska skolor när endast cirka sex procent av befolkningen läser svenska som modersmål . Det finns säker en del som undrar varför de finska ungdomarna obligatoriskt ska läsa svenska i finska skolor när endast cirka sex procent av befolkningen läser svenska som modersmål .\nSentence 407\nL1 sentence L2 sentence\nAndra punkten : Vi behöver biblioteket för att där finns böcker på olika språk , specifikt mitt modersmål .\nAndra punkten : Vi behöver biblioteket för att där finns böcker på olika språk , specifik mitt modersmål .\nSentence 392\nL1 sentence L2 sentence\nDet är viktigt för mig när jag behöver ta det lite lugnt och göra mina läxor , och det är viktigt för mig att prata svenska med en svensk person och lära mig många nya ord .\nDet är viktigt för mig när jag behöver ta det lite lugna och göra mina läxor , och det är viktigt för mig att prata svenska med en svensk person och lära mig många nya ord .\nSentence 425\nL1 sentence L2 sentence\nHistorier som från början bara var muntligt berättade tar idag alla tänkbara former och förekommer som musik , teater , romaner , serier , filmer och spel . 
Historier som från början bara var muntlig berättade tar idag alla tänkbara former och förekommer som musik , teater , romaner , serier , filmer och spel .\nSentence 429\nL1 sentence L2 sentence\nI boken ” Stjärnlösa nätter ” så ser man tydligt hur en hatkärlek kan påverka en människas liv både negativt och positivt . I boken ” Stjärnlösa nätter ” så ser man tydligt hur en hatkärlek kan påverka en människas liv både negativ och positivt .\nSentence 984\nL1 sentence L2 sentence\nDär sitter jag med min familj och äter , sjunger , dansar , skrattar , leker och studerar . . . I hemmet kommer jag jättenära min son och jag kan lära honom mycket om livet och hur han kan bli bra person .\nDär sitter jag med min familj och äter , sjunger , dansar , skrattar , leker och studerar . . . I hemmet kommer jag jättenärmare min son och jag kan lära honom mycket om livet och hur han kan bli bra person .\nSentence 401\nL1 sentence L2 sentence\nJag tycker att buss är bättre än bil eftersom det är lättare att använda buss än bil , för alla människor , särskilt de fattiga , kan använda buss som de vill . Jag tycker att buss är bättre än bil eftersom det är lättare att använda buss än bil , för alla människor , särskild de fattiga , kan använda buss som de vill .\nSentence 457\nL1 sentence L2 sentence\nJag lärde mig att om saker inte går bra för dig ska du vara modig och ta det lugnt , det kommer att bli bättre , ge bara aldrig upp ! Jag lärde mig att om saker inte går bra för dig ska du vara modig och ta det lugn , det kommer att bli bättre , ge bara aldrig upp ! Jag lärde mig att om saker inte går bra för dig ska du vara modig och ta det lugnt , det kommer att bli bättre , ge bara aldrig upp ! Jag lärde mig att om saker inte går bra för dig ska du vara modig och ta det lugn , det kommer att bli bättre , ge bara aldrig upp !\nSentence 442\nL1 sentence L2 sentence\nDet finns olika sätt som man kan använda eller utrycka sig på för att kunna kommunicera med varandra , till exempel skrivet eller muntligt med hjälp av ord på en mängd olika språk .\nDet finns olika sätt som man kan använda eller utrycka sig på för att kunna kommunicera med varandra , till exempel skrivet eller muntlig med hjälp av ord på en mängd olika språk .\nSentence 421\nL1 sentence L2 sentence\nDetta leder till motstånd från landets folk som ser negativt på regeringens maktfullkomliga metod .\nDetta leder till motstånd från landets folk som ser negativ på regeringens maktfullkomliga metod .\nDetta leder till motstånd från landets folk som ser negativt på regeringens maktfullkomliga metod .\nDetta leder till motstånd från landets folk som ser negativ på regeringens maktfullkomliga metod .\nSentence 431\nL1 sentence L2 sentence\nHistorier som från början bara var muntligt berättade tar idag alla tänkbara former och förekommer som musik , teater , poesi , romaner , serier , filmer och spel .\nHistorier som från början bara var muntliga berättade tar idag alla tänkbara former och förekommer som musik , teater , poesi , romaner , serier , filmer och spel .\nSentence 458\nL1 sentence L2 sentence\nDet är inte så lätt att svara snabbt . Det är inte så lätt att svara snabb . Det är inte så lätt att svara snabbt . Det är inte så lätt att svara snabb .\nSentence 451\nL1 sentence L2 sentence\nMitt råd är att du måste ta det lugnt och fokusera , till exempel klä på dig fina kläder , det betyder inte smustiga kläder , eller du kan använda parfym , men inte så mycket . 
Mitt råd är att du måste ta det lugn och fokusera , till exempel klä på dig fina kläder , det betyder inte smustiga kläder , eller du kan använda parfym , men inte så mycket . Mitt råd är att du måste ta det lugnt och fokusera , till exempel klä på dig fina kläder , det betyder inte smustiga kläder , eller du kan använda parfym , men inte så mycket . Mitt råd är att du måste ta det lugn och fokusera , till exempel klä på dig fina kläder , det betyder inte smustiga kläder , eller du kan använda parfym , men inte så mycket .\nSentence 466\nL1 sentence L2 sentence\nJag personligen lägger inte medvetet så stor vikt vid kläder , kanske för att den miljö som jag lever i eller de människor som jag umgås med inte ser kläder som något betydelsefullt . Jag personligen lägger inte medveten så stor vikt vid kläder , kanske för att den miljö som jag lever i eller de människor som jag umgås med inte ser kläder som något betydelsefullt . Jag personligen lägger inte medvetet så stor vikt vid kläder , kanske för att den miljö som jag lever i eller de människor som jag umgås med inte ser kläder som något betydelsefullt . Jag personligen lägger inte medveten så stor vikt vid kläder , kanske för att den miljö som jag lever i eller de människor som jag umgås med inte ser kläder som något betydelsefullt .\nSentence 462\nL1 sentence L2 sentence\nAlla mina dagar gick så dåligt . Alla mina dagar gick så dålig .\nSentence 461\nL1 sentence L2 sentence\nSammanfattat har jag en föränderlig relation till kläder , men det viktigaste är att de möjliggör allt jag vill uppleva , från bergsvandring till fest .\nSammanfattad har jag en föränderlig relation till kläder , men det viktigaste är att de möjliggör allt jag vill uppleva , från bergsvandring till fest .\nSammanfattat har jag en föränderlig relation till kläder , men det viktigaste är att de möjliggör allt jag vill uppleva , från bergsvandring till fest .\nSammanfattad har jag en föränderlig relation till kläder , men det viktigaste är att de möjliggör allt jag vill uppleva , från bergsvandring till fest .\nSentence 390\nL1 sentence L2 sentence\nDessutom är det troligen kö då alla vill ha rast och kaffe samtidigt . Dessutom är det troliget kö då alla vill ha rast och kaffe samtidigt .\nSentence 387\nL1 sentence L2 sentence\nDet var ganska svårt först men jag är van och lärde mig själv hur man bor och anpassar sig i ett nytt land . Det var ganska svårt första men jag är van och lärde mig själv hur man bor och anpassar sig i ett nytt land .\nSentence 469\nL1 sentence L2 sentence\nTänk positivt istället så kommer du att hitta många betydelsefulla saker inom din familj . Tänk positiv istället så kommer du att hitta många betydelsefulla saker inom din familj . Tänk positivt istället så kommer du att hitta många betydelsefulla saker inom din familj . Tänk positiv istället så kommer du att hitta många betydelsefulla saker inom din familj .\nSentence 467\nL1 sentence L2 sentence\nDetta kan dock skapa svårigheter med att kunna förbereda och undervisa ungdomar tillräckligt .\nDetta kan dock skapa svårigheter med att kunna förbereda och undervisa ungdomar tillräckliga .\nDetta kan dock skapa svårigheter med att kunna förbereda och undervisa ungdomar tillräckligt .\nDetta kan dock skapa svårigheter med att kunna förbereda och undervisa ungdomar tillräckliga .\nSentence 410\nL1 sentence L2 sentence\nEfter några år visade inspektörerna rapporter om att det nog fanns lite kokain i coca cola , men tyvärr ville de inte kommunicera detta offentligt . 
Efter några år visade inspektörerna rapporter om att det nog fanns lite kokain i coca cola , men tyvärr ville de inte kommunicera detta offentlig . Efter några år visade inspektörerna rapporter om att det nog fanns lite kokain i coca cola , men tyvärr ville de inte kommunicera detta offentligt . Efter några år visade inspektörerna rapporter om att det nog fanns lite kokain i coca cola , men tyvärr ville de inte kommunicera detta offentlig .\nSentence 375\nL1 sentence L2 sentence\nHon lär ut svenska mycket snällt och fint . Hon lär ut svenska mycket snäll och fint .\nSentence 463\nL1 sentence L2 sentence\nJag hoppas kunna lära mig snabbt och börja söka jobb .\nJag hoppas kunna lära mig snabb och börja söka jobb .\nJag hoppas kunna lära mig snabbt och börja söka jobb . Jag hoppas kunna lära mig snabb och börja söka jobb ."
}
ACL_23_no_limitation/ACL23_1258.json
ADDED
@@ -0,0 +1,21 @@
{
"File Number": "1258",
"Title": "MultiQG-TI: Towards Question Generation from Multi-modal Sources",
"abstractText": "We study the new problem of automatic question generation (QG) from multi-modal sources containing images and texts, significantly expanding the scope of most of the existing work that focuses exclusively on QG from only textual sources. We propose a simple solution for our new problem, called MultiQG-TI, which enables a text-only question generator to process visual input in addition to textual input. Specifically, we leverage an image-to-text model and an optical character recognition model to obtain the textual description of the image and extract any texts in the image, respectively, and then feed them together with the input texts to the question generator. We only fine-tune the question generator while keeping the other components fixed. On the challenging ScienceQA dataset, we demonstrate that MultiQG-TI significantly outperforms ChatGPT with few-shot prompting, despite having hundred-times less trainable parameters. Additional analyses empirically confirm the necessity of both visual and textual signals for QG and show the impact of various modeling choices. Code is available at https://rb.gy/020tw",
"1 Introduction": "Automatic question generation has the potential to enable personalized education experiences for subjects such as reading comprehension at a large scale (Wolfe, 1976; Kokku et al., 2018; Zhang et al., 2022; Kulshreshtha et al., 2022) and improve standardized tests by reducing the costs and the test length (Burstein et al., 2021). Most, if not all, existing question generation (QG) methods operate only on text: they take a textual paragraph (Wang et al., 2018) or story (Xu et al., 2022) as input and generate a textual question. These methods’ focus on text-based QG is limiting, because many interesting questions can involve, or be generated from, multiple modalities such as images, diagrams, and tables, in addition to texts (Lu et al., 2022).\n∗Work done while at Rice University.",
"1.1 Contributions": "In this paper, we conduct, to our knowledge, the first investigation into the under-explored problem of multi-modal question generation (QG). Specifically, we study the following problem: given multimodal inputs containing both visual (e.g., an image) and textual (e.g., a textbook paragraph) information, we would like a model to output a textual question based on such multi-modal input. Note that the definition of visual input is very broad, e.g., it can be an image, a diagram, or a table in the image format. Although this multi-modal setting (image and text as input and textual question as output) is only a specific instance of multi-modality (one could consider using audio and video as input to generate questions, or generating questions with images in addition to texts), we argue that our setting is sufficiently broad and educationally meaningful. For example, many science questions ask about scientific phenomena, processes, and relationships commonly described in figures, diagrams, and tables (Talmor et al., 2021; Lu et al., 2022). We believe that our problem setting, illustrated in Figure 1, is an important first step toward more general multi-modal QG.\nWe propose a novel method, dubbed MultiQGTI, for generating textual questions from multimodal inputs of texts and images. The idea is simple: we enable a text-based question genera-\n682",
"Text-to-image module": "tor to “see” by feeding it visual information in the form of text. Specifically, we first use an off-theshelf image-to-text model and an optical character recognition (OCR) model to produce a textual description of the image and extract the texts in the image. We then fine-tune a text-based generative model to generate a question given the input text and the text extracted from the input image. These components are readily available and require no or minimal fine-tuning, making MultiQG-TI easy to use and efficient to train. Figure 2 presents a high-level overview of MultiQG-TI.\nWe demonstrate MultiQG-TI’s strong performance on the challenging ScienceQA dataset (Lu et al., 2022). For example, MultiQA-TI outperforms models using only texts or only images as input, demonstrating the necessity of including both texts and images as input in QG. MultiQA-TI also significantly outperforms ChatGPT in the few-shot in-context learning setting, demonstrating its competitiveness against much larger models. Finally, we analyze the factors that impact MultiQA-TI’s performance, including the choices of image-totext models and the sizes of the question generator model. We also provide generation examples to illustrate our method’s strengths and errors.",
"Question generation (QG) for education. QG": "models are often an integral component in personalized learning, intelligent tutoring systems, and assessment platforms to cheaply and scalably generate customized questions for each student (Le et al., 2014; Pan et al., 2019; Srivastava and Goodman, 2021; White et al., 2022). For example, prior research has developed models to generate a variety of questions including those based on fairytales (Xu et al., 2022; Zhao et al., 2022), factual questions (Heilman and Smith, 2010; Wang et al., 2018), and math word problems (Wang et al., 2021; Liu et al., 2021). Despite the rapid progress, most\nexisting work focuses on textual-based QG. The exciting frontier of automatic multi-modal QG remains under-explored.\nMulti-modal processing with text-only models. Our work is partially motivated by the recent line of work that demonstrate the possibility to use textonly models to perform visual-related tasks by feeding it text descriptors of the visual input. For example, Wang et al. (2022) enable large language models to perform video-related tasks such as event prediction by connecting them with image-to-text models. A few others take a similar approach to enable text-only models to perform captioning, reasoning, and question answering that involve videos or images (Yang et al., 2022, 2023; Hu et al., 2022). However, the utility of their approach for multimodal QG remains largely known.",
"2 The MultiQG-TI Methodology": "We now describe the four modules in MultiQG-TI: a question generator module, an image-to-text module, an optical character reconigion (OCR) module, and an input formatting module.\nThe question generator module. This module generates the question and is the only trainable module in MultiQG-TI. We adopt a text-based question generator such that its inputs must be all in text format. Adopting a text-based question generator enables us to choose from a wide range of pre-trained text-based generative models, whose training is also often more efficient than their multimodal counterparts. In this work, we instantiate the question generator with the recent Flan-T5 model (Chung et al., 2022) that have shown to perform strongly on new downstream tasks when fine-tuned on limited task-specific data.\nThe image-to-text and OCR modules. A textbased question generator cannot take any visual input. To solve this problem, we use the imageto-text and OCR modules to interface between the",
"Method BLEU METEOR ROUGE BLEURT": "image and text modalities and extract the visual information from the image format into a textual format appropriate as input for the text-based question generator. In particular, we use the image-totext module to describe the content in the image in texts, including any objects, scenes, actions, and events. We instantiate this module with the FlanT5-XXL version of BLIP-2 (Li et al., 2023). While the image-to-text module extracts visually rich signals, it often fails to recognize any text in the image. This is problematic if the majority of the content in the image is text, such as a table. Therefore, we complement the image-to-text module with an OCR module that specializes in extracting the texts in the image. We instantiate the OCR module in MultiQG-TI with PaddleOCR (Du et al., 2020).\nThe input formatting module. This module, g, is a simple function that concatenates the input text and the texts from the input image into one coherent textual input for the question generator model. There are many choices available and one can simply perform a string join operation. In this work, we apply input formatting with the following template: Generate a question based on the following information.",
"Background: {input_text}. Image:": "{image_description}. Texts in image: {image_text}.. In this template, {input_text}, {image_description}, and {image_text} are placeholders that will be replaced with the actual input text, the output from the image-to-text model and the output from the OCR module, respectively.\nTraining and inference. During training, we only update the parameters of the QG module while keeping the other modules fixed. We use the next word prediction as the training objective, which is commonly used in modern language model training (Vaswani et al., 2017). During inference, we proceed as follows: given an input image and text,\nwe first extract the text from the image using imageto-text module and the OCR module, then format them together with the input text, and finally feed the formatted texts to the fine-tuned QG module to generate a question.",
"3 Experiments": "Dataset. We use the ScienceQA dataset (Lu et al., 2022) throughout our experiments, which we preprocess and split into training, validation, and test splits. All results in this paper are reported on the test split. More details on the dataset and preprocessing steps are in Appendix A.1.\nBaselines. Because there are no prior work on automatic multi-modal QG, we use off-the-shelf model APIs and variants of MultiQG-TI as the baselines. Specifically, we use ChatGPT API (Ouyang et al., 2022) with zero-shot and in-context learning (Kaplan et al., 2020; Wei et al., 2022) with up to seven examples, each of which is formatted exactly the same as our preprocessed data points in the ScienceQA dataset. We also compare with MultiQG-TI with only a single modality as input (i.e., either only text or only image).\nevaluation. We choose four evaluation metrics including BLEU, METEOR, ROUGE, and BLEURT, all of which have been widely used in existing QG works. We report all results, except for those using ChatGPT API, based on the average of 4 random, independent runs. More details on the experiment setup, baselines, and evaluation are in Appendices A.2 and A.3.",
"3.1 Main quantitative results": "Table 1 summarizes the main results.1 These results clearly show that ChatGPT fails at the multimodal QG task in our setting. Although its performance steadily improves with more examples in the in-context learning setting, ChatGPT trails\n1For conciseness, we choose not to report standard deviations because all of them are quite small (around 0.002).\nMultiQG-TI by a gigantic margin. The comparison between ChatGPT and MultiQG-TI reminds one to be cautious when using ChatGPT in specialized tasks such as multi-modal QG and presents strong empirical evidence that a small, fine-tuned model is still highly relevant in certain generation tasks. Table 1 also demonstrate the benefits of including both the visual and textual information when generating questions because MultiQG-TI outperforms its variants with only textual or only visual input.",
"3.2 Analyses": "The choice of question generators. We study the impact of the model size of the QG module on the QG performance and summarize the results in Figure 3, where “small”, “medium”, and “large” represent the Flan-T5 variants of 80 million, 250 million, and 780 million parameters, respectively. The figure implies that a larger model generally leads to improved performance across all evaluation metrics. Notably, by fine-tuning only on a few thousand training examples with a modest-sized model, MultiQG-TI achieves high performance,2 making it appealing for practical use and deployment in resource-constrained settings.\nThe choice of image-to-text models. We also study the impact of the image-to-text models on\n2As a comparison, some of the latest QG works achieve a BLEURT score of up to 0.67; see the results of a recent QG competition: https://www. thequestchallenge.org/leaderboard\nthe QG performance and summarize the results in Table 3. Specifically, we compare BLIP2-FlanT5-XXL (11 billion parameters), the image-to-text model we use in MultiQG-TI, to three smaller variants ranging from 239 million to 2.7 billion, and 6.7 billion parameters, respectively. We observe that QG performance improves steadily but minimally after the model becomes larger than 2.7 billion parameters, although the largest model still wins modestly. These results imply that MultiQG-TI may retain the same level of competitiveness even with a smaller off-the-shelf image-to-text model, suggesting more resource-saving opportunities without compromising performance.\nQualitative examples. We show an example generated question by MultiQG-TI in Table 2, as well as additional ones in Appendix C. These examples further illustrates MultiQG-TI’s capability in generating fluent, coherent, and meaningful questions from multi-modal scientific contexts. We also provide an in-depth analyses of the errors that MultiQG-TI makes during generation, which we defer to Appendix C due to space constraint.",
"4 Conclusion": "We have conducted a first study into automatic multi-modal QG from images and texts. Our proposed solution, MultiQG-TI, is simple, easy-to-use, and highly capable, as evaluated and analyzed on the ScienceQA dataset. Our work opens a myriad of research opportunities. Some of the exciting future directions include: 1) QG with multi-modal inputs and multi-modal outputs; 2) end-to-end visionlanguage modeling approach for QG; and 3) evaluating and comparing the pedagogical utilities of questions generated from multi-modal sources in real-world educational scenarios.",
"A.1 Dataset and preprocessing": "Each data point in the ScienceQA dataset contains the question text, a background text, and an image. The total number of data points in the ScienceQA dataset is 21,208. We refer readers to Lu et al. (2022) for more details on the dataset. However, the background text and the image are optionally included. As a result, not all data points contain both the background text and the image. We only keep data points that contain all three elements, resulting in 5,942 data points. We further randomly split them into train, validation, and test splits, resulting in 3606/1204/1132 data points in the train/validation/test splits, respectively. For both the remaining texts and images, we did not perform further processing and keep them as-is before feeding them to the MultiQG-TI components that are responsible for processing them.\nWe note that the MultimodalQA dataset (Talmor et al., 2021) is also an appropriate dataset choice with rich multi-modal information beyond just texts and images. Because our present work focuses on image and text as input modalities, we leave more complex data modalities for QG for future work.",
"A.2 MultiQG-TI model details": "Image-to-text generation. We use contrastive sampling (Su et al., 2022) with the following parameters:3 α = 0.6 and k = 4, with a temperature of 1, n-gram penalty of 3, and minimum text description length of 30 tokens. For each given image, we sample 10 different text descriptions, rerank them by the image-to-text model’s perplexity, and choose the best description (with the lowest perplexity score) as the final text description for the image, which we will then send to the QG module, together with the OCR module’s output and the input background text.\nQG module training. We perform all training on a single NVIDIA Quadro RTX 8000 GPU. For all QG module variants that we consider, we use the same training setup. Specifically, we train it with a learning rate of 0.0003 for 8 epochs with early stopping if validation loss does not improve over the most recent 3 epochs. We use a batch size of 3 with a gradient accumulation step of 4, resulting in\n3See this blog post for an explanation of the different parameters that appear in contrastive sampling: https:// huggingface.co/blog/introducing-csearch\nan effective batch size of 12 (e.g., the parameters are updated every 12 training steps). We also clip the gradients to 1 to stabilize training. All these training procedures are standard in training text generative models.\nInference and evaluation. We use the same contrastive sampling strategy as in image-to-text generation. Additionally, we sample 10 generated questions, rerank them by perplexity, and fetch the bestranked sample as the final generated question for each input text-image pair in the test set. All evaluations are conducted on this “top-1” setting. For each individual run, we perform the above sampling strategy with a different seed to obtain a different set of generated questions for each input in the test set. We then perform the same evaluation on each generated set and then average the results, resulting in the averaged quantitative evaluations reported in the main paper.\nRemarks. MultiQG-TI leverages readily available, open-source tools to solve the new problem of multi-modal question generation. Its modular design makes it flexible and easily adaptable, enabling one to upgrade a component when a more capable one becomes available. Moreover, the only trainable component is the question generator. There are many choices available for this component, any of which can achieve competitive performance with relatively limited model sizes, making it suitable for low-resource training settings. An end-to-end multi-modal QG model is still methodologically interesting and we leave this as a future work.",
"A.3 ChatGPT baseline": "We use the gpt-3.5-turbo-0301 model API throughout our experiments. The system message we give to the model at the beginning of the API call is as follows: You are a helpful assistant. Your job is to generate a question, which consists of a question background/context and the question itself, given the user’s provided context information, which consists of an instruction, background, subject, topic, and category. Your answer should be in the following template: ’Question context: ... Question: ...’. After that, for zero-shot QG, we send the\ntemplated input background text, OCR extracted text from the input image, and the text description of the input image to the API, formatted exactly as what we would do for MultiQG-TI. For few-shot QG, we construct each example as a pair of input and output, where the input is the templated input consisting of the input text and texts extracted from the input image, and the output is the corresponding question text to the input text and image. We only perform generation once for each setting and for each input to avoid incurring higher costs of making OpenAI API calls.\nSelecting examples for in-context learning. We perform a basic cosine similarity search for each input context and image pairs. Specifically, we first encode each formatted input text (recall, it contains the input background text, the image description, and the texts in the image) as a vector using the SentenceTransformers.4. Then, for each formatted input in the test set, we perform a similarity search, computing its cosine similarity with every formatted input in the training set, and select up to seven most similar formatted input as the examples to be used in prompting ChatGPT in the few-shot in-context learning setting.",
"B Additional literature review": "The MultimodalQA dataset (Talmor et al., 2021) actually involves a cursory description of generating questions from multiple sources. However, the QG process described therein relies on human annotation, a manual process that cannot achieve automatic QG and therefore is neither a baseline to our work nor related to our goal of automatic QG.\nRecent research has demonstrated the impressive capabilities of models that can connect data from multiple modalities, such as generating images from texts (Ramesh et al., 2022) and vice versa (He and Deng, 2017). Specifically related to our work, recent advances in vision-language models (Alayrac et al., 2022; Li et al., 2023; OpenAI, 2023) enable models to converse with a user given both texts and images. However, most demonstrated use cases of these models are in casual dialogues (Li et al., 2023), image captioning (Hossain et al., 2019), and visual question answering (Antol et al., 2015). The utilities of these models for QG remain largely unknown.\n4https://www.sbert.net/",
"C Additional results": "Additional examples of generated questions. We provide additional generation examples in Table 4 for chemistry, physics, and biology, respectively. These examples corroborate with the one in the main text and demonstrate the capability of MultiQG-TI in generating reasonable questions from image and text inputs.\nQualitative generation error analysis. MultiQG-TI is not without problems. In Table 5, we provide an exemplary erroneous generated question to illustrate the typical problems that MultiQG-TI has when performing QG.\nIn our observation, there are two major sources of error. The first one comes from the mistakes cascaded from the image-to-text model. In the example in Table 5, the object in the image is dolerite, but the image-to-text model in MultiQG-TI recognizes it as granite, resulting in the image description “a black piece of granite on a white background”. As a result, the question generator, which generates the question conditioned on the image description, picks up the wrongly reconigized object “granite” and use it to generate a question on granite instead of on dolerite.\nThe second source of error comes from hallucination, a major bottleneck preventing language models from real-world, high-stake use scenarios (Ji et al., 2023). MultiQG-TI is not immune to this problem. In the example in Table 5, the question generator produces the phrase “pure substance”, which is neither a property of dolerite nor granite because both are mixtures.\nThese are challenging issues to tackle. For example, it is even difficult for a non-expert to identify the object in the image in Table 5. Similarly, it is difficult to verify the factual correctness of the generated question without resorting to external sources such as web search and textbooks. Reducing these errors would require improvements to the image-to-text model and mitigating hallucination in language models, both of which remain active areas of research."
}
ACL_23_no_limitation/ACL23_1262.json
ADDED
@@ -0,0 +1,19 @@
{
"File Number": "1262",
"Title": "Enhancing Educational Dialogues: A Reinforcement Learning Approach for Generating AI Teacher Responses",
"abstractText": "Reinforcement Learning remains an underutilized method of training and fine-tuning Language Models (LMs) despite recent successes. This paper presents a simple approach of finetuning a language model with Reinforcement Learning to achieve competitive performance on the BEA 2023 Shared Task whose goal is to automatically generate teacher responses in educational dialogues. We utilized the novel NLPO algorithm that masks out tokens during generation to direct the model towards generations that maximize a reward function. We show results for both the t5-base model with 220 million parameters from the HuggingFace repository submitted to the leaderboard that, despite its comparatively small size, has achieved a good performance on both test and dev set, as well as GPT-2 with 124 million parameters. The presented results show that despite maximizing only one of the metrics used in the evaluation as a reward function our model scores highly in the other metrics as well.",
"1 Introduction": "Controlling the output of Language Models is a challenging problem in the field of Natural Language Processing (NLP). Recently Reinforcement Learning (RL) has successfully been applied to the training and fine-tuning of Language Models. ChatGPT, based on InstructGPT (Ouyang et al., 2022a), makes use of Reinforcement Learning. Ramamurthy et al. (2023) have proposed the GRUE (General Reinforced-language Understanding Evaluation) benchmark that consists of a variety of different tasks, supervised by different Reward Functions to measure the quality of the trained models. The reported results on a variety show good results on a variety of tasks. Despite recent advances in applying RL to the training and fine-tuning of LMs and their wide applicability to different tasks and benchmarks this approach is still not widely applied.\nIn this paper we make use of Reinforcement Learning-based fine-tuning to tackle the BEA 2023 Shared Task (Tack et al., 2023). The goal of the task is the generation of teacher-like responses in an educational dialogue setting between a student and a teacher. This necessitates that the language model can mimic the tone and overall quality of the teacher response. We have employed an approach that pushes the generations of the model in the right direction through the use of BERTScore as a reward function and using Reinforcement Learning as our training strategy.\nOur model submission to the leaderboard is the implementation of the T5 model (Raffel et al., 2020) in the HuggingFace repository, t5-base with 220 million parameters. As the goal is to generate a response given an input dialogue we have chosen a sequence-to-sequence model. We follow the findings of Ramamurthy et al. (2023) who suggest that a small model with a high-quality reward function can match or outperform models with magnitudes of more parameters. For the training process we use the dialogue preceding the final teacher response as input and the final teacher response as the reference text. We achieve an average rank across all metrics of 5.38, out of 10 submissions, placing overall in seventh place on the leaderboard. For the DialogRPT maximum weighted ensemble metric our model achieves first place on the test set. We additionally present results for an autoregressive model. The chosen model is the base GPT-2 model from the HuggingFace repository with 124 million parameters. The autoregressive model outperforms our submitted model despite its smaller size in terms of parameters, suggesting that this model architecture may be more suitable for this task.",
"2 Related Work": "Ramamurthy et al. (2023) present results showing that Reinforcement Learning can be applied\n736\nsuccessfully in various NLP settings, including on the DailyDialog dataset (Li et al., 2017), which is similar in structure to the BEA task’s dataset. Liu et al. (2021) present an approach to make language model generations less politically biased using Reinforcement Learning. Toledo et al. (2023) demonstrate the viability of a Reinforcement Learning approach in text-based games. Notably they achieve improvements over the previous state of the art in this zero-shot setting. The task of aiding students is comparable due to the large number of possible topics and unforeseen behavior of students when interacting with either a human teacher or a machine teacher. While it is not specifically considered in this task and underrepresented in current research, likely due to the current state of research in this area, there is the possible danger of models becoming outdated in the future, possibly very quickly, as the world around us changes. A solution for this is of course to re-train the models on new data to update them, but a strong performance in a zero-shot setting circumvents this problem altogether, and Reinforcement Learning approaches show viability in this area.",
"3 Data": "The training data provided for the task by the organizers consists of 2747 samples of student-teacher dialogues from the Teacher Student Chatroom Corpus (Caines et al., 2020, 2022). There are always two speakers, a student and a teacher, and they take turns talking. Each of the samples contains one response. Each dialogue turn is prefixed with teacher: or student:, respectively. We use the full input dialogue as the input text, separating each speaker turn by newline. The reference text is the teacher response that follows the input dialogue. We used the t5-base model as well as the gpt2 model from HuggingFace and their respective tokenizers. Table 1 shows the lengths of the official training set released for the task.\nTo avoid potential issues or the need to cut off samples from the test set we have padded all the in-\nput tokens to a length of 256 tokens for our model. We note that the task description states that each passage is at most 100 tokens long. The difference in maximum lengths likely comes from our chosen tokenizers, which uses a different tokenizing strategy than the approach that was used to calculate the expected maximum length of 100 tokens. For the training process we used a 80/10/10 split for training-validation-testing of the released training data.",
"4 Approach": "Below, we present the methods we developed to generate teacher responses in real-world samples of teacher-student interactions.",
"4.1 Reinforcement Learning in NLP": "Our submission to the task leaderboard is a sequence-to-sequence-based model. The task is structured in a way that is suited for these kinds of models: Given an input sequence of studentteacher dialogues, the output is another sequence, the response of the teacher. The comparatively small size of the data set and simplicity of the data set allows fast prototyping and experimentation. One research area where problems are also often small is that of Reinforcement Learning (Sutton and Barto, 2018). While combining Reinforcement Learning with human feedback is an active field (Knox and Stone, 2008; Arumugam et al., 2019; Li et al., 2019; Christiano et al., 2023), it has only recently started being used in the field of NLP (Ziegler et al., 2019; Ouyang et al., 2022b; Lambert et al., 2022). Most importantly, the RL4LMs framework (Ramamurthy et al., 2023) has enabled the easy adaptation of RL approaches for NLP tasks. The authors have applied their framework to similar tasks, notably the IMDB review continuation, using the dataset by Maas et al. (2011). They achieved good results on this task using GPT2. They further report good results using T5 (Raffel et al., 2020) for a summarization task on news (Hermann et al., 2015) as well as the CommonGen task (Lin et al., 2019).",
"4.2 T5": "In the spirit of research we have initially decided to use T5 for this task instead of following the findings of the authors and using GPT2 due to the task’s similarity to the IMDB task. The compatibility of our chosen model with both being fine-tuned with\nReinforcement Learning as well as being usable in the RL4LMs framework has been demonstrated on a different task, so we conclude that our approach, while admittedly unusual, is not entirely unfounded in prior research.",
"4.3 GPT-2": "Due to the relatively low ranking on the leaderboard of our T5 model we have additionally finetuned a GPT-2 checkpoint from the HuggingFace repository, with 124 million parameters, after the task concluded. As such this model was not submitted to the leaderboard. We include the configuration used for the training of both models in the appendix.",
"4.4 Algorithm": "We follow the findings of Ramamurthy et al. (2023) and use their NLPO algorithm for the policy optimization during training. The performance of this algorithm is reported as the highest. It is an extension of the PPO algorithm (Schulman et al., 2017) and masks unlikely actions to reduce the action space. In the context of language generation this means masking next tokens whose cumulative probability is below a certain threshold. This reduction of the action space is important in the context of natural language problems as the action space in these contexts can be quite large. In the context of Reinforcement Learning a policy is a probability distribution over actions given a state. In our approach the policy is the language model being fine-tuned. The state is the generated tokens and the action is the next token to be generated in a language generation setting. Considering a language model itself to be a policy is a concept that has been used before in Liu et al. (2021) but is not widespread yet.",
"4.5 Reward Function": "As our reward function we have chosen a pragmatic approach. We decided to use one of the metrics used in the evaluation as the reward function, as that should allow us to train the model to achieve a high score. The possibility of doing this showcases an advantage that a Reinforcement Learningbased approach has over other, more traditional approaches (both classic Machine Learning and Deep Learning) in the field of NLP: To lessen the gap between the evaluation criteria and the loss during training. Approaches for this problem exist (Song et al., 2016; Casas Manzanares et al., 2018)\nbut it remains an open problem. This mismatch can be avoided by using Reinforcement Learning, and, in theory, should allow a high performance on a variety of tasks. Ramamurthy et al. (2023) report that the quality of the reward function has a greater effect on the performance of the model than the amount of training data. To keep our reward function clear we have opted to use only one metric as the reward signal, as opposed to combining all the evaluation metrics into one function that calculates a scalar value. We experimented with using the average of all the evaluation metrics as the reward but empirically found quickly that this does not yield good performance and have not pursued this direction further. The metrics for the BEA task are BERTScore (Zhang et al., 2020) and DialogRPT updown, human vs. rand and human vs. machine scores (Gao et al., 2020). We wanted to avoid the potential issue of reward hacking and thus decided not to use the updown score as a metric, as it seemed potentially prone to that issue. The other two DialogRPT scores were eliminated due producing very high scores (above 0.95) even early on during training and thus are unlikely to be useful as reward signals, as any improvements that the model learns could only lead to marginal increases in reward. For this reason we have chosen to use the BERTScore, specifically the F1, as our reward function.",
"5 Results": "In Table 2 we present the outputs by a zero-shot t5-base model, our fine-tuned t5-base model and our fine-tuned GPT-2 model. Model output were not trimmed or modified. We note that the both the fine-tuned T5 and GPT-2 include prefixes in their responses in some cases. The GPT-2 model is especially prone to outputting a \"student:\" response, which is not the goal of the task. This does not have an overly negative effect on the evaluation metrics however. Further investigation of the alignment of the task metrics with the stated goal of generative models assuming the rule of teacher in student-teacher dialogues is recommended for this reason. Prompting the models by using the dialogue and adding a \"teacher:\" prompt at the end guided the models towards first writing a teacher response and only after that, on occasion, further student responses. To minimize assumptions and to modifying the task to improve our results we have not pursued the evaluation in this direction, and\ninstead evaluated the models only on their output when given a dialogue, without any further prompting or modification.",
"5.1 Training Performance": "Figure 1 shows the scores our GPT-2 model has achieved during the training process on the validation set. The scores of the trained model as well as zero-shot performance on the validation set are reported in Table 3. Due to an error the validation set splits were not pure during the training process of the T5 model and we do not include it in the graphic above.",
"5.2 Test Set Performance": "We present the results of the evaluation on the test set in Table 4. Model outputs were generated on the test data dialogues, with the prefixes included, and were not pruned. Models often included wrong prefixes such as \"student:\" in their response. We did not remove these or filter the outputs for the first \"teacher:\" response. GPT-2 responses were set to have a minimum length of 12 and a maximum length of 100.",
"6 Conclusion": "In this work we have shown our Reinforcement Learning-based approach on the BEA 2023 Shared\nTask. We have used a relatively simple approach and trained two models, t5-base with 220 million parameters and gpt2 with 124 million parameters. Despite the overall performance of the models being mixed we have achieved good results in some areas. The GPT-2 model has achieved a good performance on the task and is showing clear gains in terms of evaluation metrics over a zero-shot approach on the same data. This suggests that Reinforcement Learning-based fine-tuning of language models is a valid approach. According to previous work in the area the model performance when fine-tuned with Reinforcement Learning is strongly influenced by the quality of the reward function. Our approach to this task was very basic and leaves room for improvement, which we believe can be achieved by using both higher quality models instead of relatively small ones with few parameters as well as an improved reward function that makes use of multiple evaluation metrics.",
"A Appendix": "We include our RL4LMs configuratiosn used for training. The configuration seen in Figure 2 shows the configuration for the submitted T5 model. The reward function bertscore_bea is the F1 BERTScore, using the \"distilbert-base-uncased\" model, with the prefixes removed before the rewards are calculated. Figure 3 shows the configuration for the GPT-2 model. The reward function does not remove the prefixes before calculating the reward.\ntokenizer: model_name: t5-base padding_side: left truncation_side: left pad_token_as_eos_token: False\nreward_fn: id: bertscore_bea args: language: en\ndatapool: id: bea_full_seq2seq_splits_onlyResponse args: file_path: \"/data/bea/data/release_1_train_dev/train_with-reference.jsonl\"\nenv: n_envs: 1 args: max_prompt_length: 256 max_episode_length: 100 terminate_on_eos: True prompt_truncation_side: \"right\" context_start_token: 0\nalg: id: nlpo args: n_steps: 128 batch_size: 64 verbose: 1 learning_rate: 0.00001 n_epochs: 5 ent_coef: 0.0 gae_lambda: 0.9 vf_coef: 0.1\nkl_div: coeff: 0.02 target_kl: 2 policy: id: maskable_seq2seq_lm_actor_critic_policy args:\nmodel_name: t5-base apply_model_parallel: True mask_type: \"learned_top_p\" top_mask: 0.9 target_update_iterations: 20 generation_kwargs:\ndo_sample: True min_length: 20 top_k: 200 max_new_tokens: 100 # this must align with env’s max steps\ntrain_evaluation: eval_batch_size: 100 n_iters: 100 eval_every: 10 save_every: 10 metrics: - id: bertscore_bea\ntokenizer: model_name: gpt2 padding_side: left truncation_side: left pad_token_as_eos_token: True\nreward_fn: id: bertscore_bea_distil args: language: en\ndatapool: id: bea_full_seq2seq_splits_onlyResponseNoShuffle args: file_path: \"/data/bea/data/release_1_train_dev/train_with-reference.jsonl\"\nenv: n_envs: 1 args: max_prompt_length: 256 max_episode_length: 100 terminate_on_eos: True\nalg: id: nlpo args: n_steps: 128 batch_size: 64 verbose: 1 learning_rate: 0.00001 n_epochs: 5\nkl_div: coeff: 0.1 target_kl: 1.0 policy: id: maskable_causal_lm_actor_critic_policy args:\nmodel_name: gpt2 apply_model_parallel: True top_mask: 0.9 min_tokens_to_keep: 100 mask_type: ’learned_top_p’ target_update_iterations: 5 generation_kwargs:\ndo_sample: True min_length: 12 max_new_tokens: 100\ntrain_evaluation: eval_batch_size: 100 n_iters: 100 eval_every: 10 save_every: 10 metrics: - id: bertscore_bea_distil\nargs: language: en\nFigure 3: RL4LMs configuration used for training the GPT-2 model."
}
ACL_23_no_limitation/ACL23_1265.json
ADDED
@@ -0,0 +1,19 @@
{
"File Number": "1265",
"Title": "Empowering Conversational Agents using Semantic In-Context Learning",
"abstractText": "Language models are one of the biggest game changers in downstream NLP applications, especially in conversational agents. In spite of their awesome capabilities to generated responses to solve the inquiries, there are still some big challenges to using them. One challenge is how to enable the LLMs to use the private internal data to solve inquires. And secondly, how to keep the LLMs updated with newly incoming data without the burden of finetuning as it is not only expensive but also not an available option for some commercial LLMs, such as ChatGPT. In this work, we propose Semantic In-Context Learning (S-ICL) to address the aforementioned challenges. Our proposed approach participated in the BEA 2023 shared task1 and ended up achieving the fourth place in both the development and evaluation phases.",
"1 Introduction": "Conversational agents are one of the most important applications of NLP. If implemented successfully, they can bring tremendous benefits for both organizations and clients, such as improving the efficiency of customer service in terms of support and availability of the services.\nWith the emergence of powerful large language models (LLMs) such as ChatGPT, there is a lot of interest in leveraging LLMs to develop AI agents. Even though LLMs are capable of answering a broad spectrum of questions, there are still two major bottlenecks for using them as an AI assistant.\nFirst, each organization has some valuable internal knowledge such as FAQs, policies, regulations, etc. that can or should be used to resolve incoming inquiries. However, the LLMs are trained based on public datasets and may not be aware of private knowledge sources that could help them to resolve incoming inquiries more accurately.\n1https://sig-edu.org/sharedtask/2023 Our username and team’s are amino and aiitis, respectively.\nSecondly, fine-tuning these LLMs on the organization’s internal data is not an easy task due to factors such as the size of the LLMs, cost of training, frequent updates in the internal data, and data privacy. For example, in news media, news articles are published every day that LLMs are not aware of them. If the news media decides to use an LLM as an agent, the agent would be unable to provide users with information about current events or answer their questions about what is happening now. On top of that, the fine-tuning option is not available for certain LLMs (e.g., ChatGPT with the GPT-3.5-turbo engine).\nOne possible solution to the mentioned problems is In-Context Learning (ICL), as it can enable the LLMs to perform well on the tasks or data that they have never seen before (Brown et al., 2020). In ICL, a prompt containing an instruction, few labeled samples, and an unlabeled sample is given to the LLM. Then, the LLM would be able to label the unlabeled sample without the need for any gradientbased training (Liu et al., 2022).\nHowever, it is infeasible to show all the available samples to the LLM due to the high cost of computation. Also, previous research shows that the format of the prompt, the selection of samples, the number, order, and structure of samples could have not only significant but also unforeseeable effects on LLMs’ performance (Min et al., 2022; Sanh et al., 2021; Wei et al., 2023; Liu et al., 2022).\nTo solve the aforementioned problems, we propose Semantic In-Context Learning (S-ICL) which utilizes a semantic search engine (i.e., an SBERT model (Reimers and Gurevych, 2019)) and an LLM (i.e., ChatGPT with the gpt-3.5-turbo engine) to build a conversational agent. This agent not only benefits from the knowledge of an LLM but also utilizes available private knowledge sources to provide the correct answer to the inquiries. We also propose a flexible architecture that allows experts to apply and compare different approaches for prompt\n766\nengineering. The proposed model is developed and participated in the BEA 2023 Shared task (Tack et al., 2023). However, the proposed model is flexible, and the agent can be used in other domains such as news media, customer service, and more.\nThe rest of the paper is as follows. In Section 2, we describe the proposed architecture along with its components. In section 3, we compare different configurations of the proposed model on the created test set, and we also evaluate the model on the competiton’s data. 
Finally, this paper is wrapped up with the conclusion in section 4.",
"2 Proposed Model": "In this section, we present our proposed approach for generating a response to the inquiry. Our proposed approach uses semantic search (Reimers and Gurevych, 2019) to enable the agent to utilize private domain data. It also uses a large language model not only to provide higher quality answers but also to enable the agent to answer questions that are significantly different from past questions and answers in the private domain data.",
|
| 7 |
+
"2.1 Overview": "As shown in Figure 1, the proposed architecture consists of five main components: Data preprocessor, Embedder, Retriever, Prompt builder, and Answer generator. The first three components are related to the semantic search part of the architecture, while the other two are related to the language model.",
|
| 8 |
+
"2.2 Data pre-processor": "The data pre-processor receives utterances in JSON format containing a context and a query (i.e., the last utterance). It extracts and transforms the JSON file into the followings:\nConcatenation: it’s a textual concatenation of all the utterances made by a student and a teacher. The main purpose of transforming data into this format is to enable its use in the semantic search part of the architecture.\nSample: It’s a conversational flow between the student and the teacher. Based on who wrote the utterance, either \"Teacher: \" or \"Student: \" would be appended in the beginning of the utterance. This format is being used by the prompt builder component as it is more appropriate to be used by the language model.",
|
| 9 |
+
"2.3 Embedder": "In this section, we use a state-of-the-art transformer encoder model to convert the concatenation format, which is built in the data pre-precossor part, into the embedding represention. We use the pre-trained model \"multi-qa-mpnet-base-dot-v1\" to generate embeddings as it has the highest performance in the Hugging Face benchmark 2. The tokenizer first tokenizes the input text, and then the transformer encoder model infers an embedding vector with a size of 768 for each token of the input text. The embedding vector of the CLS token in the last layer is considered the embedding representation of the whole input text.",
|
| 10 |
+
"2.4 Retriever": "The Retriever is responsible for finding the most similar records that exist in the training data to the incoming context. It calculates the cosine similarity between the embedding vector of the context and each embedding vector in the training set. Then, the results would be sorted in descending order based on the cosine similarity score, and the top N results would be passed on to the next step.\nThis process could be significantly sped up on large datasets by using approximate K-nearest neighbor methods, such as Facebook AI Similarity Search (Faiss) (Johnson et al., 2019). However, due to the small size of our data, we don’t need to use any approximate K-NN methods.",
|
| 11 |
+
"2.5 Prompt builder": "The prompt builder component creates a prompt based on the selected prompt building approach. Figure 2 shows the structure of the prompt which consists of the following components in order:\nCommand: It’s a first component of the prompt that informs the language model of what is expected to be done.\nSample(s): The retrieved sample(s) from the training set are included to assist the language model in answering the inquiry. This part of the prompt is optional because the number of samples to be used depends on the selected approach.\nInquiry: It contains the last utterance along with the previous utterances (i.e., Context) given to the system.\nThe command part of the prompt is written by humans, while the other parts are generated auto-\n2https://www.sbert.net/docs/ pretrained_models.html\nmatically depending on the chosen prompt building approach. So far, four prompt building approaches have been designed, but more could be defined to further improve the agent’s performance or adapt it better to different domains of data, such as news.",
|
| 12 |
+
"2.6 Answer Generator": "A large language model is used in this part of the architecture. In our experiment, we use ChatGPT 3 API using gpt-3.5-turbo engine. A prompt created in the previous stage would be sent to the language model, and the response would be returned to the end user. To make the result reproducible, we set the temperature value to zero.\nIn this way, the language model can not only use its knowledge but also have access to the relevant past responses from the private domain knowledge to answer the question. Another advantage is that there is no need to fine-tune the large language model on private internal data, which may not be an option for many models, such as ChatGPT.",
|
| 13 |
+
"3 Experiment": "This section has three subsections. In the first subsection, we introduce the dataset used, split the train portion of the data into our created train and test sets, and show how the pre-processing has been done. In the second subsection, we conduct experiments on the proposed architecture using the created test set (i.e., selected from the original training set) and compare the accuracy of the model using\n3https://openai.com/blog/chatgpt\ndifferent prompt building approaches. In the third subsection, we will demonstrate the model’s performance on the development and testing sets of the competition data.",
|
| 14 |
+
"3.1 Data": "The data consists of the conversation between a student and a teacher provided by (Caines et al., 2020). The sizes of the provided data and their release dates in the competition are shown in Table 1. We transform the training set using the pre-processor component (subsection 2.2). Then, we use the embedder component (subsection 2.3) to convert the concatenation of the utterances into their embedding representations (i.e., Train set embedding in Figure 1).\nThen, we split the train set into customized train and test sets with sizes of 2647 and 100, respectively. We use the customized train and test sets to compare the different prompt generation approaches in subsection 3.2. Since some of the records in the training set have similar utterances (i.e., they overlap), we select the test data in a way that none of the test conversations can be answered directly from the conversations in the training set (i.e., there is no overlap between the utterances of the train and test sets).",
|
| 15 |
+
"3.2 Evaluation of different approaches": "We use five different approaches to provide the response to the incoming inquiry. In the first approach, we only use the semantic search. That\nmeans the last utterance of the most similar retrieved sample is chosen as a response. Next, we are curious to see how good the language model is in completing the conversation without using any samples. The command we use is \"Complete the following conversation by giving an appropriate answer by the teacher\". However, for the third approach, we ask the language model to \"Find the appropriate answer by the teacher from sample 1 to complete the conversation 1\". The provided sample, which has the ID \"train_0063\", was chosen by us from the training set and has been used for all inquiries.\nDuring the experiments, we observed that ChatGPT tends to generate longer responses than the ground truths. However, we discovered that by formulating our prompt command in a certain way (i.e., find the appropriate answer by the teacher from sample ...), ChatGPT can produce more concise and shorter responses. Therefore, we decided to write the command part of our prompt in this way. We also observed that for some inquiries, ChatGPT mentions \"teacher :\" in its response, so we wrote a rule to remove it.\nThe fourth approach includes the top 3 most similar samples in the prompt and the command is \"Find the appropriate answer by the teacher from sample 1, sample 2 and sample 3 to complete the conversation 1\". And the last approach is similar to the third one but instead of using the curated sample, the most similar sample from the training set is being used. The last two approaches are based on S-ICL.\nThe results of the above approaches on the created test dataset are shown in Table 2 in terms of BERT Score (Zhang et al., 2019) and DialogRPT (Gao et al., 2020). In Table 2, P, R, F, U, HvR, HvM stand for precision, recall, f1-score, updown (the probability that a response receives upvotes), human vs random (the probability that the response is relevant to the given context), human vs machine (the probability that the response was written by a human rather than generated by a machine), respectively. The first three measures belong to BERTScore (Zhang et al., 2019), and the rest of them belong to DialogRPT (Gao et al., 2020). We use \"roberta-large\" model 4 for the BERTScore as we do not know which model the competition is using. We then compare the generated responses with their ground-truths using BERTScore in terms of precision, recall, and f1-score. Each of the first three measures of DialogRPT (i.e., U, HvR, and HvM) 5 has its own pre-trained model. Each model\n4https://huggingface.co/roberta-large 5https://github.com/golsun/DialogRPT\nreceives the generated responses and their corresponding contexts (i.e., the previous utterances of each conversation) to calculate a score.\nInterestingly, the model that uses the fixed sample for all the inquiries (third approach) gained the best BERTscore in terms of f1-score. This observation is inline with the results of other studies such as (Min et al., 2022) that they concluded replacing the sample labels randomly would barely hurts the performance of the LLMs. In terms of DialogRPT, the second approach gained the best results. However, when we examined the generated answers, we found out the answers of the fifth approach are both more reasonable and preferable in comparison with the other approaches.",
|
| 16 |
+
"3.3 BEA Workshop’s evaluation": "Our proposed approach ranked fourth both in development and evaluation phases. We used our third approach (using the fixed sample) for the development phase as we noticed the majority of utterances in development data have overlap with the training set. If we use either the fifth or fourth approach, the model would recognize the similarity between the sample and the conversation and produce a response so similar to the existing utterance in the sample that it would inflate the performance of the system. However, we discovered that the test data is different in a way that none of its conversations could have their responses directly obtained from any utterances in either the training or development sets. Therefore, for the evaluation set, we used the fifth approach. Another reason that why we used the fifth approach in the evaluation phase is that the top three models would be evaluated by the human evaluators, and we already noticed in subsection 3.2 that the results of the fifth approach are more desirable from humans’ point of view.\nThe evaluation phase was started on May 1st and ended on May 5th. Due to an unprecedented emergency, we were unable to continue working\non the test data and our last submission was on May 1st. Our model ended up ranking fourth in the evaluation phase and could not pass to the human evaluation phase. However, we think that the proposed model has a high potential for improvement, especially if more efforts would be put on the prompt engineering part of the architecture.",
|
| 17 |
+
"4 Conclusion": "We proposed a Semantic In-Context Learning (SICM) approach for conversational agents using the combination of a semantic search and a large language model (i.e., ChatGPT). We also implemented an architecture enabling users to apply and compare different approaches for prompt engineering. We applied our proposed method on the BEA 2023 shared task and our approach ended up ranking fourth in both the development and evaluation phases.",
|
| 18 |
+
"Acknowledgements": "This work is funded by Natural Science and Engineering Research Council of Canada (NSERC)."
|
| 19 |
+
}
|
ACL_23_no_limitation/ACL23_1270.json
ADDED
|
@@ -0,0 +1,24 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1270",
|
| 3 |
+
"Title": "Gaussian Distributed Prototypical Network for Few-shot Genomic Variant Detection",
|
| 4 |
+
"abstractText": "Automatically identifying genetic mutations in the cancer literature using text mining technology has been an important way to study the vast amount of cancer medical literature. However, novel knowledge regarding the genetic variants proliferates rapidly, though current supervised learning models struggle with discovering these unknown entity types. Few-shot learning allows a model to perform effectively with great generalization on new entity types, which has not been explored in recognizing cancer mutation detection. This paper addresses cancer mutation detection tasks with few-shot learning paradigms. We propose GDPN framework, which models the label dependency from the training examples in the support set and approximates the transition scores via Gaussian distribution. The experiments on three benchmark cancer mutation datasets show the effectiveness of our proposed model. Due to the ever-expanding biomedical literature, automated approaches in the biomedical text mining domain play an important role in mining gene interactions (Özgür et al., 2008; Trieu et al., 2020; Sahu et al., 2019), identifying biomarkers and exploring the genetic mutations, which can significantly reduce time and effort compared to traditional labour-intensive approaches. In particular, as a critical step in analysing the literature for cancer genomics data, text mining in cancer genomics studies (Birgmeier et al., 2020; Cejuela et al., 2017; Mahmood et al., 2016; Wei et al., 2013, 2018) has automatically identified novel somatic alterations such as single-nucleotide polymorphisms (SNPs), deletion and insertions, copy number aberrations, structural variants, and gene fusions. For cancer genomics mutation extraction, the most representative works use either manuallycrafted templates (Caporaso et al., 2007; Si and Training set Test set One such mutation MEK1 (P124L) was identified in a resistant metastatic focus that emerged in a melanoma patient treated with AZD6244. Label: Substitution Three microdeletions were also identified,two of which ( c.611delG and c.640_667del28) were located within the coding region whereas one ( c.609+28_610-16del) was located entirely within intron. Label: Deletion Selec ve accumula ons of radiotracer in the L858R and [E746-A750] del EGFR mutants were observed when compared to the tumors with wild-type EGFR or vector-transfected cells. DNA sequencing revealed that all the affected males carried an inser on muta on [(c.370-371insA)] unreported previously predicted to result in frameshi s and generate a premature stop codon (p.S124fsX127). Prediction by the proposed model: B-DEL √ Prediction by BiLST+CRF model: O ❌ Prediction by the proposed model: B-DEL Prediction by Prototype Network: I-DEL ❌ √ Figure 1: An example shows the semantic inconsistency issue between training set and test set. The entity ’P124L’ from the training set differs substantially from ’c.611delG’ in the test set, highlighting the challenge of predicting unseen categories by supervised learningbased models. This difference illustrates the difficulty traditional few-shot learni g m thods encounter when trying to recognize novel cancer genomic variants. Roberts, 2018) or feature engineering (Cejuela et al., 2017; Wei et al., 2018) with machine learning-based approaches (Doughty et al., 2011; Wei et al., 2015; Si and Roberts, 2018). The main drawback of the traditional methods is that they are not competent with unseen categories. 
Nevertheless, as cancer research advances, thousands of new cancer genomes and exomes are identified and classified into new categories. There has been a lack of progress in automated genomic variant detection attempts. There may be a way around this problem by annotating more data for the model to capture new categories, but this would be highly costly in terms of time and labour costs in the cancer domain. Intuitively, humans can understand a concept with a few samples, which drives the researcher to apply the few shot learning paradigm to downstream text mining tasks, such as named entity recognition (Cao et al., 2021; Settles, 2004), relation extraction (RE) (Yao et al., 2019; Zhou et al., 2014), and event extraction (EE) (Trieu et al., 2020; Björne and Salakoski, 2018). In FSL, a trained model rapidly learns a new concept from a few examples while retaining great generalisation from observed examples (Vinyals et al., 2016). Thus, if",
|
| 5 |
+
"1 Formulation": "Our goal in this work is to formulate cancer genomic variant detection as a FSL problem, which has not been done in prior work. To achieve this, we first present the FSL framework and specify symbols and terminology in this section 1. Then we illustrate the proposed method in the following section 2.",
|
| 6 |
+
"1.1 Few Shot Learning": "In Few shot learning (FSL), we preliminarily assign two sets: support set S, which contains classified\nsamples, and query set Q, which contains unclassified samples. Models can predict the label of an instance x from a query set Q, by learning from a support set S and a label set C. Prior FSL investigations used an N -way K-shot configuration with N clusters representing N categories and K data samples.\nSince we cast this task as a sequence labelling problem and adopt BIO tagging schema (B represents beginning of an entity, I represents intermediate of an entity, O represents outside an entity), we extend the N -way K-shot to 2∗N+1 way K shot, where 2*N clusters denote the B and I categories, and 1 cluster denotes O label.\nTherefore, given a word sequence X = {x1, x2, . . . , xn} and its corresponding label sequence Y = {y1, y2, . . . , yn}, the support S can be represented as:\nS = {(X0, Y0), (X1, Y1), . . . , | (X(2∗N+1)∗K , Y(2∗N+1)∗K)} (1)\nWhere (2 ∗ N + 1) ∗ K is the total number of samples in the support set S .",
|
| 7 |
+
"1.2 Linear Chain CRF": "Conditional Random Fields (CRFs) (Wallach, 2004) are undirected statistical graphical models, which are well suited to tackle sequence labelling problem. Following on the above section, given a word sequence X = {x1, x2, . . . , xn} and its corresponding label sequence Y = {y1, y2, . . . , yn}, linear-chain CRFs define the conditional probability of a label sequence given an input sequence to be:\nP (Y |X) = exp (∑n k=1 U(xk, yk) + ∑n−1 k=1 T (yk, yk+1) )\nZ(X) (2)\nwhere Z(X) is a normalization factor of all state sequences. Note that U(.) is the scoring function that calculates the probabilistic score of label y for each token in the sequence X . T (.) is the transition function that calculates the transit score between the adjacent labels yk and yk+1.",
|
| 8 |
+
"2.1 Instance Encoder": "We first map discrete words to a continuous highdimensional vector space to simplify neural net-\nwork training using BioBERT (Lee et al., 2020), which is a pre-trained biomedical language representation model and had shown great effectiveness on many downstream biomedical text mining tasks. Formally, Given the token sequence x = {x1, x2, . . . , xn}, we have:\nH = h1, h2, . . . , hn = BioBERT (x1, x2, . . . , xn) (3)",
|
| 9 |
+
"2.2 Interactive Prototype Encoder": "This module generates a representative vector for each label t in the support set S from the overall representations of its instances. Instead of employing the original Prototypical Network suggested in (Snell et al., 2017), which determines all representation vectors equally, we claimed that the supporting vectors are conditionally important with respect to each query q ∈ Q, therefore, model the interactions from each label. Formally, to compute the prototype for a class t ∈ T , it collects all of the instance’s representations and calculates them as the supporting vectors’ weighted sum. The weights are determined by the attention mechanism in accordance with the query representation:\najt = ∑ σ(f(Hjt )⊙ f(q)) (4)\nαjt = exp(ajt )∑\nHk∈S exp(a k t )\n(5)\nPrt = ∑\nHt∈S αjtf(Hj) (6)\nwhere Prt denotes the Prototypical representation of label t ∈ T , ⊙ denotes element-wise product. f represents the encoding function and is BioBERT in our paper.",
|
| 10 |
+
"2.3 Gaussian Distributed Prototypical CRF": "In section 1.2, we already learn that CRF layer consists of a scoring function U(.) and a transition function T (.). We compute these two components separately. The scoring function represents a value U for the label y given our token xi vector at the i-th timestep. Prior works leverage the output of LSTM as the U , where it is the so-called LSTM+CRF framework (Huang et al., 2015) that has been applied to most of the conventional NER tasks.\nInstead of using the output of LSTM to gain the output U from scoring function, we first calculate\nthe correlation between each token xi representation and the prototype representation Prt for label yt.\nsi = fS(yt, xi,S) = Prt ⊙ hi (7)\nAccordingly, the scoring function U() for the entire sequence is gained via the sum of each token score as follows si :\nU(y, x,S) = n∑\nt=1\nfS(yt, ht,S) (8)\nIn terms of the transition score, the conventional CRF model optimizes the transition function T (.) from massive data samples, which overcomes the data fluctuation problem to a large extent. However, the few data samples can achieve dramatic data randomness. The transit function lacks the optimization process, thus resulting in huge data bias representing the probability of transition of two adjacent labels. To alleviate this problem and smoothen the randomness caused by a few samples, we adopt Gaussian distribution as our transition function and utilize mean value µ and variance value σ to approximate the transition score as follows:\nµij = Wµ(Pri;Prj) + bµ (9)\nσij = exp (Wσ(Pri;Prj) + bσ (10)\nWhere ; denotes concatenation operations. Like linear chain CRF, our transition function for the entire sequence is achieved as follows:\nT (y) = n−1∑\ni=1\nT (yi, yi+1)) (11)\nTherefore, the probability of label sequence Y given the token sequence is as same as the conventional CRF model:\nP (Y |X) = exp\n(∑n k=1 U(xk, yk)\n+ ��n−1\nk=1 T (yk, yk+1)\n)\nZ(X) (12)\nWhere and Z(x) is normalization factor in order to get a probability distribution over sequences. In the inference stage, we use Viterbi algorithm (Forney, 1973) as with traditional CRF to find the optimal path from the input.",
|
| 11 |
+
"3.1 Dataset": "In this work, we implement the proposed method on three benchmark datasets. The relevant statistical figures have been listed in Table 1. TmVar is a sequence variant corpus derived from Pubmed abstracts, which contains a large number of sequence variants at both the protein and gene level using a standard nomenclature for sequence variants created by the human genome variation society (Wei et al., 2013). TmVar includes 500 PubMed abstracts and titles with 871 variants.\nBRONCO (Lee et al., 2016) is now the most extensive full-text cancer variant corpus annotated with information about genes, diseases, medicines, and cell lines associated with the variants. BRONCO has 108 full-text papers with 403 gene variations, as indicated in Table 1. EMU (Doughty et al., 2011) searched mutations, gene mentions, and disease connections by retrieving a set of PubMed abstracts that were possibly beneficial for finding mutations. EMU contains two subsets which are Breast cancer and prostate\ncancer, respectively. As we can see from Table 1, EMU consists of 109 PubMed abstracts with 172 variants.",
|
| 12 |
+
"3.2 Data Pre-processing": "FSL does not allow us to directly use the dataset’s splits since the label types in the training set and the testing set are not congruent. As a result, we adopted the scheme conducted in (Lai et al., 2020) and have further divided these datasets to meet three requirements for FSL:\n• Label types in the train set are distinct from those in the testing and development sets. In another word, there is no overlap regarding the label types between training/development/testing sets.\n• The label type contain less than 5 samples are abandoned.\n• The training set should contain as many samples as possible.\nWe re-split the dataset based on the standards above. As the label types of EMU and BRONCO are quite limited to support the FSL setup, we combine both dataset as a whole to underpin the FSL training and testing process. The final splits are shown in Table 2.",
|
| 13 |
+
"3.3 Implementation Details": "We adopt a mini-batch mechanism to train our model, with a batch size of 2 and a learning rate of 1e-5. A warm-up strategy and dropout with 0.1 probability are introduced to prevent the model from over-fitting. All parameters are optimized using Adam (Kingma and Ba, 2014). Furthermore, we also adopt an episodic training scheme that has been commonly adopted in fsl, and we used the sample evaluation methods in (Cong et al., 2020); an entity is counted as correct only if its label and its textual span are both correct.",
|
| 14 |
+
"3.4 Baseline Models": "Since the scope of our task is NER with fsl settings, we compare the proposed model with two types of baselines: the state-of-the-art FSL models that have been applied in many areas and the typical NER models commonly used for NER tasks. For FSL baseline models, we applied 5 well-adopted ones which include (1) Matching Network (Vinyals et al., 2016) adopted cosine similarity as a prototypical\nscore with the averaging operation. (2) Proto Network (Snell et al., 2017) used Euclidean Distance as the similarity metric with the averaging prototype. (3) Proto+Dot (Lai et al., 2020) used a dot product to compute the similarity. (4) Proto+Att (Lai et al., 2020) used a weighted sum prototype with Euclidean Distance. (5) Relation (Sung et al., 2018) builds a trainable distance function and a neural network to measure the similarity.\nIn terms of the CRF-based baselines, they can be divided into two groups: The first group consists of vanilla CRF sequence labeling models: (1)BiLSTM+CRF (Luo et al., 2018) utilizes the BiLSTM layer to map the semantics features to a higher dimension and CRF layer is to model the label’s consistency. (2) BERT+CRF (Dai et al., 2019) is similar to BiLSTM+CRF instead of using BERT for feature extraction. The second group consists of the state-of-the-art CRF sequence labeling models for FSL NER tasks 1: (1) CONTAINER (Das et al., 2021) optimized a generalized objective of differentiating between token categories based on their Gaussian-distributed embeddings. This effectively alleviates overfitting issues originating from training domains. (2) FEW-NERD (Ding et al., 2021) released a massive-scale FSL NER dataset and proposed the corresponding baseline models that combined BERT tagger with Prototype network. (3) Decomposed Meta-Learning (Ma et al., 2022) took the few-shot span detection as a sequence labeling problem and trained the span detector by introducing the model-agnostic meta-learning (MAML) algorithm to find a good model parameter initialization that could fast adapt to new entity classes.",
|
| 15 |
+
"4 Results": "Table 4 and Table 5 in Appendix sector show the precision, recall, and F1 score of the baseline models and the proposed model on the three benchmark datasets under N-way K-shot few-shot learning settings. Unlike the conventional few shot learning tasks using 5 or 10 ways and shots for the settings, we utilize 1-to-3 ways and 1-to-5 shots due to the limited scale of the datasets. Additionally, we also evaluate the model by test epoch, which relates to the number of samples included in the test set, to verify the effect of the data fluctuation on the model’s performance.\n1We only adopt the baseline models that are applicable for our datasets and have the same settings with the proposed model.",
|
| 16 |
+
"4.1 From the perspective of FSL Settings": "We first evaluate the results from the perspective of different test settings. To be more concrete, we test the effect of N-way, K-shot, and test epoch, respectively. We can see from both Table 4 and\nTable 5 in Appendix the performance of the models on 1-way K-shot is always better than 2-way and 3- way K-shot. Statistically, the vanilla NER models drop 0.7% and 4.5% on average from 1 way to 2-way and 3-way given a certain number of shot, the general FSL models drop 3.12% and 4.86%, while CRF-based models drop 3.54% 6.13% and under the same circumstance. The reason is the fact that the increase in the number of classes leads to a larger scope of the probability distribution, resulting in the lower results.\nThe above demonstrates the influence of the Nway. Next, we analyze the effect of K-shot. As shown in Table 5, the baseline and proposed models mainly achieve better performance while K increases with a fixed N-way. In the FSL models in general, comparing the performance from K = 1 to K = 5 given the 1-way setting, the prototype-based models boost 4.94% F1 score on average, the other\nFSL models, i.e., Relation network and Matching network increase 3.58% F1 score on average, and even Vanilla NER model improves 5.24% on average. In the CRF-based models, we can also notice a 3.79% F1 increase. This unified tendency indicates that the added shots are able to benefit the models to gain more semantic features given a certain label, which is consistent with the experimental results we can observe from the other works (Lai et al., 2020; Das et al., 2021; Ma et al., 2022).\nWe also evaluate how the number of test epochs affects the results. We initially speculated that the test epoch determines the number of samples involved in the test loop, reflecting the influence of data randomness on models’ performance. Thus, the lower test epoch should achieve higher performance improvements on a specific model, as the data randomness issue is more severe when the number of the data sample is smaller. However, the results suggest that different settings of the test epoch do not straightforwardly relate to the data randomness. As noticed in Table 4 and Table 5 in Appendix, the model’s performance can be either higher or lower with different test epochs. When we keep the N-way and K-shot fixed, the test epoch cannot unveil the data fluctuation issue.",
|
| 17 |
+
"4.2 From the Perspective of Models": "Then, we analyze the experimental results from the perspective of the model types. We can notice that in both Table 4 and Table 5 that general FSL\nmodels outperform the vanilla NER models to a large extent. The reason is that the vanilla NER models struggle with the insufficient semantic features for each label, thus resulting in an unqualified transition matrix to model the label’s consistency. BERT+CRF model exacerbates this trend due to the specific tokenization approach, WordPiece (Song et al., 2020), which is a more fine-grained way to split the words into subwords.\nOn the other hand, in the general FSL models, prototype-based (Proto, Proto+Att, Proto+Dot) models outperform the Matching network and the Relation network in all FSL configurations. Proto+Att and Proto-Dot are marginally better than Proto among prototype network models, with an average performance improvement of 2.18% and 1.96% F1 scores on the three benchmark datasets. The reason can be inferred that the interactive information amongst each label is integrated by Att and Dot operations, which naturally gains more benefits from the data samples. The proposed model outperforms the prototype-based models with an average 10.33% F1 score gap on BRONCO/EMU dataset and an average 10.99% F1 score gap on TmVar dataset.\nCompared to the CRF-based models, our model is also built upon the CRF architecture, which leverages the label’s dependency to cast this task as a sequence labeling problem. As we can notice in Table 4, our model outperforms the baseline models under different settings, and achieves 1.39% F1 score advance in TmVar dataset compared to each state-of-the-art results. For BRONCO/EMU dataset, we can notice that our model achieves the competitive results. When there is under the settings of 1Way-5Shot-5Epoch, 1Way-1Shot-5Epoch and 2Way-1Shot-10Epoch, our models outperform all the baseline models. The model gains these improvements due to the fact that successfully reducing the illegal label transition from CRF-based models. Our proposed model approximates the transition scores via prototypical representations, and optimize it by Guassian distribution to alleviate the huge data fluctuation issue caused by limited number of training samples.",
|
| 18 |
+
"4.3.1 Ablation Study": "We evaluate the model components in three aspects shown in Table 3. Instead of using Gaussian distribution, we generate the transit score directly, the\nmodel performance drops 0.38% and 0.61% F1 score, respectively, on TmVar and BRONCO/EMU datasets. It indicates that our Gaussian distribution estimation can alleviate the data uncertainty to some extent and thus estimate a more accurate transit score to reflect the data samples. Furthermore, we also replace the interactive prototype layer with the vanilla prototype network, and we can notice the model performance decreases with a 3.98% and 3.81% F1 gap. We can infer that the interactive prototype layer can integrate with different categories by giving different weights to the prototypical representations. Finally, we changed our BioBERT instance encoder to raw word2vec embedding (McCormick, 2016). The results dropped 1.35% and 0.38% F1 scores on the datasets, which shows the effectiveness of the BioBERT in encoding the semantic information.",
|
| 19 |
+
"5 Discussion": "As shown in Figure 4, we utilize two cases to demonstrate the effectiveness of the proposed model. Specifically, we compare the proposed model with the FSL and conventional NER models, respectively, to showcase our model’s advance.\nThe upper figure is a comparison between the proposed model and a conventional CRF-based NER model. We can notice that our model correctly assigns the “E746-A750” a label “B-DEL”, while BiLSTM-CRF model wrongly predicts it as “O.” As the label “B-DEL” rarely appears in the training set, BiLSTM-CRF model struggles with capturing its relevant semantics and assigning “B-DEL” to the correct token spans. Our model can predict the “B-DEL” for the token “E746-A750” credited to the prototype representation of the label “B-DEL” that has been learned in the support set.\nThe following case compares the proposed model and a prototype-based FSL model. Our model successfully predicts “c.370-371insA” as “B-DEL” while Prototype Network predicts its as “I-DEL”. This is due to the fact that Prototype Network only learns the transition scores via prototype representations of specific categories. Although the prototype representation provides the feature of label “DEL” to some extent, the model still miscognizes it as “I-DEL” because of the data randomness. Our model overcomes this issue according to suppress the data fluctuation via Gussinan distribution, therefore predicting this case correctly.\nTraining set Test set One such mutation MEK1 (P124L) was identified in a resistant metastatic focus that emerged in a melanoma patient treated with AZD6244. Label: Substitution Three microdeletions were also identified,two of which ( c.611delG and c.640_667del28) were located within the coding region whereas one ( c.609+28_610-16del) was located entirely within intron. Label: Deletion",
|
| 20 |
+
"5.1 Empirical Experiment of Distributions": "Gaussian distribution is leveraged to estimate the transition scores to smoothen the fluctuation caused by scarce samples. In this section, we also conduct an empirical experiments to test the model’s performance with different distributions 2. The distributions can be divided into two groups, the first group is discrete variable distributions including Categorical distribution, Binomial distribution, and Bernoulli distribution. These group of distributions gain much lower results shown in Figure 3, because the range of the distribution function is discrete, and the output of possible values is finite. The second group is continuous variable distributions, including Gaussian distribution, Log-Normal distribution and Student’s t distribution. As we can see from Figure 3, although these distributions turning out to be slightly higher or lower in a limited range, Gaussian distribution still achieves the best results in majority of the settings. We speculate the reason is that Gaussian distribution can better eliminate the influence of outliers in few sample scene, so as to accurately grasp the central tendency and discrete trend of data, therefore we empirically apply Gaussian distribution to our method.",
|
| 21 |
+
"5.2 Error Analysis": "We also conducted the error analysis of predictions to demonstrate the models’ bottleneck. 81.7% of\n2https://pytorch.org/docs/stable/distributions.html\nthem attribute to long-span errors, which means our model is relatively weak in predicting the textual spans that constitute more than one token. By ‘longspan errors’, we refer to instances when our model only predicted a portion of the total relevant token span, or when our model failed to properly identify and predict an entity that spans multiple tokens. This does not necessarily indicate a deficiency with the tokenization process. In fact, many surface forms of mutation events do consist of more than one token. However, the model struggles to capture these instances consistently, leading to these \"long-span errors\". This challenge appears to be a common issue in few-shot learning for this type of NER task, which often require more comprehensive training to effectively capture and predict entities that consist of multiple tokens.\nOn the other hand, 9.8% of errors are because our model does not recognize the target entities, thus just assigning them a “O” label. Finally, 8.5% of errors can be summarized that our model successfully recognizes the textual span but wrongly assign the labels to them since some categories provide limited semantic features in the support set to be used in the training stage.",
|
| 22 |
+
"6 Conclusion": "In this paper, we address the problem of recognizing the unseen entity categories in the genomic cancer literature. We exploit the few shot learning paradigm in this task and propose a transited pro-\ntotype NER framework to generate the transition scores for CRF models. Meanwhile, since the training samples are limited in the support set, which results in data fluctuation, we adopt Gaussian distribution as our transition function to smoothen the randomness caused by a few samples. Finally, experimental results on the three cancer genomic datasets prove the effectiveness of our proposed method.",
|
| 23 |
+
"7 Appendix": "Table 4 and Table 5 are shown in the next page.\n1W ay\n-6 Sh\not -1\n0E po\nch 2W\nay -6\nSh ot\n-1 0E\npo ch\n3W ay\n-6 Sh\not -1\n0E po\nch 1W\nay -6\nSh ot\n-1 00\nE po\nch 2W\nay -6\nSh ot\n-1 00\nE po\nch 3W\nay -6\nSh ot\n-1 00\nE po\nch\nM od\nel s\nPr ec\n. R\nec .\nF1 sc\nor e\nPr ec\n. R\nec .\nF1 sc\nor e\nPr ec\n. R\nec .\nF1 sc\nor e\nPr ec\n. R\nec .\nF1 sc\nor e\nPr ec\n. R\nec .\nF1 sc\nor e\nPr ec\n. R\nec .\nF1 sc\nor e\nVa ni\nlla N\nE R\nm od\nel s\nB iL\nST M\n+C R\nF 5.\n0% 80\n.0 %\n9. 4%\n8. 0%\n7. 0%\n14 .4\n% 0.\n0% 0.\n0% 0.\n0% 5.\n0% 90\n.0 %\n9. 5%\n2. 0%\n90 .0\n% 3.\n9% 0.\n0% 0.\n0% 0.\n0%\nB E\nR T\n+C R\nF 3.\n00 %\n90 .0\n0% 5.\n80 %\n3. 00\n% 90\n.0 0%\n5. 8%\n0. 0%\n0. 0%\n0. 0%\n4. 0%\n67 .0\n% 7.\n5% 2.\n0% 90\n.0 %\n3. 9%\n0. 0%\n0. 0%\n0. 0%\nFS L\nm od\nel si\nn ge\nne ra\nl\nM at\nch in\ng N\net w\nor k\n5. 12\n% 50\n.0 0%\n9. 29\n% 6.\n88 %\n75 .0\n0% 12\n.6 1%\n5. 12\n% 50\n.0 %\n9. 29\n% 7.\n13 %\n96 .0\n0% 13\n.2 8%\n4. 21\n% 50\n.5 0%\n7. 77\n% 3.\n45 %\n35 .6\n7% 6.\n29 %\nR el\nat io\nn M\nod el\n12 .4\n1% 25\n.6 4%\n16 .7\n2% 8.\n23 %\n24 .6\n1% 12\n.3 3%\n7. 61\n% 21\n.1 1%\n11 .1\n9% 9.\n18 %\n10 .0\n0% 9.\n57 %\n5. 21\n% 28\n.0 0%\n8. 79\n% 3.\n45 %\n16 .6\n7% 5.\n72 %\nPr ot\no N\net w\nor k\n17 .2\n4% 50\n.0 0%\n25 .6\n4% 12\n.3 1%\n40 .0\n0% 18\n.8 2%\n17 .2\n4% 50\n.0 0%\n25 .6\n4% 7.\n04 %\n24 .0\n0% 10\n.8 8%\n5. 70\n% 26\n.5 0%\n9. 38\n% 4.\n28 %\n18 .6\n7% 6.\n97 %\nPr ot\no+ D\not 22\n.8 0%\n48 .0\n0% 30\n.9 2%\n19 .7\n0 %\n40 .0\n0% 26\n.4 0%\n16 .2\n0% 36\n.7 0%\n24 .4\n8% 7.\n12 %\n26 .4\n0% 11\n.2 2%\n7. 12\n% 24\n.0 0%\n10 .9\n8% 6.\n12 %\n16 .6\n7% 8.\n95 %\nPr ot\no+ A\ntt 23\n.0 0%\n46 .0\n0% 30\n.6 7%\n20 .1\n0% 42\n.0 0%\n27 .1\n9% 17\n.2 4%\n36 .0\n0% 23\n.3 0%\n9. 18\n% 28\n.0 0%\n13 .8\n3% 7.\n31 %\n24 .1\n2% 11\n.2 2%\n5. 82\n% 16\n.6 7%\n8. 62\n%\nC R\nFba\nse d\nm od\nel s\nC O\nN TA\nIN E\nR (D\nas et\nal .,\n20 21\n) 27\n.5 0%\n52 .3\n4% 35\n.4 8%\n- -\n- -\n- -\n32 .2\n6% 45\n.4 5%\n37 .7\n4% -\n- -\n- -\n-\nFE W\n-N E\nR D\n(D in\ng et\nal .,\n20 21\n) 24\n.9 0%\n68 .3\n9% 36\n.5 0%\n19 .6\n3% 68\n.5 4%\n30 .5\n2% 15\n.7 0%\n61 .1\n5% 24\n.9 8%\n26 .5\n7% 82\n.4 5%\n40 .1\n9% 24\n.5 2%\n20 .1\n9% 22\n.1 5%\n22 .1\n1% 6.\n66 %\n10 .2\n4%\nD M\nL (M\na et\nal .,\n20 22\n) 25\n.3 1%\n51 .6\n7% 33\n.9 8%\n38 .1\n2% 34\n.1 2%\n36 .0\n1% 14\n.8 9%\n50 .2\n3% 22\n.9 7%\n29 .7\n4% 69\n.2 3%\n41 .6\n1% 19\n.3 5%\n26 .7\n8% 22\n.4 7%\n22 .4\n3% 6.\n12 %\n9. 62\n%\nou rm\nod el\n31 .2\n5% 50\n.0 0%\n38 .4\n6% 30\n.3 0%\n50 .0\n0% 37\n.7 4%\n23 .0\n8% 30\n.0 0%\n26 .0\n9% 32\n.2 0%\n66 .0\n0% 43\n.2 8%\n17 .1\n4% 36\n.5 0%\n23 .3\n2% 8.\n51 %\n20 .0\n0% 11\n.9 4%\nTa bl\ne 4:\nPr ec\nis io\nn, re\nca ll\nan d\nF1 sc\nor es\nof di\nff er\nen tm\nod el\ns on\nT m\nV ar\nda ta\nse t.\nB ol\nd m\nar ks\nth e\nhi gh\nes tfi\ngu re\n,u nd\ner lin\ne m\nar ks\nth e\nse co\nnd -h\nig he\nst fig\nur e.\n1W ay\n-5 Sh\not -5\nE po\nch 1W\nay -5\nSh ot\n-1 0E\npo ch\n1W ay\n-1 Sh\not -5\nE po\nch 2W\nay -1\nSh ot\n-5 E\npo ch\n2W ay\n-1 Sh\not -1\n0E po\nch 2W\nay -5\nSh ot\n-5 E\npo ch\nPr ec\n. R\nec .\nF1 sc\nor e\nPr ec\n. R\nec .\nF1 sc\nor e\nPr ec\n. R\nec .\nF1 sc\nor e\nPr ec\n. R\nec .\nF1 sc\nor e\nPr ec\n. R\nec .\nF1 sc\nor e\nPr ec\n. R\nec .\nF1 sc\nor e\nVa ni\nlla N\nE R\nm od\nel s\nB iL\nST M\n+C R\nF 10\n.0 0%\n10 0.\n00 %\n18 .1\n8% 1.\n00 %\n90 .0\n0% 19\n.7 8%\n4. 00\n% 10\n0. 00\n% 7.\n69 %\n0. 00\n% 0.\n00 %\n0. 00\n% 2.\n00 %\n10 0.\n00 %\n3. 
92\n% 5.\n00 %\n80 .0\n0% 04\n.1 2%\nB E\nR T\n+C R\nF 4.\n00 %\n10 0.\n00 %\n7. 32\n% 1.\n00 %\n90 .0\n0% 0.\n20 %\n4. 00\n% 10\n0. 00\n% 7.\n32 %\n0. 00\n% 0.\n00 %\n0. 00\n% 0.\n00 %\n0. 00\n% 0.\n00 %\n2. 00\n% 78\n.0 0%\n4. 09\n%\nFS L\nm od\nel si\nn ge\nne ra\nl\nM at\nch in\ng N\net w\nor k\n8. 47\n% 10\n0. 00\n% 15\n.6 3%\n4. 93\n% 10\n0. 00\n% 9.\n39 %\n6. 67\n% 10\n0. 00\n% 12\n.5 0%\n1. 01\n% 40\n.0 0%\n1. 96\n% 2.\n84 %\n90 .0\n0% 5.\n51 %\n3. 13\n% 90\n.0 0%\n6. 04\n%\nR el\nat io\nn M\nod el\n9. 12\n% 90\n.0 0%\n16 .5\n6% 4.\n35 %\n90 .0\n0% 8.\n30 %\n6. 67\n% 10\n0. 00\n% 12\n.5 1%\n1. 41\n% 40\n.0 0%\n2. 72\n% 3.\n12 %\n95 .0\n0% 6.\n04 %\n4. 41\n% 80\n.0 0%\n8. 36\n%\nPr ot\no N\net w\nor k\n11 .1\n1% 20\n.0 0%\n14 .2\n9% 5.\n33 %\n40 .0\n0% 9.\n41 %\n5. 15\n% 10\n0. 00\n% 9.\n80 %\n3. 05\n% 90\n.0 0%\n5. 90\n% 2.\n75 %\n95 .0\n0% 5.\n34 %\n2. 99\n% 60\n.0 0%\n5. 69\n%\nPr ot\no+ D\not 16\n.2 0%\n25 .0\n0% 19\n.6 6%\n6. 89\n% 36\n.0 0%\n11 .5\n8% 8.\n12 %\n90 .0\n0% 14\n.9 0%\n7. 21\n% 90\n.0 0%\n13 .3\n5% 2.\n75 %\n95 .0\n0% 5.\n34 %\n12 .0\n8% 60\n.0 0%\n20 .1\n1%\nPr ot\no+ A\ntt 18\n.1 2%\n20 .0\n0% 19\n.0 1%\n5. 82\n% 40\n.0 0%\n10 .1\n6% 7.\n19 %\n10 0.\n00 %\n13 .4\n2% 6.\n28 %\n10 0.\n00 %\n11 .8\n2% 3.\n56 %\n96 .0\n0% 6.\n87 %\n13 .0\n1% 50\n.0 0%\n20 .6\n5%\nC R\nFba\nse d\nm od\nel s\nC O\nN TA\nIN E\nR (D\nas et\nal .,\n20 21\n) 24\n.8 4%\n32 .7\n7% 28\n.2 6%\n18 .5\n6% 55\n.3 1%\n27 .7\n9% -\n- -\n17 .6\n5% 13\n.2 2%\n15 .1\n2% 21\n.0 4%\n23 .4\n1% 22\n.1 6%\n- -\n-\nFE W\n-N E\nR D\n(D in\ng et\nal .,\n20 21\n) 19\n.2 7%\n51 .3\n0% 28\n.0 2%\n16 .1\n6% 54\n.7 1%\n24 .9\n5% 21\n.4 5%\n37 .4\n5% 27\n.2 8%\n12 .7\n5% 20\n.2 7%\n15 .6\n5% 14\n.8 0%\n54 .7\n2% 23\n.2 1%\n10 .2\n5% 36\n.2 4%\n13 .8\n6%\nD M\nL (M\na et\nal .,\n20 22\n) 25\n.8 0%\n30 .1\n2% 27\n.7 9%\n20 .1\n2% 48\n.9 4%\n28 .5\n2% 33\n.8 0%\n16 .6\n7% 22\n.3 3%\n6. 12\n% 66\n.6 7%\n11 .2\n1% 17\n.4 3%\n33 .2\n4% 22\n.8 7%\n7. 24\n% 34\n.1 2%\n11 .9\n5%\nO ur\nm od\nel 50\n.0 0%\n20 .0\n0% 28\n.5 0%\n50 .0\n0% 20\n.0 0%\n28 .5\n0% 50\n.0 0%\n20 .0\n0% 28\n.5 0%\n50 .0\n0% 10\n.0 0%\n16 .6\n7% 33\n.3 4%\n20 .1\n7% 25\n.0 0%\n50 .0\n0% 10\n.0 0%\n16 .7\n0%\nTa bl\ne 5:\nPr ec\nis io\nn, re\nca ll\nan d\nF1 sc\nor es\nof di\nff er\nen tm\nod el\ns on\nB R\nO N\nC O\n& E\nM U\nda ta\nse t.\nB ol\nd m\nar ks\nth e\nhi gh\nes tfi\ngu re\n,u nd\ner lin\ne m\nar ks\nth e\nse co\nnd -h\nig he\nst fig\nur e."
|
| 24 |
+
}
|
ACL_23_no_limitation/ACL23_1278.json
ADDED
|
@@ -0,0 +1,14 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1278",
|
| 3 |
+
"Title": "Exploring Drug Switching in Patients: A Deep Learning-based Approach to Extract Drug Changes and Reasons from Social Media",
|
| 4 |
+
"abstractText": "Social media (SM) can provide valuable information about patients’ experiences with multiple drugs during treatments. Although information extraction from SM has been well-studied, drug switches detection and reasons behind these switches from SM have not been studied yet. Therefore, in this paper, we present a new SM listening approach for analyzing online patient conversations that contain information about drug switching, drug effectiveness, side effects, and adverse drug reactions. We describe a deep learning-based approach for identifying instances of drug switching in SM posts, as well as a method for extracting the reasons behind these switches. To train and test our models, we used annotated SM data from internal dataset which is automatically created using a rule-based method. We evaluated our models using Text-to-Text Transfer Transformer (T5) and found that our SM listening approach can extract medication change information and reasons with high accuracy, achieving an F1-score of 98% and a ROUGE-1 score of 93%, respectively. Overall, our results suggest that our SM listening approach has the potential to provide valuable insights into patients’ experiences with drug treatments, which can be used to improve patient outcomes and the effectiveness of drug treatments.",
|
| 5 |
+
"1 Introduction": "SM platforms (e.g., Twitter, Facebook, forums) have been widely used for health-related purposes, to share and exchange experiences about drugs, treatments and diagnosis or to interact with other patients with similar health conditions in online communities. They provide a unique opportunity to observe patient experiences with medication in realworld settings (Colón-Ruiz and Segura-Bedmar, 2020; Garg, 2021; Ali et al., 2021).\nDetecting drug switches and reasons, where patients switch from one medication to another one,\ncan provide valuable insights into medication efficacy, adverse drug reactions, side effects, and patient preferences. A drug switch refers to the substitution of a prescribed medication with a similar drug (Glerum et al., 2020). By monitoring medication switches, researchers and drug companies can gain a deeper understanding of patient experiences with medications and make more informed decisions about treatments. While real-world claims data (e.g., IQVIA claims data) gives medication switch information, it does not provide the reasons of the drug switches. To make use of this amount of user-generated data, it is essential to extract structured data from unstructured information (Badieh Habib Morgan and van Keulen, 2014). Information extraction (IE) is the research domain dedicated to achieve this goal, enabling the use of such a vast amount of unstructured information in a structured and organized manner (Sarrouti et al., 2021a, 2022). While there have been numerous studies examining IE from SM platforms (Liu and Chen, 2013; Denecke and Denecke, 2015; Jenhani et al., 2019; Nemes and Kiss, 2021; Wu et al., 2021; Tu et al., 2022), to the best of our knowledge, there is no study that investigates drug switching in patients and the underlying reasons for such changes through SM analysis. Therefore, our study aims to fill this knowledge gap by providing insights for healthcare professionals and decision-makers to better understand the factors that drive drug switching behaviors among patients. To achieve this, we present an SM listening approach which aims at (1) determining whether a medication switch has occurred based on two drug names mentioned in an SM post, and (2) extract and classify the reasons (e.g., the effectiveness of the drug, adverse reactions, etc.) for the medication change. Our experiments showed that fine-tuning T5 on rulebased annotations achieved good performance (an F1-score of 98% for drug switch detection, and a ROUGE-1 score of 93% for IE and classification).\n127",
|
| 6 |
+
"2 Related work": "Over the last two decades, there has been a growing interest in extracting information from healthrelated SM posts using natural language processing (NLP), largely due to the widespread use and popularity of SM platforms. Chen et al. (2018) have shown that combining named-entity recognition with signal detection and topic modeling can be effective in extracting valuable insights from SM data related to health. In particular, they demonstrated that this approach was successful in detecting potential signals and gaining a better understanding of patients’ behaviors toward drugs, including instances of misuse. Lee et al. (2021) demonstrated that SM, in addition to traditional pharmacovigilance methods, can be utilized to identify potential signals related to new black box warnings, labeling modifications, or drug withdrawals. Although there are still some challenges to be addressed, the authors showed that SM can be a valuable tool for detecting signals associated with commonly mentioned drugs in specialized healthcare social networks and forums. To further advance the field, the authors suggested that additional research is necessary to improve NLP and effectively mine real-world data from SM platforms. Glerum et al. (2020) conducted a study to examine the occurrence of drug switches for certain active substances in the Netherlands. The goal was to gain insight into the use of generic drugs and the process of drug switching in the Netherlands, as well as the factors that influence it. To obtain information on drug switches, the author used in the claims database of the National Health Care Institute in the Netherlands (ZIN), which contains data on prescribed drugs that are dispensed by pharmacists or dispensing general practitioners.\nThe existing SM listening approaches do not detect drug switches and reasons behind these switches from SM. Therefore, we propose a deep learning-based approach to extract drug switches and different reasons behind these switches. Our approach uses rule-based annotations to train deep learning models. The deep learning model can extract more accurate information than rule-based annotations which are not scalable.",
|
| 7 |
+
"3 Our social media listening approach": "Figure 1 presents the flowchart of our SM listening approach which consists of two main components (1) drug switch detection, and (2) IE.",
|
| 8 |
+
"3.1 Drug switch detection": "Given an input SM post SMP consisting of n tokens, i.e., SMP = {w1, w2, ..., wn} and a pair of drug names (drug_a, drug_b) where drug_a ∈ SMP and drug_b ∈ SMP , the drug switch detection model is tasked with predicting the maximum probable label ŷ from the set of labels in annotated data, y ∈ {dsw, no_dsw}. \"dsw\" indicates a medication switch from Drug A to Drug B, and \"no_dsw\" indicates no medication change from Drug A to Drug B. The drug switch detection component is based on T5 (Raffel et al., 2020). The input sequence is “drug_a: [D1] drug_b: [D2] SM post: [SMP] relation: [r]”. We fine-tuned T5 to generate \"dsw\", \"no_dsw\" tokens.",
|
| 9 |
+
"3.2 IE": "Given an input SM post SMP consisting of n tokens, i.e., SMP = {w1, w2, ..., wn} and a drug name (drug_a) where drug_a ∈ SMP , the IE model is tasked with generating spans and their classification classes listed in Table 1. The IE component is also based on T5. The input sequence for the IE task is “drug_a: [D1] SM post: [SMP] classes and their spans: [CLASS: TEXT SPAN]”. We fine-tuned T5 to generate classification classes and text spans for each class listed in Table 1. Figure 2 shows an example of both the input and output of our model.",
|
| 10 |
+
"4.1 Datasets and processing": "We used internal datasets which contained SM posts that were automatically annotated using handwritten rules (e.g., the pattern {DrugName > 5 (negation_pos > 2 lemma_work)}, a drug name followed by a text span of 5 words or less away that includes a negation part-of-speech two words away or less from a lemma of the word “work” for extracting the text span of DEFF). The\nrules are based on distance of tokens, entities, and linguistic features such as lemma and POS tags. The datasets, which include SM posts from Facebook and forums, contain rule-based annotations such as text span and classification classes listed in Table 1. Figure 3 presents an example of pseudo SM post and rule-based annotations.\nIn order to detect drug switches, we used examples in our internal datasets as positive instances and automatically generated negative examples using predefined rules. This is because the datasets do not include negative examples. For negative examples, we applied the following criteria: (1) if an SM post mentions two drug names, drug_a\nand drug_b, but no drug switch occurs, then drug_a + drug_b + SMP is considered a negative example, and (2) if an SM post mentions drug_a and drug_b and there is a drug switch from drug_a to drug_b but not from drug_b to drug_a, then drug_b+drug_a+SMP is considered a negative example. These negative examples are created to consider the directionality in drug switching. The training, development and testing sets consist of 107,793, 11,977 and 13,308 annotated SM posts, respectively.\nFor IE, we used examples of SM posts, which included classes and text span (as listed in Table 1), as our training and testing instances. The training, development and testing sets consist of 426,361, 10,659 and 14,109 annotated SM examples, respectively.",
|
| 11 |
+
"4.2 Results": "To assess the effectiveness of the drug switch detection model within our SM listening approach, we used standard evaluation metrics such as precision, recall, and F1-score. Our results, as presented in Table 2 using T5, demonstrate that our model performs well in accurately identifying instances of drug switching in SM posts with an F1-score of 98%. Furthermore, Table 3 presents examples of drug switches, along with the corresponding posts we created and the model’s predictions. These examples show that our model is capable of detecting the directionality of drug switching, which is a valuable feature for understanding patient behavior.\nOn the other hand, to evaluate the effectiveness of our text span extraction model, we used the standard ROUGE metric (Lin, 2004) and reported the ROUGE-1, ROUGE-2, and ROUGE-L scores. The results, as presented in Table 4, show that our model is capable of accurately extracting the reasons behind medication changes in SM posts (a ROUGE-1 of 93%). Figure 2 shows an example of SM post and our model results. In this work, we only evaluated T5 for its effectiveness in various natural language processing tasks such as question answering (Sarrouti et al., 2021b) and text summarization (Yadav et al., 2021), etc. Models evaluation goes beyond the scope of the paper, which is introducing drug switch detection and extracting reasons behind the drug switches from SM.\nOn the other hand, we conducted some tests\nwith ChatGPT (Brown et al., 2020) and shared the results of an SM post along with two questions as shown in Figure 4. Our assessment showed that ChatGPT could manage both tasks. But, it might be expensive to apply it on a large amount of SM posts. It can also take time to process high-volume requests.\nOverall, our SM listening approach provides an effective way to identify drug switches from SM posts, and valuable insights into patient behavior and treatment preferences by extracting the reasons of drug switches. Our approach thus represents a significant contribution to the field of SM listening and healthcare research.",
|
| 12 |
+
"4.3 Error analysis": "We conducted a manual analysis of our test sets and compared the performance of our deep learningbased approach to rule-based annotations. Table 5 presents some examples.\nThe error analysis showed that our method was\nExample\n(1) Drug name: drug_a, SM Post: See the SM post presented in Figure 2. Rule-based annotations: PSMT: did not have any problems when I was taking drug_a, DSW: put him on drug_a, and within a month he started having some worst side effects such as muscle aches, difficulty in breathing and pain or tenderness around my eyes and cheekbones. With all these problems, the doctor switched him over to drug_b, NADR: did not have any problems when I was taking drug_a Our approach: PSMT: did not have any problems when I was taking drug_a, NSMT: drug_a, and within a month he started having some worst side effects such as muscle aches, difficulty in breathing and pain or tenderness around my eyes and cheekbones., DSW: put him on drug_a, and within a month he started having some worst side effects such as muscle aches, difficulty in breathing and pain or tenderness around my eyes and cheekbones. With all these problems, the doctor switched him over to drug_b, DSE: drug_a, and within a month he started having some worst side effects such as muscle aches, difficulty in breathing and pain or tenderness around my eyes and cheekbones, ADR: drug_a, and within a month he started having some worst side effects such as muscle aches, difficulty in breathing and pain or tenderness around my eyes and cheekbones, and NADR: did not have any problems when I was taking drug_a\n(2) Drug name: drug_a, SM Post: Hello, here is my short story: I am taking drug_b for now. Regarding drug_a, I don’t take it since I have panic disorder and agoraphobia. Although drug_b worked well for me, I am having mood changes. Rule-based annotations: NSMT: drug_a, I don’t take it since I have panic disorder DSE: drug_a, I don’t take it since I have panic disorderADR: drug_a, I don’t take it since I have panic disorder Our approach: NSMT: drug_a, I don’t take it since I have panic disorder and agoraphobia DSE: drug_a, I don’t take it since I have panic disorder and agoraphobia ADR: drug_a, I don’t take it since I have panic disorder and agoraphobia\n(3) Drug name: drug_a, SM Post: I was on drug_b and drug_a for a long time, and never had hair loss (Male Hair). I started having hair loss with drug_c , I am having some problems like nausea and no appetite. Drug_b did not help me with seizures. But drug_a helped me a lot and was able to control seizures , but did nothing for nausea. On the other hand, drug_c has helped me with clonic seizure. Rule-based annotations: NSMT: drug_a for a long time, and never had hair loss (Male Hair). I started having hair loss with drug_c , I am having some problems like nausea DSE: drug_a for a long time, and never had hair loss (Male Hair). I started having hair loss with drug_c , I am having some problems like nausea and no appetite DSE: drug_a for a long time, and never had hair loss (Male Hair). I started having hair loss with drug_c , I am having some problems like nausea DSE: drug_a for a long time, and never had hair loss (Male Hair). I started having hair loss ADR: drug_a for a long time, and never had hair loss (Male Hair). 
I started having hair loss with drug_c , I am having some problems like nausea and no appetite Our approach: NSMT: drug_a helped me a lot and was able to control seizures, but did nothing DNEFF: drug_a helped me a lot and was able to control seizures, but did nothing\nTable 5: Examples of pseudo SM posts, rule-based annotations and our model output.\nable to extract more information and identify additional classification classes and spans. For example, in example #1, our model identified six classes (PSMT, NSMT, DSE, DSW, ADR, and NADR) while the rule-based annotations only had three (PSMT, DSW, and NADR). Our model was also able to address conflicting sentiments about the same drug, such as in example #1 where PSMT and NSMT spans about drug_a were correctly identified.\nIn addition, the error analysis showed that our approach accurately extracted the corresponding spans for each class. For example, in example #2, the rule-based annotations missed the span for \"Agoraphobia\" due to an incomplete dictionary or distance length restrictions, while our model was able to extract it. Additionally, our model was able to handle the challenge of multiple drugs with different spans within the same SM post and accurately extract the corresponding spans for a given drug name. In example #3, rule-based annotations erroneously added information related to drug_b to drug_a, while our model correctly identified the text span for each drug.",
|
| 13 |
+
"5 Conclusion": "In our paper, we presented our SM listening approach to extract valuable insights from patients’ conversations and understand the reasons why patients switch drugs during treatment. To achieve this, we developed a drug switch detection model that can determine whether a drug switch has occurred by analyzing mentions of two drug names in an SM post. Furthermore, we described an IE model that can extract the reasons for the medication change, such as adverse reactions, side effects, the effectiveness of the drug, etc. The results showed that our approach achieved good performance in drug switching detection and IE tasks."
|
| 14 |
+
}
|
ACL_23_no_limitation/ACL23_1281.json
ADDED
|
@@ -0,0 +1,35 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1281",
|
| 3 |
+
"Title": "An end-to-end neural model based on cliques and scopes for frame extraction in long breast radiology reports",
|
| 4 |
+
"abstractText": "We consider the task of automatically extracting various overlapping frames, i.e, structured entities composed of multiple labels and mentions, from long clinical breast radiology documents. While many methods exist for related topics such as event extraction, slot filling, or discontinuous entity recognition, a challenge in our study resides in the fact that clinical reports typically contain overlapping frames that span multiple sentences or paragraphs. We propose a new method that addresses these difficulties and evaluate it on a new annotated corpus. Despite the small number of documents, we show that the hybridization between knowledge injection and a learning-based system allows us to quickly obtain proper results. We will also introduce the concept of scope relations and show that it both improves the performance of our system, and provides a visual explanation of the predictions.",
|
| 5 |
+
"1 Introduction": "In this study, we will address the task of structuring breast radiology reports.1 This end to end task consists in extracting different structured entities (frames) of various types, each one composed of multiple labels and multiple mentions organized as a list of fields. The creation of a frame is triggered by a \"trigger\" mention, and some of its fields may be justified by \"attribute\" mentions. An example of structured lesion frames extracted from a fictitious document is illustrated in Figure 1 and Table 1, using the scheme described in Appendix A. A first difficulty is related to the size of the documents: frames can group mentions that span several sentences or paragraphs. The second difficulty is related to overlapping frames that may share multiple mentions and even trigger mentions.\n1This study was approved by the institutional review board at APHP (CSE 190022) as part of the EZMammo project. Only previously pseudonimized documents were used in this study (Paris et al., 2019).\nTherefore, modelling the relations between different mentions is necessary to distinguish between multiple overlapping frames, such as Frames 1 and 3 in the example. Our contributions 2 are the following: • an end-to-end model to extract frames from texts • a clique-based method for dealing with overlaps • the concept of scope-relations to group mentions",
|
| 6 |
+
"2 Related works": "The extraction of structured information from clinical reports has been the subject of many studies. Most of these works are not specific to breast imaging reports and the objectives vary greatly, in terms of their scope, granularity and form. Interested readers can refer to existing surveys on the state of NLP in radiology reports (Bitterman et al., 2021; Miwa et al., 2014). Several works are only concerned with the extraction of a few report-level attributes, and therefore view the task as a classification or term extraction task for items such as ACR scores, histological grade or primary site of\n2An implementation of the model presented in this article can be found at https://github.com/percevalw/ breast-imaging-frame-extraction\n156\nlesions (He et al., 2017; Qiu et al., 2018; Alawad et al., 2018; Moore et al., 2017; Castro et al., 2017). Other features have also been the subject of specialized systems such as locations (Datta et al., 2020). An extensive survey of the different systems proposed for different features was conducted by Datta et al. (2019). Other works have sought to produce a more detailed and global extraction, and to detect several types of entities at the same time. The earliest work was the one of Taira et al. (2001), who proposed a frame based representation and method for annotating abnormal findings, anatomy, and medical procedures frames in radiology reports. Lacson et al. (2015) used a rule-based system and terminologies to extract abnormal findings and ACR scores. The DeepPhe system was proposed by Savova et al. (2017) as a fully integrated software built on cTakes (Savova et al., 2010) to extracts document and patient level cancer summaries (akin to frames) in clinical reports. Steinkamp et al. (2019) proposed a fact-based scheme, in which each fact is structured around an anchor and may contain modifiers. However, their model makes the assumption that all the mentions that characterize an entity are adjacent inside the fact span. Several methods decompose the problem into a first NER step followed by a relation detection step that allows arguments to be non adjacent. Roberts et al. (2019) proposed a frame based scheme for annotating cancer information in clinical reports and a method to perform the prediction (Si and Roberts, 2018). Their method first extracts triggers and modifiers with a NER system, and predicts their relations to form frames, but makes the assumption that there is no overlap between the different entities. Recently, a more complex scheme has been proposed by Jain et al. (2021) to annotate nested relationships between different entities. However, this work does not specifically address the case of\ncomplex or distant relations between entities. The closest task to ours is the one of Event Extraction, in which models extract one event per trigger mention, and look for related mentions that might be part of the same event. However, trigger mentions (e.g., the first [cysts] in Figure 1) may belong to different overlapping frames (the 3cm 8 o’clock lesion frame and the 2cm 6 o’clock lesion frame in Figure 1) that can only be distinguished by considering relations between their attribute mentions ([2cm], [3cm], [8 o’clock radius], and [6 o’clock radius]). To address this issue, an approach consists in listing all the possible combinations of mentions, then filtering them with a classifier (Miwa et al., 2010; Heimonen et al., 2010; Björne and Salakoski, 2011, 2013, 2015; Liu et al., 2015; Trieu et al., 2020). 
However, this solution becomes computationally unsatisfactory when the number of mentions that compose a frame grows.",
|
| 7 |
+
"3 Method": "We now detail a neural network based end-to-end method to automatically extract frames from clinical reports. We encode each document as word embeddings and share these with the downstream decoding components. Like most relation and event extraction models, our model operates as a pipeline. As illustrated by Figure 2, the first two mentionlevel decoders extract the named entities, or mentions (step 1 ), that are likely to be used in the composition of structured entities, and normalize them (step 2 ) to obtain the value of the field they apply to. The next two decoders focus on frame-level extractions. The frame extraction decoder (step 3 ) extracts groups of mentions to form frames. For each frame, the last frame classification decoder (step 4 ) predicts the values of the fields for which no explicit mention was found, such as a past temporality, which could be indicated by a verb tense.\nand 4 predict the values of the fields for which no explicit mention was found",
|
| 8 |
+
"3.1 Text encoder": "Our documents are written in French, therefore we use a pretrained CamemBERT model (Martin et al., 2020). To reduce the sequence size and ensure that the NER step does not predict boundaries inside words, we average the wordpieces embeddings of a word to obtain one embedding per word. Moreover, we split the document into sentences using a regular expression. We add the left and right contexts (\"document context\") of each sentence before running it through the Transformer, up to a maximum total number of wordpieces, as it proved useful in other studies (Devlin et al., 2019; Kantor and Globerson, 2020; Yu et al., 2020; Schweter and Akbik, 2020; Luoma and Pyysalo, 2021). Next, we apply multi-layer BiLSTM on the concatenation of the BERT embeddings generated for each sentence of the document. BERT models usually focus on sentences and replace the \"line break\" character by a single space. To keep this informative token in our long clinical documents, we replace all line breaks with the rarely used \"_\" character.",
|
| 9 |
+
"3.2 Mention recognition and normalization": "We use a sequence labeling NER model based on the BIOUL tag scheme with multiple parallel CRF layers. Each independent layer is responsible for the extraction of entities of a given type, such that predicted entities of different types may overlap.\nEach mention is then classified, or normalized, to obtain the values of the fields to which it applies. A subset of the available values for each field/mention is given in Table 7.\nFor example, \"bilateral\" is normalized as both \"left\" and \"right\". The allowed multi-label combinations is defined manually. We compute a maxpooled representation for each mention m and project it to obtain one score per label:\nscorelabel(m) = V label · maxpool w∈words(m) E(w)\nFinally the score of each possible legal label combination Lmention is computed as the score of the labels present in the combination. The probability is computed by normalizing over all legal combinations.\nTo be processed in the next layers, each mention is represented by the average embedding of its words.",
|
| 10 |
+
"3.3 Frame extraction": "We now seek to extract the frames, that is, group the extracted named entities together. We will now describe a method to overcome the previously discussed issue of overlapping frames. The overall frame extraction component and its training procedure are described in Figure 3.",
|
| 11 |
+
"3.3.1 Clique extraction": "Our approach consists in answering the following question for each pair of mentions: \"are these two mentions part of the same frames ?\" We can then extract maximal cliques of entities, i.e., groups in which mentions agree with each other on belonging to the same frame. For two mentions u and v, we compute the score r(u, v) computed by u of\nv belonging to its frames, and the score r(v, u) computed by v of u belonging to its frames: the final agreement score between the two mentions is the maximum\nR(u, v) = max T r = max(r(u, v), r(v, u))\nmeaning that one of the two mentions can be uncertain about the relationship. At this point, we could have assumed a symmetric function for r to avoid the max computation, but as we will see in the next sections, both biaffine and scopes scores are asymmetric.",
|
| 12 |
+
"3.3.2 Biaffine relation scores": "A simple baseline to compute r(u, v) consists in a biaffine model. In our case, we compute this score as an attention score between the mentions representations. Additionally, we inject the relative distances between mentions inside the attention mechanism using a similar mechanism to He et al. (2020). This attention is the sum of a contentcontent attention (the original dot product attention of Vaswani et al. (2017)), a content-position attention and a position-content attention.",
|
| 13 |
+
"3.3.3 Scope relation scores": "We propose another approach for the same relation extraction task, based on the concept of scopes. Scopes are annotations of contiguous text zones on which a named entity referred to as a \"cue\" applies its meaning. Scopes have been mostly studied in the context of negation and uncertainty detection (Vincze et al., 2008; Li and Lu, 2018; Dalloux et al., 2020; Khandelwal and Sawant, 2020). We extend this concept to all types of named entities and make it the primary mode of relation extraction in our task. Indeed, it may be simpler for the model to detect where the scope of a mention starts and stops, and to retrieve all entities between these boundaries, than inferring the value of the relation for each pair of mentions. In the example of Figure 1, the scope of laterality [Left] covers all the section and therefore applies its effect to all frames composed of these mentions.\nFor the mathematical details of our formulation, we will call u and v two mentions, and t a word. Each scope is represented with the BIOUL format. We compute two attention matrices SB(u, t)\nand SL(u, t) between the mentions and words, using the relative attention mechanism previously described to obtain start (B) and end (L) scope scores for each word. We constrain B to be before the mention and L after. The score SU of the tag U (scope that only contains one word) can be computed as the sum of the start and end scores, and the scores SI and SO of I and O tags are set to zero and will be inferred by a CRF layer (Lafferty et al., 2001).\nTo predict if a word is in the scope of a mention, i.e., is labeled I, B, L or U, we compute the marginalized probabilities Sm··· of a CRF with the forward-backward algorithm on the scope of each mention. The Scope CRF is parameter-less but illicit transitions (such as I −→ B or L −→ I) between tags are prevented, i.e., all CRF weights are 0 or −∞. The score rscope(u, t) of each word t being in the scope of u is therefore: rscope(u, t) = ln [ eS m B + eS m L + eS m U + eS m I − eSmO ]\nwith SmBIOUL(u) = ForwardBackward(SBIOUL(u))\nand, the score rscope(u, v) of v being in the scope of u, i.e., the average of the scores of each word of v of being in the scope of u: 1|v| ∑ t∈v r\nscope(u, t) Using a CRF allows us to never explicitly compute the score that a word is in the scope of a given mention. Instead, we let the network predict the start and end of the scope for each mention and use the CRF to \"paint\" the inside of the scopes in a differentiable way.",
|
| 14 |
+
"3.3.4 Score combination": "The scope relation and biaffine relation scores are combined together. Because we defined scopes as being continuous spans of text, it is possible that a mention falls in the scope of another mention and yet does not belong to its frame. In the example \"Mammography: we find the left mass biopsied in 2010. Nothing else in the right breast.\", the scope of [Mammography] contains the temporality [2010] but the two mentions are not part of a same frame. Therefore, a relation between two mentions is only predicted if both components (biaffine-based and scope-based) predict this relation, which we formulate as r(u, v) = min(rscope(u, v), rbiaffine(u, v)).",
|
| 15 |
+
"3.3.5 Frame relation supervision": "Training the frame extraction module raises several difficulties. For two compatible mentions u and v,\nwe supervise R with the supervision matrix Rtarget via a binary cross-entropy loss.\nRtarget(u, v) = { 1 if u and v are linked 0 otherwise\n(1)\nThe score R(u, v) is the result of the maximum of a matrix r(u, v) and its transpose, which, from a scope perspective, means that one mention can be within the scope of another without the reverse being true. This non-differentiable maximum can be hard to learn for the model.\nFor this reason, we propose to supervise one of the two relation directions scores (i.e., r(u, v) or r(v, u)) specifically, instead of the maximum, with the asymmetric target matrix rtarget(u, v). The difference between these two supervision modes is illustrated at the top of the Figure 3. If the two mentions u and v are not part of the same frames, both directions scores should be negative, since max(r(u, v), r(v, u)) = R(u, v) < 0. However, if the two mentions share the same frames, we \"explore\" the two different supervision directions by performing stochastic sampling of rtarget, according to a categorical distribution parameterized by the relation probability computed by the model:\n[rtgt(u, v), rtgt(v, u)] ∼ softmax(r(u, v), r(v, u))\nThe model should explore a few ways of arranging the scopes at the beginning of the training when the probabilities are close to 0.5, and stick to a strategy that leads to low entropy of the above distribution as the training progresses and its confidence increases in either direction.",
|
| 16 |
+
"3.3.6 Supervision heuristics": "We also experiment with heuristics in the supervision matrix rtarget(u, v). If u belongs to strictly more frames than v, we maximize r(u, v). If both belong to the same number of frames, we choose the direction that leads to the smallest number of wrong erroneous memberships due to the contiguity of scopes. Finally, if no heuristic can be applied, we sample a direction as previously described.",
|
| 17 |
+
"3.3.7 Word-level scope supervision (WSS)": "Finally, we also propose to supervise the scopes scores rscope(u, t) directly using partial word-level annotation rWSS generated from the rtarget matrix, as illustrated on the left side of Figure 3. Indeed, using the rtarget matrix for a given mention u, we can determine which words t of other mentions\nshould be contained in its scope, which words of other mentions should not, and which words are not supervised. Because scopes are contiguous, if a mention v that is not part of the frame of u is contained within its partially supervised scope, i.e., if it is between two mentions that belong to the scope of u, we do not supervise its words and leave the biaffine component handle the non-relation detection.",
|
| 18 |
+
"3.4 Frame classification": "Some labels of a frame such as its temporality or laterality may not be explicitly supported by a mention. Each frame is therefore fed through a constrained multi-label classifier. We represent each frame by an embedding computed as a projection of the max-pooling output of the embeddings of its mentions, and then project it to give a score per label. The score of a label combination is computed as the score of the labels in the combination. The probability is computed by normalizing over all legal combinations. During prediction, the label combinations are filtered to keep only those that contain the normalized labels of the mentions in the frame.",
|
| 19 |
+
"3.5 Optimization": "The different components are trained jointly. We use the CRF Forward algorithm to compute the NER loss, cross-entropy to compute the mention normalization and the frame classification losses. The frame extraction decoder relation loss Lrelation is the sum of binary cross entropy for every valid supervised mention-mention pair and the partial word-level supervision Scope CRF loss Lscope is the CRF Forward algorithm. The losses are combined into a weighted average, the specifics of which are detailed in Appendix B.",
|
| 20 |
+
"3.6 Knowledge injection": "Data augmentation We augment the training data in two ways. First, we randomly extract parts of documents such that no frame is cut, and add them as new documents to the dataset. This is somewhat akin to sentence splitting, but for multisentence entities. Second, we build synthetic sentences from a manually pre-defined lexicon of mentions, and add these sentences as NER samples to the dataset. The sentence creation process is the following: we randomly pick a synonym from the lexicon such as [ACR 6] and insert it in a randomly picked context from a predefined list such as\n\"There is {} .\" to generate \"There is [ACR 6].\" The documents generated from these augmentations are mixed with the original documents such that every batch approximately contains 13 of each (original, doc parts and lexicon sentences).\nOutput constraints Some background knowledge can be injected by constructing rules such as the fact that \"left\" and \"right\" are exclusive, or the fact that a mammogram is always performed on the breasts. During the frame extraction step, relations between mentions that cannot be part of the same frame are filtered out during learning and prediction. Similarly, as mentioned in Section 3.2, illegal label combinations are filtered out during training and prediction. This filtering reduces the number of possibilities that the model must evaluate, and alleviate the need for the model to \"learn\" the annotation scheme.",
|
| 21 |
+
"4 Experiments": "We evaluate our proposed approach on the test set of a new annotated dataset described in Appendix A, and perform several ablation experiments to investigate the design choices of our model. The dataset is composed of 120 French breast imaging clinical reports annotated with frames. There are five types of objects: ACR (cancer risk) scores, breast density scores, diagnostic procedures, therapeutic procedures and lesions. The document-level statistics are detailed in Table 2. The model is evaluated with 3 retrieval metrics: the mention metrics evaluate the mention and normalization prediction with approximate boundaries, the Frame Support evaluates the frames through their mentions, and the Frame Label evaluates them through their labels. These metrics are further described in the following section.",
|
| 22 |
+
"4.1 Relaxed retrieval metrics": "We use three metrics to evaluate the predictions at the mention and frame level, and provide algorithms to compute them in Appendix C.\nUnlike the exact match NER metric for which a true positive is unambiguously counted when two elements of the predicted and gold entities match, defining and computing relevant metrics between more complex sets of objects becomes more difficult as the number of element attributes increases. One option is to lower the minimum similarity threshold required between predicted and gold features to account for small errors such as mismatch between mention boundaries. However, this leads to ambiguities in the metric computation, since several predicted elements may match a single gold element, and vice versa. We explicitly formulate a greedy matching procedure to compute a maximum bipartite greedy match between the elements of two sets, in the algorithm 1 to avoid double counting true positives.\nThe NER metric (Algorithm 2) uses a score function that returns 1 if the Dice overlap of words in two mentions is higher than 0.5. The procedure is described in the Algorithm 2.\nThe Frame Support metric (Algorithm 3) scores a pair of two frames with a non-zero match score if some of their mentions overlap, and a perfect score if all their mentions overlap, and 0 otherwise. This score between 0 and 1 is the Dice/F1 overlap between the mentions of the two frames. It is used as a \"relaxed\" true positive when computing the retrieval metrics.\nFinally, the Frame Label metric (Algorithm 4) scores a pair of two frames with a matching score of 1 if their labels match and their trigger mentions overlap, and 0 otherwise. This score is used as a true positive when computing retrieval metrics.",
|
| 23 |
+
"4.2 Experimental setup": "Hyperparameters were manually selected by trial and error on 20 documents from the training dataset, and the model were trained for 2000 steps with a batch size of 16. Hyperparameters are further described in Appendix B. All experiments were averaged on 3 runs.",
|
| 24 |
+
"4.3 Main results": "Table 3 shows the performance on the different types of frames. The model performs better for frames with fewer fields such as Cancer risks or Breast densities. We visualize the predicted scopes of the proposed model on the right side of Figure 4. We observe that the scopes coarsely follow the structure of the document, i.e., that the predicted boundaries are located at the beginning or\nthe end of the different sections. This observation suggests that our approach may effectively leverage the structure commonly found within clinical documents. It is worth keeping in mind that these scopes have only been supervised with the requirement that they contain or exclude certain mentions, and that no information regarding the precise location of their boundaries has been given. Moreover we note that the reading of these scopes gives a par-\ntial explanation of the predicted relations, whereas the outputs of relation prediction models are usually hardly explainable.",
|
| 25 |
+
"4.4 Impact of scopes": "Table 4 shows the effect of ablating the model scopes. In this configuration, the model can only predict the relations through the biaffine model. We can observe that ablating scopes results in an overall loss of 5.3 pt for the Frame Label metric and 4.9 pt for the Frame Support metric. We believe that this is due to the inability of standard neural components to reason with intervals, i.e., to answer queries such as \"what word is between these two words\". Given that scopes improve the quality of predictions, the question arises as to what kind of supervision is needed for learning them. As shown in Table 4, when the scopes are learned directly using word-level partial annotations, the model performs better than with distant supervision on the r(u, v) matrix. If we directly supervise the symmetric matrix R(u, v) instead of the asymmetric matrix r(u, v), the performance collapses and we lose between 10 and 15 pt for the Frame metrics. The learning of scopes must be hindered by the uncertainty related to the supervision of this matrix alone and the small amount of data. Interestingly, if we remove the relation supervision heuristic described in Section 3.3.6 and let the model explore different configurations on its own, the performance shown in Table 4 remains on par with the proposed approach. Since these heuristics aim at injecting information about the hierarchy of mentions and the structure of the text, this suggests that the model is able to infer this information itself.",
|
| 26 |
+
"4.5 Impact of the relative attention": "We evaluated the effect of the added information on the relative position of the word-mention and mention-mention attention mechanisms. From the\nTable 4, we can observe that this added information leads to a performance gain of 1.3 pt of F1 frame support and 1.8 pt of F1 frame label. Without it, a mention is \"positionally blind\" and must rely on the inductive bias of the LSTM to find its neighboring words or mentions. Therefore, we expected a larger drop in performance, especially in the context of long documents. Nevertheless, relative attention proves to be an effective way to improve performance.",
|
| 27 |
+
"4.6 Impact of the size of the training data": "Figure 5 shows the overall performance of the model when trained with different numbers of annotated samples. On one hand, we can note that our system requires only a small amount of documents to achieve \"correct\" accuracy, i.e., it can be used to pre-annotate more documents. This \"data efficiency\" is important when tackling new domains in order to allow quick feedback and possible changes regarding the annotation scheme. However, given the complexity of the task and the evolution of performance with the training set size, we also note that a large number of annotated documents might be needed to approach a perfect score.",
|
| 28 |
+
"4.6.1 Impact of the augmentations": "We remove the augmented samples from the training data and show the effect on performance in Table 5 and Figure 5. We observe that adding synthetic lexicon sentences only slightly helps improving the model mention detection performance (+0.3 pt). However, this improved performance has a larger effect of 1.5 pt on the Frame Label metric. This is typical of the phenomenon of error propa-\ngation, since a missing or mislabelled mention can have an effect on multiple frames.\nAs we reduce the number of annotated documents in the training set, the effect of augmentation becomes more important, and with only 4 annotated documents we obtain an average performance of 89.4 F1 in mention extraction versus 81.1 F1 without, and an average performance of 45.7 F1 in Frame Label F1 versus 34.7 without. Finally, we can see that a model trained only with synthetic sentences, i.e., 0 training document in Figure 5) already obtains decent retrieval performances, which is valuable when tackling a new domain with unlabeled data only. The non-zero Frame metrics can be explained by the presence of frames containing only one mention, and the constraints preventing the system from predicting illicit label combinations.",
|
| 29 |
+
"4.6.2 Impact of constraints": "We train the model without the constraints described in section 3.6 (but we still apply these constraints during the evaluation phase to avoid illicit predictions). In this configuration, the model learns that each pair of mentions is legal. We observe in Table 5 this leads to a loss of 2.3 pt in the Frame label F1-score and 1.3 pt in the Frame support F1score. This can be explained by the fact that the model has to \"learn\" the annotation scheme and its inevitable imperfect representations of the reports. These constraints can also help the model focus on the actual uncertainties of the task, and leave what\nis already known to the modeled constraints.",
|
| 30 |
+
"5 Acknowledgment": "We thank the clinical data warehouse (Entrepôt de Données de Santé, EDS) of the Greater Paris University Hospitals for its support and the realization of data management and data curation tasks.",
|
| 31 |
+
"6 Conclusion": "In this work, we presented a system for extracting structured entities from clinical breast radiology reports. We have shown that the addition of synthetic sentences can improve the performance in the context of a small amount of data. This information is valuable for the annotation and development of new information retrieval systems in other domains, where key words or phrases are known in advance. The method we described introduces the notion of frame extraction in the form of mention cliques, and we have shown that a formulation of the relation extraction task via scopes improves the performance of our system. Future work will evaluate this approach on other structured entity extraction tasks such as event extraction.",
|
| 32 |
+
"A Annotation scheme appendix": "We detail here the annotation scheme and the resulting dataset. We focus on entities related to therapeutic (e.g. surgery) or diagnostic (e.g. mammography) procedures, radiological observations (e.g. cysts or masses), and breast density or ACR (or BI-RADS) cancer risk scores. The relevant entities to extract were the result of discussions with a physician expert in the field. The annotation scheme itself was the result of many iterations between annotations and scheme revision. The corpus consists of 120 annotated clinical documents, 80 for the training set and 40 for the evaluation set. The document-level statistics are detailed in Table 2.\nA.1 Mention annotation\nFirst, we annotate several types of mentions, each justifying the value of a field in a frame. In our scheme, each mention has an effect that can be combined with other effects to describe an entity. Some mentions have the effect of justifying the existence of a frame: we will refer to these mentions as \"triggers\". Other mentions have the effect of specifying an attribute of an object: we will refer to them as \"attribute\" mentions. No frame is created if there is no trigger, even if several attributes are present. In the example 1, the trigger [Ultrasound] mention has the effect of creating at least one \"Diagnostic procedure\" frame, whereas the [millimetric] attribute has the effect of giving a size to the frames that it is part of.\nThe trigger mention types are ACR score, Breast density, Diagnostic procedure, Therapeutic procedure and Radiological lesion. The additional attribute mention types are Diagnostic procedure type, Therapeutic procedure type, Breast density type, ACR score type, Organ, Laterality, Temporality, Size, Distance, Angle and Breast quadrant.\nWe have chosen to annotate mentions describing attributes (such as laterality or size) even if they are not part of any frame. On the other hand, trigger mentions are not annotated if they do not justify the presence of an object. In the sentence \"No suspicious mass on the right\", only [right] is annotated as potentially justifying the laterality of an object, but not [mass] since it is preceded by a negation, and therefore does not justify the creation of any radiological lesion object.\nFinally, each mention is classified, or normalized, according to a predetermined set of values.\nFor example, a trigger mention \"Breast density\" may be labeled exclusively \"type 1\", \"type 2\", \"type 3\", \"type 4\". A laterality can take the values \"left\", \"right\", or \"left + right\".\nA.2 Frame annotation\nFrames describe conjunction of triggers and attributes that share their effect (or concept) on a given entity. In the above example, [8 o’clock radius] (applying an angle), [3cm] (applying a distance), [Left] (applying a laterality), [Breast] (applying an organ) and the trigger [cysts] (applying the effect of existing) share their respective effect on a same slice of an object. These mentions may be located in different sentences or paragraphs, and a field in a given frame may be justified by several mentions. On the other hand, if an object is described in several places in the text, we annotate it with several distinct frames. The notion of \"several places\" and the choice to split a same object into multiple frames is sometimes ambiguous. We choose to annotate a single frame for an object if it is described on several juxtaposed sentences, and split it into multiple frames otherwise. 
For instance, the [cysts] trigger is combined with the [nodules] trigger because they are found in juxtaposed sentences, and [nodules] is clearly referring to the previously mentioned [cysts].\nAll frames follow a specific scheme that constraints the set of labels and mentions (or effects) combinations. A summary of the frame schemes is shown in Figure 7. In practice, these constraints take the form of a list of 2502 label tuples that enumerates every possible mention / label combination. For example, a ACR Cancer Risk type 0 on the right breast at the time of the exam is described by the following tuple:\n(acr_trigger, acr_type_0, temp_overlap, organ_breast, lat_right)\nAs shown in the structured output 1 of example 1, five frames are annotated:\n• the ultrasound \"Diagnostic procedure\" frame for its left location, composed of the [Breast], [ultrasound] and [left] mentions on lines 1 and 2\n• the ultrasound \"Diagnostic procedure\" frame for its right location, composed of the [Breast], [ultrasound] and [right] mentions on lines 1 and 7\n• the first \"Finding\" frame of the first nodule, with two trigger mentions: [cysts] and [nodules] and attribute mentions [8 o’clock position], [3cm] and [millimetric] on lines 1, 2, 3, 4 and 5\n• the first \"Finding\" frame of the second nodule, with two trigger mentions: [cysts] and [nodules] and attribute mentions [6 o’clock position], [2cm] and [millimetric] on lines 1, 2, 4 and 5\n• the second \"Finding\" frame of both nodules in the conclusion: composed of the trigger [cysts] and the laterality [left] on line 11\nSince the mass negation on line 8 is not an indication of the presence of an object, we do not annotate it. The temporality of each frame overlaps the exam, although no explicit mention can\nsupport this fact, so we fill the temporality field of the frames with the value \"overlap\" and leave the justification empty.\nA.3 Object annotation\nFinally, the different frames are grouped into objects, although we do not extract them in the model presented in this study. Objects are union of frames. For a given set of concepts, multiple frames might be required to describe a same object. In the context of growing lesions, a union of multiple (temporality, size) conjunctions can represent the evolution. In an other setting with moving objects, a union of (temporality, localisation) labels could be used. In our case, as we represent lateralities with two exclusive \"left\" and \"right\" concepts, bilateral objects are described with two co-referent frames.\nIn the previous example, three objects are an-\nnotated, grouping two frames for the ultrasound procedure and two frames for each cyst. The last nodule frame in the conclusion is a case of plural coreference, since it its attributes apply to both objects. In this case, the frame describing several objects is added to each one. The statistics of objects in the annotated documents are described in Table 8. This step amounts to annotating coreferences between frames. We did not address this task of frame-coreference prediction in this study.\nA.4 Annotation process Clinical documents were de-identified automatically beforehand and the manual annotation was performed with BRAT (Stenetorp et al., 2012) by two annotators. 120 clinical reports were sampled from a from of query the APHP clinical data warehouse that combined the substrings \"mamm\" (to obtain breast related reports), \"ACR\" and \"BI?RADS\" (to obtain ACR scores). 
Some sampled\nreports were not breast radiology reports, yet we annotated and kept them as negative samples. Using the \"Event\" or \"Relation\" annotations in BRAT turned out to be impractical. We choose instead to annotate frames using a mix of identifier attributes (frame1, frame2, ...) on mentions, and relations on close-by mentions. Co-references, i.e., object annotation, were annotated using identifier attributes (objectA, objectB, ...) for the same reason. The BRAT annotations of Example 1 are shown in Figure 6. The direction of the annotated relations is only used to extract the paths along which the frames are clustered, but is not used as directed relation in our model, since it is not consistent.",
|
| 33 |
+
"B Hyperparameters appendix": "We optimize the model weights with the Adam optimizer (Kingma and Ba, 2015) without weight decay and use a first learning rate lrBERT, linearly decayed from 5 × 10−5 with a 10% warmup, for the pretrained CamemBERT (base) weights, and a second lrmain, linearly decayed from 5× 10−4, for the other parameters. The models were trained with a batch size of 16 for 2000 steps. The maximum wordpiece sequence size is 192, a dropout of 0.5 is applied on the output of BERT, and a dropout of 0.2 in the attention matrices computation. There\nare 3 BiLSTM layers of hidden size 200. The loss weights are set to αNER = 2, αnormalization = 1, αrelation = 1, αWSS = 1, αframe_classification = 0.5",
|
| 34 |
+
"C Relaxed retrieval metrics algorithms": "Algorithm 1 Procedure to compute the maximum sum of greedily matched items between two sets of predicted and gold items P and G according to the MATCH_SCORE function\n1: function MATCH_SUM(P, G, MATCH_SCORE) 2: scores← empty matrix . match scores 3: matched← {} . matched (p,g) items 4: result← 0 . aggregated score 5: for each predicted item p ∈ P do 6: for each gold item g ∈ G do 7: scores[p, g]← MATCH_SCORE(p, g) 8: while |P \\matched×G\\matched| > 0 do 9: Take the 1st p ∈ P\\matched\n10: g← argmaxg∈G\\matched(scores[p]) 11: if scores[p, g] > 0 then 12: result← result + scores[p, g] 13: matched← matched ⋃ {p, g} 14: return result\nAlgorithm 2 Procedure for the approximate mention retrieval metric\n1: function SCORE_NER(p, g) 2: . return 1 if p and g have a word dice\noverlap ≥ 0.5 and the same label, 0 otherwise\n3: return 2·|p.words ⋂ g.words| / (|p.words| + |g.words|) > 0.5 and p.label = g.label\n4: function HALF_NER(P, G) 5: tp← MATCH_SUM(P, G, SCORE_NER) 6: f1← 2 · tp/(|G|+|P|) 7: return f1\nAlgorithm 3 Procedure to compute the Frame Support retrieval metrics\n1: function OVERLAP(a, b) 2: . return 1 if a and b share a word and\nhave the same label, 0 otherwise\n3: function SCORE(p, g) 4: . return the Dice score between spans\n(=mentions) of p and g, between 0 if there is no overlap and 1 if all mentions match)\n5: tp← MATCH_SUM(p.spans,g.spans,OVERLAP) 6: return 2 · tp/(|g.spans|+ |p.spans|)\n7: function FRAME_SUPPORT(P, G) 8: . return the retrieval metrics, where relaxed\ntrue positives between P and G are computed with SCORE\n9: relaxed_tp← MATCH_SUM(P, G, SCORE) 10: f1← 2·relaxed_tp/(|G|+|P|) 11: return f1\nAlgorithm 4 Procedure to compute the Frame Label retrieval metrics\n1: function SCORE(p, g) 2: . return 1 if all labels of g are in p, all labels\nof p are in g or a non conflicting frame of the same object and triggers overlap, 0 otherwise\n3: function FRAME_LABEL(P, G) 4: . return the retrieval metrics, where true\npositives between P and G are computed with SCORE\n5: tp← MATCH_SUM(P, G, SCORE) 6: f1← 2 · tp/(|G|+|P|) 7: return f1"
|
| 35 |
+
}
|
ACL_23_no_limitation/ACL23_1285.json
ADDED
|
@@ -0,0 +1,15 @@
|
| 1 |
+
{
|
| 2 |
+
"File Number": "1285",
|
| 3 |
+
"Title": "ADEQA: A Question-Answer based approach for joint ADE-Suspect Extraction using Sequence-To-Sequence Transformers",
|
| 4 |
+
"abstractText": "Early identification of Adverse Drug Events (ADE) is critical for taking prompt actions while introducing new drugs into the market. These ADEs information are available through various unstructured data sources like clinical study reports, patient health records, social media posts, etc. Extracting ADEs and the related suspect drugs using machine learning is a challenging task due to the complex linguistic relations between drug – ADE pairs in textual data and unavailability of large corpus of labelled datasets. This paper introduces ADEQA, a questionanswer(QA) based approach using quasi supervised labelled data and sequence-tosequence transformers to extract ADEs, drug suspects and the relationships between them. Unlike traditional QA models, natural language generation (NLG) based models don’t require extensive token level labelling and thereby reduces the adoption barrier significantly. On a public ADE corpus, we were able to achieve state-of-the-art results with an F1 score of 94% on establishing the relationships between ADEs and the respective suspects.",
|
| 5 |
+
"1 Introduction": "Everyday hundreds of drugs are being introduced to the market. However, every drug has contraindications. A study conducted by (Hazell and Shakir, 2006) showed that 7000 deaths are being caused by Adverse Drug Events (ADE) annually. Organizations like the World Health Organization (WHO), the Food and Drug Administration (FDA), the European Medicines Agency (EMEA), and the Medicines and Healthcare products Regulatory Agency (MHRA) maintain a reporting system that enables individuals to spontaneously report the experienced adverse effects related to the use of medicines or healthcare products (Hazell and Shakir, 2006). Although these systems store the adverse event information in a structured format, a vast amount of information still remains in the\nunstructured textual data like clinical trial reports, patient health records, medical transcripts, social media posts, etc. It’s a tedious process to have humans go through each of these documents and record the mentioned adverse events and the related suspect drugs.\nWith the advancements in machine learning, specifically in the field of Natural Language Processing (NLP), information extraction models are being widely used to extract useful information from unconstrained texts. Such models can learn contextual patterns to identify and extract specific entities, after being trained using large corpus of annotated data. Similar approaches have been applied to extract ADEs and suspect drugs using Named Entity Recognition models (Wikipedia contributors, 2023). However, since the ADEs are semantically similar to any other unrelated symptoms, most often such models predict false positives. Hence, improving precision in this task depends on the ability to contextually relate the ADEs to the relevant suspect drug(s), instead of extracting the ADEs independently. Unfortunately, generalized extraction of relationships among entities in <subject, predicate, object>form is still a challenging task in the NLP ecosystem. In this work, we wanted to address this shortcoming by modelling the Drug-ADE relationship extraction task as Question Answering tasks.\nDeep Neural Network based, supervised NLP models require tens of thousands of annotated data to learn the contextual information and identify hidden patterns. For tasks like NER (Wikipedia contributors, 2023), annotation of text data is a critical prerequisite needing manual effort, coupled with domain knowledge. The classical approach of annotating entities with B-I-O offsets (Huang et al., 2015) increases the efforts further. Considering the ongoing exponential growth of data, annotating huge corpus of new data to train or retrain the models in future would be very expensive, if not\n206\nentirely impossible. This is where models which require light weight labelling come into the rescue. Sequence to sequence transformer (Vaswani et al., 2017) models like T5 (Raffel et al., 2020) can be trained by transforming the entities and relationships to be extracted into text sequences, so that these models can learn the contextual patterns to identify and generate the required entities, without any explicit token offset information.\nIn this paper, we intend to elaborate two approaches of modelling Drug – ADE relation extraction as Question-Answering solution using natural language generation (NLG) technique via sequence-to-sequence modelling. 
First approach is a two-step solution that first extracts Drugs and ADEs and subsequently confirms associations between them, while the second one directly discovers the potential Drug – ADE pairs from a given text. One of our approach achieved state-of-the-art F1 scores of 94% in establishing the relationships between ADEs and the respective suspects on the public ADE benchmark corpus (Gurulingappa H, 2012).",
|
| 6 |
+
"2 Related Works": "Several works have already been tried out for ADEsuspect identification task. Earlier approaches focused mainly on pipeline design with a NER model to extract entities and their offsets, followed by a relation classification model which takes two pair of entities and identify the relation between them (Gurulingappa H, 2012)(Li and Ji, 2014). With the advancements in deep learning, RNN based sequence models like LSTMs (Hochreiter and Schmidhuber, 1997), GRUs (Cho et al., 2014) started being applied on all NLP use cases. (Li et al., 2016) used a feed forward neural network to jointly extract drug-disease entity mentions and their relations.\n(Li et al., 2017) explored bidirectional LSTMs for learning entity representations from text sequences. They used Shortest Dependency Paths (SDP) between probable entities to identify related ADEs and suspects.\n(Ramamoorthy and Murugan, 2018) proposed a self-attention-based Bi-LSTM model for facilitating intra-sequence interaction in the given text sequence. The same work conceptually considered ADE extraction as a question answering problem, where the text sequence becomes the context and the drug whose adverse effects are to be predicted, becomes the query. However, rather than selecting an answer (adverse effect) from a vocabulary, they consider each token in the sequence as a potential ADE and embed this logic directly into the modeling than really having QA model. This adds additional computational complexity. Several other studies were also conducted using bidirectional LSTMs for the ADE-suspect extraction task (Sorokin and Gurevych, 2017)(Henry et al., 2019)(Christopoulou et al., 2019)(Lample et al., 2016)(Yang et al., 2018).\nAttention based models like transformers (Vaswani et al., 2017) which can learn contextual patterns efficiently have more or less replaced LSTM based models lately. (Wei et al., 2020)(Alimova and Tutubalina, 2020) applied pretrained BERT (Devlin et al., 2019) models for the ADE extraction task. (Wang and Lu, 2020) created shared layers between NER and RE model for joint ADE-suspect identification. Current stateof-the-art model on this task by Haq et al., (Haq et al., 2021) uses BioBERT (Lee et al., 2019) as the base in a NER-RE pipeline design with RE models placed sequentially after the NER model, and are fed the results of the NER model, the context, embeddings, and dependency tree for feature gen-\neration to classify the relationships. Multi-turn QA (Li et al., 2019) also casts the NER-RE problem as a multi-turn question answering task. MRC4ERE (Zhao et al., 2020b) improves on this question answering approach by leveraging a diverse set of questions. However, both the approaches consider deterministic methods for extracting the answers and uses BERT (Devlin et al., 2019) for modeling.\nSequence-to-sequence transformer models like BART (Lewis et al., 2020), T5 (Raffel et al., 2020), etc. are being studied for ADE-suspect extraction task recently and shown positive results. Our work is heavily inspired from some of the latest researches which adopted NLG models for the relation extraction tasks. REBEL (Huguet Cabot and Navigli, 2021) which achieved state of the art results in multiple RE benchmark datasets transformed the entity relationships as text sequence of triplets and used BART (Lewis et al., 2020) to generate these triplets. Similarly, TANL by Paolini et al., 2021 (Paolini et al., 2021) frame this as a translation task by generating an augmented text with entity and relation information marked. 
(Raval et al., 2021) explored T5 model for medical product safety monitoring in social media.",
|
| 7 |
+
"3 Dataset": "We have used the ADE dataset (Gurulingappa H, 2012), which is an annotated data of adverse events and drugs identified from biomedical texts. The original corpus is distributed in three files i.e., drugade relation data, drug-dosage relation data and ade-negative data, out of which we used the drugade data for our experiments.\nAlthough there are begin and end offsets annotated for the ADEs and suspects, our approaches\nAlgorithm 1 NER-RE pipeline using multitask learning\n1: Input: 2: Q = question 3: C = context 4: Output: 5: A = answer 6: Start: 7: ADEs = [] 8: Suspects = [] 9: ade_sus = []\n10: for every text do 11: Q = \"what are the ADEs?\" 12: C = text 13: A = get_ades(Q, C) 14: example A=<Start>ade1<next>ade2<next>ade3 15: ADEs += [ade1,ade2,ade3] 16: Q = \"what are the suspects?\" 17: C = text 18: A = get_suspects(Q, C) 19: example A=<Start>suspect1<next>suspect2 20: Suspects += [suspect1,suspect2] 21: for ade in ADEs do 22: for suspect in Suspects do 23: Q = \"is ade casused by suspect?\" 24: C = text 25: A = confirm_association(Q, C) 26: example A = ‘Yes’ or ’No’ 27: if A == ’Yes’ then 28: ade_sus += [(ade,suspect)] 29: end if 30: end for 31: end for 32: end for\ndo not require that information. There are 6,821 texts available in the corpus with only 20% of them including more than one ADE or suspect, as seen in Fig. 1 and 2. Altogether, there are 2984 unique ADEs and 1050 unique suspects in the whole corpus. A sample row from the dataset looks like this \"10030778|Intravenous azithromycin-induced ototoxicity.|ototoxicity|43|54|azithromycin|22|34\". Columns 2, 3, and 6 provide the text, ADE, and suspect information, respectively.",
|
| 8 |
+
"4 Approaches": "Sequence-to-sequence transformer models like T5 are capable of handling several NLP tasks concurrently. As explained in (Raffel et al., 2020), every NLP task we consider including translation, question answering, classification, etc. is cast as feeding the model text as input and training it to generate some target text (Raffel et al., 2020). This allows us to use the same model, loss function, hyperparameters, etc. across diverse set of tasks. In order to train a single model on the diverse set of tasks described above, T5 cast all of the tasks we consider into a “text-to-text” format that is, a task where the model is fed some text for context or conditioning and is then asked to produce some output text (Raffel et al., 2020). T5 framework provides a consistent training objective both for pre-training and fine-tuning. Specifically, the model is trained with a maximum likelihood objective (using “teacher forcing” (Williams and Zipser, 1989) regardless of the task (Raffel et al., 2020). To specify which task the model should perform, we add a task-specific (text) prefix to the original input sequence before feeding it to the model (Raffel et al., 2020). We use questions as the prefix in our experiments.\nIn this section we introduce two approaches for solving ADE- suspect extraction problem using\nAlgorithm 2 Joint ADE-Suspect relation extraction as single task\n1: Input: 2: Q = question 3: C = context 4: Output: 5: A = answer 6: Start: 7: ade_sus = [] 8: for every text do 9: Q = \"what are the ADEs and suspects?\"\n10: C = text 11: A = get_relations(Q, C) 12: example A=<Start> ade1<next>suspect1<next>ade2<next>suspect2 13: ade_sus+=[(ade1,suspect1),(ade2,suspect2)] 14: end for\nsequence to sequence modeling.",
|
| 9 |
+
"4.1 NER-RE pipeline using multitask learning": "A Named Entity recognition (NER) model followed by a Relation Extraction (RE) model is the conventional method to solve this task. In this approach, we first extract ADEs and suspects from the text independently. Then, link the ADEs and suspects using one-to-one mapping, to identify whether they are related or not. Although transformer (Vaswani et al., 2017) based models have been shown to learn patterns from the textual contexts, there is no guarantee that the extracted event or suspect is the actual adverse event or suspect. A Relation extraction module followed by the NER, can eliminate such false positives efficiently. In addition to improved performance, generative models like T5 (Raffel et al., 2020) require less annotation effort compared to traditional NER models which require data to be annotated in B-I-O format, which is time-consuming and inconvenient. For example, a sentence like \"A man was rushed to the hospital for metformin induced severe fever\" should be annotated as \" O O O O O O O O B-SUS O B-ADE I-ADE\". In real-life, most often we won’t have such extensive annotated data which is necessary in order to use any standard NER and RE models (Haq et al., 2021). The algorithm for this approach is shown in Algorithm 1. Detailed explanation of the approach is available in section 5.",
|
| 10 |
+
"4.2 Joint ADE-Suspect relation extraction as single task": "If we can transform the pair of ADEs and suspects and their relationship into a text sequence, we can use sequence to sequence models like T5 (Raffel et al., 2020), BART (Lewis et al., 2020), etc. to perform any language generation task end-to-end. In this second proposed approach, we use a single T5 model to perform the end-to-end extraction task. Specifically, extract ADEs, suspects, and their relationships all at once. This strategy fully exploits the learning capabilities of the T5 model. The algorithm to perform this task is shown in Algorithm 2. Section 5 provides an in-depth explanation of the strategy.",
"5 Experiments and Setup": "For the first approach of NER-RE pipeline, as shown in Fig. 3, we used a single model to execute multi-task learning in order to identify suspects, ADEs, and examine the relationships between the two. This is equivalent to mapping an input sequence of n words to an output sequence of m ADEs or supects, conditioned over a question and a context as shown in (1) and (2). We employed questions such as \"What are the ADEs?\" and \"What are the suspects?\" for ADEs and suspects extraction, respectively. Since there can be multiple ADEs and suspects within the same text, the model should be able to generate all available entities. We used a special token <next>in the ground truth to teach the model to generate the next entity one after the other. Since <next>token can appear multiple times in the output, we removed the repetition penalty in the T5 model. Additional post processing was used to eliminate duplicate results.\np(yADE1i , y ADE 2i ..., y ADE mi | xQseqi, xCseqi) (1)\np(ySuspect1i , y Suspect 2i ..., y Suspect mi | x Q seqi, x C seqi) (2) Relationship extraction module in the approach 1 was also modeled as a QA task. Initially, we trained the model by using questions like “what caused the <ADE>?” by providing the whole text as the context and allowing the model to predict the suspects directly. Although the model was performing well in identifying the suspects, it made mistakes when there are more than one drug names present in the text. Since it is difficult to derive an\naccurate confidence score from a seq-to-seq model like T5, we had to formulate some strategy to identify negative relations. To combat this, we created questions with binary responses that the model may produce. For example, given a context like “A person was rushed to the hospital due to metformin induced fever. He was feeling better after taking tylenol.”, the questions were framed like “Was the fever caused by metformin?” and “Was the fever caused by tylenol?”. In this way, the model was able learn and understand the context and provide ‘Yes’ or ‘No’ answers as shown in (3). We could discard the negative relationships using the ‘No’ output hence improving the overall precision. This would not have been possible if we had used traditional QA models. They provide deterministic results by outputting a phrase from the input text itself. However, NLG based models can generate answers which are not present in the input text.\np(y Y es|No i | x Q seqi, x C seqi) (3)\nIn order to execute our second approach of endto-end extraction, we used a single question like “what are the ADEs and suspects?” and tuple generation method similar to (Huguet Cabot and Navigli, 2021). We used additional tokens like <ade>, <suspect>to demarcate the tuples as shown in Fig. 4. It was also observed that, unlike (Huguet Cabot and Navigli, 2021) we didn’t have to perform any entity sorting, based on the positions in the text. Model was able to perform the extractions of tuples accurately without sorting. This is equivalent to mapping an input sequence of n words to an output sequence of m pairs of ADEs and supects, conditioned over a question and a context as shown in (4).\np(y (ADE−Suspect) 1i , ..., y (ADE−Suspect) mi | x Q seqi, x\nC seqi) (4)\nModels were trained using NVIDIA T4 GPUs with a batch size of 4. We used G4dn.xlarge instances of AWS which provides T4 GPUs. Additional hyperparameter tuning was also performed using baysian optimization (Nguyen, 2019). 
We have used input sequence length of 128 and target sequence length of 32. It took around 10-15 minutes to finetune a T5 model for a training set of 5500 texts with the remaining texts from the corpus used as the evaluation data. While evaluating the extracted ADEs and suspects, we considered\npartial match along with strict match, as prediction or ground truth sometimes contain adjectives which goes missing in either side. For example, a text like “a man was rushed to the hospital for metformin induced severe fever”. Here the ground truth might be just “fever”, while the model would learn to predict “severe fever” or vice-versa. For the partial match calculation, we used levenshtein distance based distance computation between generated sequence and ground truth sequence.",
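As a concrete illustration of the partial-match evaluation described above, here is a small self-contained sketch. The normalized-similarity formula, the containment shortcut, and the 0.8 threshold are our assumptions; the paper only states that a Levenshtein-distance-based computation was used.

```python
# A minimal sketch of partial matching between a generated entity and the
# ground truth, assuming containment or high normalized similarity counts.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def partial_match(pred: str, gold: str, threshold: float = 0.8) -> bool:
    p, g = pred.lower().strip(), gold.lower().strip()
    if p in g or g in p:  # handles "fever" vs "severe fever"
        return True
    sim = 1 - levenshtein(p, g) / max(len(p), len(g), 1)
    return sim >= threshold
```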
"6 Results": "Comparative performance of both the approaches are shown in Table 1. Here we considered partial match F1 score to compare both the results. On strict evaluation, approach 1 found to be better than approach 2. Also, Table 2 shows the comparison of performance with the existing baselines. Approach 1 achieved state-of-the-art results on establishing the relationship between ADEs and suspects. Since RE is treated as a separate task in approach 1, we evaluated the RE performance independently without tying with the NER output. i.e. we checked if a pair of an event and a drug in the evaluation text are related by adverse effect or not. This gives the intuition that, even if the NER task predicts false results, we can effectively eliminate them using the subsequent RE model, hence improving the end-toend precision. Fig. 5 shows the confusion matrix of the relation extraction task from approach 1.\nIndividual metrics for ADE, suspect and RE from the approach 2 are evaluated by splitting the result into ADEs and suspects and by checking if a pair of ADE-Suspects are related correctly or not. it’s also observed that the approach 2 suffers when there are more than 3 pairs of ADEs and suspects with in a text. As seen in Fig. 6, identification of ADEs and suspects is most effective when there are 3–4 or less of these entities per text. Ratio of correct to wrong prediction goes up as the num-\nber of entities per text increases. However, given that the majority of literature only mentions one to two ADEs or suspects in a text, as evidenced by the original data distribution (Fig. 1 and 2), this performance is ideal for real-life situations.",
"7 Conclusion": "In this paper, we propose a question answer based approach for solving ADE-suspect extraction problem by using a sequence-to-sequence transformer architecture, T5 (Raffel et al., 2020). We also detail our performance relative to the current baselines and present several experiments carried out utilizing various QA methodologies. We found that QA based RE approach outperforms existing baselines on the benchmark dataset. For industry usecases, it is recommended to use state-of-the-art NER followed by our QA based RE modeling for best results. This approach can be extended to extract ADEs and suspects from social media posts, clinical trial docs, medical transcripts, etc. We think that this work will be helpful when introducing a specific drug to the market or researching the negative effects of an existing drug since it will enable quick decisions to be made with little delay, preventing future causalities.",
"8 Future works": "Although we used a sequence-to-sequence model in our study, Large Language Models (LLM) using decoder only transformers can also be used with the same methodology. With LLMs, we expect\neven better results than with the comparably small T5 models. An observed flaw in Approach 2 is that, it will still produce results even when there are no relationships between the drugs and ADEs in the text, which will impact the precision. To prevent this, we suggest to train the model to produce output like \"no-suspect\" and then post-process the predictions to remove them."
}
ACL_23_no_limitation/ACL23_1288.json
ADDED
@@ -0,0 +1,19 @@
{
"File Number": "1288",
"Title": "Multiple Evidence Combination for Fact-Checking of Health-Related Information",
"abstractText": "Fact-checking of health-related claims has become necessary in this digital age, where any information posted online is easily available to everyone. The most effective way to verify such claims is by using evidences obtained from reliable sources of medical knowledge, such as PubMed. Recent advances in the field of NLP have helped automate such fact-checking tasks. In this work, we propose a domainspecific BERT-based model using a transfer learning approach for the task of predicting the veracity of claim-evidence pairs for the verification of health-related facts. We also improvise on a method to combine multiple evidences retrieved for a single claim, taking into consideration conflicting evidences as well. We also show how our model can be exploited when labelled data is available and how backtranslation can be used to augment data when there is data scarcity.",
"1 Introduction": "In today’s age of easy access to the internet, information exchange among people has increased rapidly, which has also resulted in the spread of misinformation (Vosoughi et al., 2018) within the society. Misinformation has been found to spread faster than real news, and the rise of social media popularity has aided the spread of misinformation (Vosoughi et al., 2018). Research on health misinformation is still an ongoing area, as it is different from political misinformation on the basis of the complexity level of fact-checking (Deka et al., 2022b). Manual fact-checking of health information requires domain-specific experts, which increases both time taken and cost incurred. Automated fact-checking of health information found online has been aided by the release of datasets such as SCIFACT (Wadden et al., 2020), HEALTHVER (Sarrouti et al., 2021), COVIDFACT (Saakyan et al., 2021). Fact-checking of\nhealth information comprises of retrieving evidences from reliable resources which either supports or refutes the key claim (Zeng et al., 2021; Guo et al., 2022). Recent works have focused on building end-to-end fact-checking models evaluating on the aforementioned datasets (Pradeep et al., 2020; Zhang et al., 2021; Li et al., 2021; Wadden et al., 2022). However, they do not take into account conflicting evidences retrieved for a single claim. Any claim can have more than one evidence, and these evidences can be conflicting in real-world scenarios wherein one evidence would be supporting the claim and another evidence may be refuting the claim.\nIn this work, we have focused on the subtask of classifying a claim-evidence pair as either supporting, refuting, or neutral as shown in Figure 1. We assume in this work that evidences for claims are already retrieved. We proposed a domain specific BERT-based model using a transfer learning approach where the model is trained over textual entailment data which can then be applied directly over fact-checking data. We have also used the Dempster-Shafer theory (Dempster et al., 2008; Shafer, 1976) of evidence combination for mitigating the conflicting evidences issue to provide an end result. We then extend our work by showing\n237\nhow data augmentation techniques can help in a more robust training for smaller datasets with the help of neural machine translation language models. We also analyse the performance of our model when it is trained over other similar datasets. We further share our trained model publicly for further research1.",
"2 Related work": "In this section, we will discuss the research work that has been done for fact-checking scientific claims using evidences retrieved from existing medical article repositories. With the release of the SCIFACT dataset, various transformer-based methods of predicting the veracity labels using evidences for scientific claims have been proposed and evaluated using the dataset. (Wadden et al., 2020) established a pipeline model using a RoBERTa-large (Liu et al., 2019) model to retrieve evidences from PubMed abstracts. The retrieved evidence sentences are then passed along with the claims to predict whether the evidences SUPPORT or REFUTE the claims using a RoBERTa-large model fine-tuned over the training set of SCIFACT.\nVerT5erini (Pradeep et al., 2020) uses a T5 (Raffel et al., 2020) model-based pipeline for their work. For the evidence sentence selection task for the claims, as well as for label prediction from PubMed abstracts, they used two different T5 models. For the sentence selection task, the T5 model used is fine-tuned over the MS-MARCO (Bajaj et al., 2016) dataset and then further trained on SCIFACT. For the label prediction task, the T5 model is trained on the SCIFACT dataset.\nPARAGRAPH-JOINT (Li et al., 2021) uses a RoBERTa-large model similar to (Wadden et al., 2020) for both the evidence sentence selection as well as the label prediction task which is fine-tuned over SCIFACT. However, the training approach is different, as (Li et al., 2021) uses a multitask learning approach for model training. Both the tasks of sentence selection and label prediction are done using a joint cross-entropy loss as the training objective. For the label prediction task, the authors have also used two different approaches which includes a simple sentence-level attention and KGAT which is a Kernel Graph Attention Network (Wang et al., 2019a).\nSimilarly, ARSJOINT (Zhang et al., 2021) also\n1https://huggingface.co/pritamdeka/ PubMedBERT-MNLI-MedNLI\nuses a joint approach where their proposed method jointly learns the three tasks of abstract retrieval, sentence selection, and label prediction. Similar to (Wadden et al., 2020), they have also used RoBERTa-large for their work together with BioBERT-large (Lee et al., 2020).\nAll the above works focus on the three tasks of abstract retrieval, evidence sentence selection and label prediction as a pipeline approach. However, there is a difference in the sentence selection task as well as the label prediction. VerT5erini selects sentences independently, whereas PARAGRAPHJOINT and ARSJOINT use the abstracts to select the sentences. The label prediction also differs as both PARAGRAPH-JOINT and ARSJOINT use a joint approach unlike VerT5erini. The models used in the tasks also differ as PARAGRAPH-JOINT and ARSJOINT use BERT-based models whereas VerT5erini uses a much larger T5 model having superior performance. However, the current stateof-the-art method, MULTIVERS (Wadden et al., 2022) differs from these works in the approach and the transformer model used. MULTIVERS uses a Longformer (Beltagy et al., 2020) architecture to encode both claims and abstracts together so that there is a minimum loss of information. The authors have used a weak supervision approach, in which the Longformer model is trained on available scientific data before fine-tuning on SCIFACT. However, the overall pipeline training method is a multi-task approach similar to (Li et al., 2021). 
It outperforms the other approaches in the label prediction task of SCIFACT.\nContrary to the above works, our approach is different in the way that for a given pair of claim and evidence, our model can predict the labels in a zero-shot approach, surpassing the state-of-theart results without the need for any supervision. The above mentioned works have the end goal of predicting the labels of the claim-evidence pairs. However, to have a final prediction for the claims whether it is a “True” claim or “False” claim, we need to have a combined judgement of all the evidence sentences for that claim which is not addressed by the above works. We have extended our approach to include the final prediction for the claims taking into consideration conflicting evidences as well.",
"3 Task Formulation": "In this section, we will first discuss the problem statement and then proceed with the formulation of the tasks. The problem statement is “Given a claim and a number of evidence sentences, determine whether the claim is True or False or Neutral”. We can formulate the problem as two tasks:\n• Classification of claim-evidence pair Given a claim c and an evidence sentence s for that claim, classify the claim-evidence sentence pairs as supporting, refuting or neutral.\n[c, s] classify−−−−→ (support, refute, neutral)\n• Prediction of the claims Given a list of supporting or refuting evidence sentences S where S = [s1, s2 . . . sn] for a claim c, the task is to predict whether the claim is True or False or Neutral by combining all the evidences.\n[c, S] predict−−−→ c(true, false, neutral)",
"4 Methodology": "In this section we will describe in detail the proposed methods we have adopted for the formulated tasks.",
"4.1 Classification of claim-evidence pair": "Previous studies have focused on using factchecking datasets such as FEVER (Thorne et al., 2018) for training models for the task of factchecking. However, we have modelled the classification task as a natural language inference (NLI) problem, since (Pradeep et al., 2020) found in their study that models learn better from NLI data than datasets such as FEVER. Textual entailment or NLI is defined as the task of determining if, given a “premise”, a “hypothesis” is true (entailment) or false (contradiction) or not determined (neutral) (Williams et al., 2017). Fact-checking has similarities with the NLI task, in which premises can be modelled as evidences and the hypothesis as claims (Thorne et al., 2018). The idea is to train domain specific BERT (Devlin et al., 2018) model using NLI data to see if the model can learn knowledge that can be transferred to fact-checking task in biomedical domain. In order to achieve this, we have trained PubMedBERT2 (Gu et al., 2021)\n2https://huggingface.co/microsoft/ BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext\nwhich is a domain specific BERT model on the multi-NLI (MNLI) dataset (Williams et al., 2017) by minimizing the cross-entropy loss. We also experimented with other models such as BioBERT (Lee et al., 2020) and SciBERT (Beltagy et al., 2019), however, we achieved the best performance with the PubMedBERT model which is why we chose this model.\nIn order to fine-tune PubMedBERT on the MNLI data, the sentence pair of hypothesis-premise is used as the initial input sequence. Once the model is trained over the MNLI dataset, we can directly transfer the model to the claim-evidence sentence pairs for the fact-checking task. We then trained it on the MedNLI (Romanov and Shivade, 2018) dataset, which is a domain-specific NLI dataset, as previous research (Phang et al., 2018; Wang et al., 2018) has shown that further training over a similar domain-specific dataset increases model performance. The learned model can then make predictions whether an evidence supports or refutes a claim or if it is undetermined. The model can also be adapted in a supervised way when fact-checking datasets are available. We have done extensive experiments to show how the model can also be adapted for unseen data.",
"4.2 Prediction of the claims": "The classifier can assign a claim-evidence sentence pair as support or refute. In order to further predict the claims as true or false, we need to combine all the evidences for each of the claims. In more complex scenarios, when some evidences support a claim whereas others refute the same claim and yet others may be undetermined, this task is not trivial. To resolve such conflicting cases, we have used an improvised Dempster-Shafer (D-S) theory of evidence combination. Research has mainly focused on using the D-S theory for multi-sensor domain where evidences from multiple sources are combined to achieve a final decision (Xiao, 2019; Khan and Anwar, 2019; Smets, 2000; Jiang et al., 2016). This is similar to our work and we have used the D-S theory for combining multiple conflicting evidences for claims in order to achieve a final decision for the claims. The D-S theory is mathematically defined as follows (Dempster et al., 2008): Definition 1. The set of all the possible sets of the hypotheses or class categories is known as the frame of discernment (FOD). A frame of discernment consisting of N elements where each element\nai is mutually exclusive to each other can be defined as: θ = {a1, a2, a3 . . . , aN} (1) Definition 2. If A is a subset of P (θ) where P (θ) = 2θ, the basic probability assignment (BPA) or mass function, m(A) is a function that maps A → [0, 1] and satisfies the following conditions:\nm(ϕ) = 0, ∑\nA⊆θ m(A) = 1 (2)\nDefinition 3. If m1(Ai) and m2(Aj) are the BPA of two bodies of evidence (BOE), then according to D-S combination rule, they can be combined as follows:\nm(A) = 1 1−K ∑\nAi∩Aj=A m1(Ai)m2(Aj), A ̸= 0 (3)\nwhere K is a normalization factor defined as follows:\nK = ∑\nAi∩Aj=ϕ m1(Ai)m2(Aj) (4)\nDefinition 4. The combination formula can be extended for n terms as well which is defined as:\nm(A) = 1 1−K ∑\nAi1 ...∩Ain=A m1(Ai1) . . .mn(Ain),\nA ̸= 0 (5)\nand K is defined as follows:\nK = ∑\nAi1 ...∩Ain=ϕ m1(Ai1) . . .mn(Ain) (6)",
"4.3 Illustrative example": "In order to understand the working of the D-S theory, let us take a few examples. According to our work, let us take three classes for the FOD, θ = {a, b, c} where a, b, c are “support”, “refute” and “neutral” respectively. Example 1. Let us take two conflicting evidences with respective probabilities for a, b and c.\nE1 : m1(a) = 0.062 m1(b) = 0.937 m1(c) = 0.001 E2 : m2(a) = 0.952 m2(b) = 0.048 m2(c) = 0\nWe can see that for both E1 and E2, equation 2 is fulfilled. According to equation 4, we get\nK = m1(a)m2(b)m2(c) +m1(b)m2(a)m2(c)\n+m1(c)m2(a)m2(b)\nPutting the respective values, we get K = 0.896. Using equation 3, we get the following\nm(a) = m1(a)m2(a) (1−K) , m(b) = m1(b)m2(b) (1−K) and\nm(c) = m1(c)m2(c)\n(1−K)\nAfter calculation, we get m(a) = 0.436, m(b) = 0.563 and m(c) = 0. We can see that m(b) has the highest probability value using the D-S combination theorem.\nIn certain situations, the D-S theorem fails. Let us look at one such example. Example 2. Let us take four conflicting evidences with respective probabilities for a, b and c.\nE1 : m1(a) = 0.889 m1(b) = 0.106 m1(c) = 0.005 E2 : m2(a) = 0.0 m2(b) = 0.999 m2(c) = 0.0 E3 : m3(a) = 1.0 m3(b) = 0.0 m3(c) = 0.0 E4 : m4(a) = 0.481 m4(b) = 0.515 m4(c) = 0.004\nWe can see that for both E1 and E2, equation 2 is fulfilled. However, here, we find that K = 1 which means that the denominator is 1−K = 0. In such situations, the D-S combination rule will fail as division by zero is mathematically undefined. Definition 5. In order to overcome such situations, we adapted the base belief function from (Wang et al., 2019b) which is defined as follows: Let δ be a set of N possible values that are mutually exclusive. The power set of δ is 2δ, in which the number of elements is 2N . According to (Wang et al., 2019b), the base belief function mbase is then defined as:\nmbase(Ai) = 1\n2N − 1 (7)\nwhere Ai is the subset of δ except for the empty set ϕ. The modified BPA then becomes\nm′(Ai) = m1(Ai) +mbase(Ai)\n2 (8)\nwhere m1(Ai) is the original BPA. This modified BPA allows us to mitigate situations when BPA values are 0. However, this leads to the violation of the condition ∑ A⊆θ m(A) = 1. In order to preserve the condition, we normalize the value of m′(Ai) and therefore the final BPA is:\nm′norm(Ai) = m′(Ai)∑ m′(Ai)\n(9)\nExample 3. Using the modifications, from Example 2, the modified BPAs are as follows\nE1 : m1(a) = 0.723 m1(b) = 0.174 m1(c) = 0.104 E2 : m2(a) = 0.100 m2(b) = 0.801 m2(c) = 0.100 E3 : m3(a) = 0.801 m3(b) = 0.100 m3(c) = 0.100 E4 : m4(a) = 0.437 m4(b) = 0.461 m4(c) = 0.102\nWe can see that for both E1 and E2, equation 2 is fulfilled. From equation 6, we can calculate K = 0.0318 and 1−K = 0.968. After that we can use equation 5 and get the values of m(a) = 0.795, m(b) = 0.201 and m(c) = 0.0033. Using the\nmodified BPAs have helped overcome situations where the D-S combination rules fail.\nFrom the illustrative examples, we have shown how the modified D-S method can be used for the task of combining evidences for a claim taking into account conflicting evidences as well.",
"5 Experimental Details": "In this section, we will describe the experimental details for our tasks.",
"5.1 Dataset used": "For evaluation purposes, we have used the SCIFACT dataset. The SCIFACT dataset has a train file, a dev file and a test file. However, the test file is part of a shared task and as such the labels are not available. This is why we are evaluating directly on the dev file. However, some claims in the dev set do not have the evidence sentences and as such we cannot evaluate on those claims which is why we have dropped those claims.",
"5.2 Classification of claim-evidence pair": "For this task, we have experimented on different scenarios by doing an evaluation study over different classification settings. First we experimented directly on the dev set where we use our PubMedBERT fine-tuned model directly on the dev set examples by passing the claim-evidence pair and predicting the labels. The results are shown in Table 1 showing improvements over other models where P, R and F-1 are the precision, recall and f score respectively.\nFor the second experiment, we fine-tuned the MNLI fine-tuned model over MedNLI to see if there is any performance difference. Experiments using this model yielded a very good result which shows that in order to achieve an increased performance while fine-tuning over a smaller dataset, it is better to first fine-tune over a larger dataset and then use that model to further fine-tune over the smaller dataset. To confirm the results, we also compared the performance of a few more models from Table 1 by further fine-tuning these models\nover MedNLI. We can see from Table 2 that there is a performance increase in all models. This is in line with the findings by (Phang et al., 2018; Wang et al., 2018; Clark et al., 2019; Sap et al., 2019).\nWe also experimented in a zero-shot setting for the SCIFACT pipeline where we first retrieve relevant PubMed abstracts for the claims in the dev set using the corpus provided in the dataset (Deka et al., 2022b). After that, the top n evidence sentences are extracted from the abstracts (Deka et al., 2022a) and then we use the claim-evidence pairs to predict whether the evidence supports or refutes the claim. We compared our model with stateof-the-art zero-shot as well as few-shot baselines evaluated on the SCIFACT dataset. However, it should be noted that the baselines have different trade-off points in calculating the results due to our method being different from theirs. The results for the experiment are shown below in Table 3 where top n sentences are the evidence sentences from the relevant abstracts For each setting, we retrieve the top 2, 3, 5 and 10 evidence sentences and then the label for claim-evidence pair is predicted.\nThe baselines use a supervised approach where they use the train set of the SCIFACT dataset. In our method, however, we are using a transfer learning approach where we directly use our method over the dev set without using the train set. From Table 3, we can see that we have outperformed the baseline models in the zero-shot setting. We can also see that our best-performing model setting outperforms even the few-shot baselines as well as the fully fine-tuned VERISCI (Wadden et al., 2020) baseline.",
"5.3 Prediction of the claims": "In order to use the D-S theory for our work in resolving conflicting evidences, we first need to calculate the probabilities of the classes. As we have approached our task as an NLI problem, we have three different classes: SUPPORT, REFUTE and NEUTRAL. For each claim-evidence pair, our\nclassifier calculates the probabilities of the three classes. We are using these probabilities as the BPAs from Equation 2. Once we have the BPAs, we then use equation 9 to calculate the final modified BPAs to mitigate the denominator error. Once we have calculated the modified BPAs, we use equations 5 and 6 to combine the BPAs according to the D-S combination rules.\nThe SCIFACT dataset does not have labels that can be used for the evaluation of the combination method. In order to infer these labels for the evaluation, we give the final class label for a claim as either “Fake”, “Truth” or “Neutral”. This label is based on the gold standard label of the evidences. We have seen that in the SCIFACT dataset, all evidences for a claim can either be “SUPPORT” or “REFUTE”. Based on this, we label claims as “True” which has evidences labelled as “SUPPORT” and “False” for claims that have evidence labels as “REFUTE”. Some of the evidences do not enough information to either “SUPPORT” or “REFUTE” claims. These are labelled as “Neutral”. This will be our gold standard and the results from the D-S combination theory will be evaluated against this gold standard. As evaluation metrics, we have used macro precision, recall and f-1 score. We experimented in two different scenarios. Initially, we experimented directly on the dev set using our model from Table 2. We got the results as follows: Precision = 0.898, Recall = 0.893, F-1 score = 0.894.\nFor the next experiment, we have used the whole pipeline process where we first retrieve top n abstracts and then from these abstracts we retrieve the top n evidences. Once we have the evidence sentences, we then use the classifier to classify them accordingly. The results are shown in Table 4.",
"6 Supervised approach using augmented data": "In situations where we have labelled data, our model can be used to train over such data in a supervised way which means that the knowledge from our model can be transferred over such data. However, a problem with the available datasets such as SCIFACT, is the fact that it has very less labelled data for training which may not lead to improved performance of the model. To improve it, there should be more data for training, and data augmentation is one way of increasing the number of training examples (Shorten et al., 2021; Feng et al., 2021). There are various ways of augmenting data such as rule-based, interpolation-based and model-based (Shi et al., 2022). In rule-based methods, words and phrases are manipulated in order to generate augmented text. But a problem with such methods is that changing words or phrases may lead to change in the meaning of the sentences (Niu and Bansal, 2018). In the context of biomedical text, if the meaning of the sentence changes then the sampled augmented data may lead to negative performance in model training. By performing interpolation operations directly on the source text (Chawla et al., 2002; He et al., 2008) or latent space representations (Chen et al., 2020), interpolation-\nbased approaches produce new instances. However, such methods can be error-prone due to noisy generated data (Chawla et al., 2002). Model-based methods use language models such as BERT to generate new training examples. One popular way of using these models to generate new training examples is back translation (Edunov et al., 2018). Recent research works have explored these language models for data augmentation via back translation (Melton et al., 2022).\nFor our work, we explored the following research questions:\nRQ 1. Can we use back-translation method for data augmentation on domain specific fact checking task without loss of context?\nRQ 2. How well does the model fare without using augmented examples vs the model which uses augmented examples?\nIn order to answer the research questions, we explored two different ways: using Google Translate and transformer-based language models. Using Google Translate for back translation has been studied in previous research (Pappas et al., 2022). We have used Google Translate to convert the claimevidence pairs to different languages such as German, French, Russian, Chinese and Spanish. Each language has a different language structure and since biomedical text is different than general text, a comparison of all the different languages would show which languages can be better suited for such tasks in the medical domain. We have used the deep translator python API 3 for the Google Translate method.\nTransformer-based language models have been proven to be very good in neural machine translation tasks (Przystupa and Abdul-Mageed, 2019; Uhrig et al., 2021). For the study, we have used the OpusMT (Tiedemann and Thottingal, 2020) models which are pretrained transformer models for the neural machine translation task based on the Marian MT framework (Junczys-Dowmunt et al., 2018). We have used the models from the HuggingFace repository for the OpusMT 4 models.\nFor the experiment using the Google translator API, we translate all the claims as a batch to different languages and then back to English and the same approach is taken for the evidences as well. 
We then merge the synthetic data with the original\n3https://deep-translator.readthedocs.io/en/ latest/\n4https://huggingface.co/Helsinki-NLP\ndata by removing any duplicates. However, for NMT models, all claims and evidence are backtranslated one at a time using the HuggingFace pipeline5. Once we get the back-translated examples, we then merge them with the original data. For evaluation, we use our model from Table 3 with the best results and train it over the augmented data. The results are shown below in Table 5.\nAs seen from the table above the model without fine-tuning on train set performs poor which is expected. Training the model with the train set but without augmentation results in slight improvement on the results. However, we can see there is a significant improvement in the results once we augment the train file using the back translation approaches. Out of the two different approaches that we have experimented with, the transformer-based NMT models perform better than Google translate API. However, these models are also time consuming while performing the back-translation task unlike the Google translate approach. The results show that data augmentation using back translation gives us better results for such domain-specific fact checking tasks which answer both RQ1 and RQ2.",
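Both back-translation routes reduce to a few lines; here is a hedged sketch with German as the pivot language (the paper also uses French, Russian, Chinese and Spanish). The library and model names come from the links above; batching and the full language coverage are omitted.

```python
# Minimal sketches of the two back-translation routes described above.
from deep_translator import GoogleTranslator
from transformers import pipeline

def backtranslate_google(text: str, pivot: str = "de") -> str:
    """Google Translate route: English -> pivot -> English."""
    there = GoogleTranslator(source="en", target=pivot).translate(text)
    return GoogleTranslator(source=pivot, target="en").translate(there)

# OpusMT route via the HuggingFace translation pipeline
en_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
de_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

def backtranslate_opus(text: str) -> str:
    there = en_de(text)[0]["translation_text"]
    return de_en(there)[0]["translation_text"]
```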
"7 Transferring over other datasets": "In order to know how well our model generalizes over other similar data, we experimented with two similar fact-checking datasets on biomedical data, HEALTHVER (Sarrouti et al., 2021) and COVIDFACT (Saakyan et al., 2021). Both datasets focus on Covid-19 data, however, the way claims and evidences are collected in both these datasets differ. HEALTHVER claims are collected from CORD-19 (Wang et al., 2020) corpus article snip-\n5https://huggingface.co/docs/transformers/ main_classes/pipelines\npets which were retrieved to answer questions for TREC-COVID (Voorhees et al., 2021). The claims in HEALTHVER are complex and evidences are provided for each claim. The dataset has three labels, SUPPORT, REFUTE and NEUTRAL based on the evidences which are collected from the article snippets itself. COVIDFACT, on the other hand, has claims collected from Covid-19 subreddit and evidences are collected from linked scientific papers and documents collected from Google search. The claims in COVIDFACT are also complex and has two labels for the evidences collected, SUPPORT and REFUTE. Both HEALTHVER and COVIDFACT have one annotated evidence per claim, whereas in SCIFACT, there may be more than one evidence for one claim. Also, in SCIFACT, relevant abstracts are needed to be retrieved first from the corpus provided and then evidence sentences are needed to be retrieved from those abstracts.\nAlthough our model has not been trained over Covid-19 specific text, we wanted to experiment how well it generalizes over such data by performing two different experiments. In the first experiment, we applied our model to the test set of both HEALTHVER and COVIDFACT without using the training set in a zero-shot approach. For SCIFACT, we have used the dev set.\nWe can see from Table 6 that in a zero-shot setting, our model performs better with the SCIFACT dataset. This can be attributed to the fact that SCIFACT data contain PubMed abstracts and PubMedBERT (Gu et al., 2021) has been trained over PubMed text which is why it performs better. HEALTHVER and COVIDFACT, on the other hand do not contain PubMed data and as such the model does not generalise well over the other datasets.\nFor the second experiment, we have transferred a trained model over one dataset to the other two to see how well models trained on one dataset generalize to other datasets. We use the train sets of the datasets to train the model and then use the test sets of the other datasets to evaluate the model per-\nformance. Since the HEALTHVER dataset has the NEUTRAL label, we have dropped instances from its test set having that label in order to maintain consistency over all the datasets when the model was trained over SCIFACT and COVIDFACT train sets since these datasets only have SUPPORT and REFUTE labels. The results of the experiment are shown in Table 7.\nFrom Table 7, it can be seen that when the model is trained on SCIFACT and HEALTHVER, transferring to the COVIDFACT test set does not give very good results. This is due to the fact that COVIDFACT contains both scientific as well as non-scientific claim-evidence pairs and therefore a model trained on either SCIFACT or HEALTHVER does not generalize well as they are based on scientific data. We can also see that the model trained on HEALTHVER generalizes better on the SCIFACT data and vice-versa as both these datasets are based on scientific claims and evidences, they learn better and generalize well on each other. 
However, we can also see that the model trained on COVIDFACT generalizes well on the other datasets since it contains both scientific and non-scientific data. These results confirm the findings by (Saakyan et al., 2021) that models trained on scientific data do not generalize well on data that contain non-scientific data as well. This is important as real-world health misinformation data may contain both scientific and non-scientific claims. In such situations, we need to have both scientific as well as non-scientific data so that models\ncan learn to generalize on such data.",
"8 Conclusion and future work": "We have explored the prediction of veracity for health-related fact-checking tasks that can be learned from NLI data. By doing experiments, we showed that training domain-specific BERT-based models on domain-specific NLI data improves the model performance for fact-checking task. We also explored a method that can be used to combine different evidences for a claim, even for situations that have conflicting evidences for the same claim. We have also shown by experiments that augmenting data using back-translation helps in situations where there is a lack of training data. Although fact-checking of scientific claims is still a new task, there is a potential for improvement of the current methods being used for the task. With the advent of more capable large language models, new research direction such as prompt based methods can also be explored. As future work, we are interested in exploring such prompt-based approaches along with multimodal data in this space."
}
ACL_23_no_limitation/ACL23_1292.json
ADDED
@@ -0,0 +1,13 @@
{
"File Number": "1292",
"Title": "Comparing and combining some popular NER approaches on Biomedical tasks",
"abstractText": "We compare three simple and popular approaches for NER: 1) SEQ (sequencelabeling with a linear token classifier) 2) SeqCRF (sequence-labeling with Conditional Random Fields), and 3) SpanPred (spanprediction with boundary token embeddings). We compare the approaches on 4 biomedical NER tasks: GENIA, NCBI-Disease, LivingNER (Spanish), SocialDisNER (Spanish). The SpanPred model demonstrates state-of-the-art performance on LivingNER and SocialDisNER, improving F1 by 1.3 and 0.6 F1 respectively. The SeqCRF model also demonstrates state-of-the-art performance on LivingNER and SocialDisNER, improving F1 by 0.2 F1 and 0.7 respectively. The SEQ model is competitive with the state-of-the-art on the LivingNER dataset. We explore some simple ways of combining the three approaches. We find that majority voting consistently gives high precision and high F1 across all 4 datasets. Lastly, we implement a system that learns to combine the predictions of SEQ and SpanPred, generating systems that consistently give high recall and high F1 across all 4 datasets. On the GENIA dataset, we find that our learned combiner system significantly boosts F1(+1.2) and recall(+2.1) over the systems being combined. We release all the well-documented code necessary to reproduce all systems at this Github repository.",
"1 Introduction": "NER has frequently been formulated as a sequencelabeling problem (Chiu and Nichols, 2016; Ma and Hovy, 2016; Wang et al., 2022) in which a model learns to label each token using a labeling scheme such as BIO(beginning, inside, outside). However, in recent years people have also formulated the NER task as a span-prediction problem (Jiang et al., 2020; Li et al., 2020; Fu et al., 2021; Zhang et al., 2023) where spans of text are represented and labeled with entity types.\nLet SEQ be the simplest sequence-labeling model which represents each token using a language model and then classifies each token-representation with a linear layer. Let SeqCRF be another popular sequence-labeling model which is identical to SEQ model except that the token representations from the language model are fed into a linear-chain conditional random field layer(Lafferty et al., 2001; Lample et al., 2016). Let SpanPred(Lee et al., 2017; Jiang et al., 2020) be a model that represents every possible span of text using two token-embeddings located at the its boundary, and then classifies every span-representation using a linear layer. We describe all three models in detail in section 4. We evaluate SEQ, SeqCRF, and SpanPred models on four biomedical NER tasks: GENIA(Kim et al., 2003), NCBI-Disease(Doğan et al., 2014), LivingNER(Spanish)(MirandaEscalada et al., 2022), and SocialDisNER(Spanish)(Gasco Sánchez et al., 2022). Despite being simple, the SpanPred and CRF models improve the state-of-the-art on the LivingNER and SocialDisNER tasks.\n(Fu et al., 2021) show that the sequencelabeling approaches(eg. Seq and SeqCRF) and span-prediction approaches(eg. SpanPred) have different strengths and weaknesses while having similar(F1) performance. This motivated us to try and combine Seq, SeqCRF, and SpanPred models using two simple methods and study the results. We refer to the two simple methods as Union and MajVote. Union is inspired by the set(mathematical) union operation and it simply involves \"unioning\" the sets of predictions made by the models. MajVote is the classic majority voting method. We find that MajVote can yield systems that have both high precision and high F1.\nInspired by the boost in recall(and the corresponding drop in precision) resulting from the Union method, we implemented a combiner system (which we refer to as Meta) that aims to combat\n273\nthe drop in precision as a result of the Union method. We find that Meta shows very promising signs of increasing precision while preserving high recall and high F1. Meta borrows ideas from work on generating span representations using \"solid markers\"(Baldini Soares et al., 2019; Xiao et al., 2020; Ye et al., 2022), work on using prompts (Li et al., 2020), and work by (Fu et al., 2021) to combine the span-prediction and sequence-labeling approaches using the span-prediction approach.",
"2 Preliminaries": "Let every prediction p of an NER system be a tuple of the form\np = (SampleId,EntityType,BeginOffset,EndOffset)\nwhich consists of the identifier of the sample/text in which the entity is found, the type of the entity, and the beginning and ending offsets for the entity.",
"3 Preprocessing": "For GENIA and NCBI-Disease, each sample is an English sentence. For SocialDisNER, each sample is an entire Spanish tweet. For LivingNER, we use the FLERT(Schweter and Akbik, 2020) approach for document-level NER, in which each Spanish sentence is surrounded by a context of 100 characters to the left and 100 characters to the right.\n4 Models\n4.1 Seq model Token Representation Step Given a sentence x = [w1, w2, ..., wn] with n tokens, we generate for each token wi a contextualized embedding ui ∈ Rd that corresponds to the last-hiddenlayer representation of the language model. Here, d represents the size of the token embedding. Importantly, special tokens like [CLS] and [SEP] are also represented. We find that the performance can drop significantly(especially for SEQ) if they are not incorporated in the learning process.\nXLM-RoBERTa large(Conneau et al., 2020) is the multilingual language model that we use for the LivingNER and SocialDisNER spanish tasks. Inspired by its high performance on the BLURB(Gu et al., 2021) biomedical benchmark, we use BioLinkBert large(Yasunaga et al., 2022) for the NCBI-Disease and GENIA datasets.\nToken Classification Step In this layer, we classify every token representation into a set of named entity types corresponding to the BIO(beginning, inside, outside) tagging scheme. Assuming Θ is the set of all named entity types, then the set of all BIO tags B is of size (2×|Θ|)+1. In other words, a linear layer maps each token representation ui ∈ Rd to a prediction pi ∈ R|B|, where d is the length of the token embedding. Finally, the predictions are used to calculate loss of given sentence x with n tokens as follows:\nLoss(x) = −1 n\nn∑\ni=1\nlog(Softmax(pi)yi) (1)\nHere yi represents the index of the gold BIO label of the ith token.\n4.2 SeqCRF Model This model is identical to the Seq model except that we pass the contextualized token representation U through a a Linear Chain CRF(Lafferty et al., 2001) layer. The CRF layer computes the probabilities of labeling the sequence using the Viterbi algorithm(Forney, 1973). A loss suited to the CRF layer’s predictions is then used to train the model. We directly use the CRF implementation available in the FLAIR(Akbik et al., 2019) framework. The BIO scheme is used for token classification.\n4.3 Span Model Token Representation Layer Same as the token representation layer of the Seq model.\nSpan Representation Layer Let a span s be a tuple s = (b, e) where b and e are the beggining and ending token indices, and s represents the text segment [wb, wb+1, ..., we] where wi is the ith token. In this layer, we enumerate all possible spans and then represent each span using two token embeddings located at its boundary. More precisely, given embeddings [u1,u2, ...,un] of n tokens, there are ( n 2 ) = n 2\n2 possible spans, which can be enumerated and represented as the list [(0, 0), (0, 1), ..., (0, n), (1, 1), (1, 2)...(1, n), ...(n, n)]. Then we removed all spans that have a length longer than 32 tokens – this was important to fit the model in GPU memory with a batch size of 4. Finally, as in (Lee et al., 2017), each span si will be represented by vi = [ubi ;uei ], a concatenation of the beginning and ending token embeddings.\nHence, the output of this layer is V ∈ Rk×(2×d) where k = n 2\n2 and d is length of the token embedding vector.\nSpan Classification Layer In this layer, we classify each span representation with a named entity type. 
We introduce an additional label Neg_Span which represents the absence of a named entity. Precisely, a linear layer maps each span representation v_i ∈ R^{2d} to a prediction p_i ∈ R^{|Ω|}, where Ω is the set of all named entity types (including Neg_Span) and d is the size of the token embedding. Finally, the predictions are used to calculate the loss of a given sentence x with l possible spans as follows:\nLoss(x) = −(1/l) ∑_{i=1}^{l} log(Softmax(p_i)_{y_i}) (2)\nHere y_i represents the index of the gold label of the i-th span.\n4.4 Union combiner model\nThis model doesn’t learn weights. For a given list P_1, P_2, ..., P_n, where P_i is the set of predictions (as defined in section 2) made by the i-th NER model and n is the total number of models, it returns the set P_1 ∪ P_2 ∪ ... ∪ P_n.\n4.5 MajVote combiner model\nThis model doesn’t learn weights. This is the classic majority voting combiner model. Precisely, when given a list P_1, P_2, ..., P_n, where P_i is the set of predictions (as defined in section 2) made by the i-th NER model and n is the total number of models, it returns a set which only includes the predictions in P_1 ∪ P_2 ∪ ... ∪ P_n that have been predicted by more than ⌊n/2⌋ models.\n4.6 Meta combiner model\nThe job of Meta is simple: \"Learn to tell if a prediction made by SEQ or SpanPred is a mistake or not\". In other words, Meta looks at a prediction made by SEQ or SpanPred on the validation set and learns to classify the prediction as being either \"correct\" or \"incorrect\". \"correct\" means that the prediction is a good prediction and should not be removed; \"incorrect\" means that the prediction should be removed. In other words, if P_SEQ is the set of all predictions of SEQ and P_Span is the set of all predictions of SpanPred, then Meta acts as (and learns to be) a filter for P_Span ∪ P_SEQ. During evaluation, Meta filters P_Span ∪ P_SEQ, generating a final set of predictions.\nFigure 1 illustrates the role of Meta in the pipeline. We borrow the idea of using markers made with special tokens (Baldini Soares et al., 2019; Xiao et al., 2020; Ye et al., 2022) which, intuitively, help models \"focus their attention on the span-of-interest\". In other words, by introducing special tokens (which act as markers) like [e] and [/e] into the language model’s vocabulary, and then surrounding the span-of-interest with them, one can help the model \"focus\" on the span of interest while making some prediction. In Meta’s case, the markers are supposed to help locate/identify the entities predicted by SEQ or SpanPred in raw text. See subsection 4.7 for an example input prediction with markers highlighting the entity.\nWe also borrow the idea of prompting (Li et al., 2020), which involves pre-pending some text (a prompt) to the original input text with the goal of priming (or aiding) the model’s decision making with a useful bias. In particular, every input to Meta includes the type of the predicted entity as a prompt. Intuitively, this helps Meta recognize the type of the entity it is dealing with. See subsection 4.7 for an example of prompting with the entity type \"disease\".\nNote that prompting and special markers are only used to prepare the training data for Meta using the predictions of SEQ and SpanPred on the validation set. Meta itself is a simple binary classification neural model. 
Just like SEQ, SeqCRF and SpanPred, it first creates contextualized token representations from the raw input using the appropriate language model (XLM-RoBERTa or BioLinkBERT) and then classifies the pooler token ([CLS] or [s]) representation using a linear layer. As in SpanPred and SEQ, cross-entropy loss is used to train the model.\nBecause Meta acts as a \"filter\" (it allows certain predictions and disallows others), it cannot improve recall – it can only improve precision. Ideally, Meta will learn the true nature of the mistakes that SEQ and SpanPred make and remove all false positives, resulting in a perfect precision score of 100 and no drop in recall.\nPreparing the training data for Meta: all predictions (with \"correct\" and \"incorrect\" labels) on the validation set across all 20 epochs of both SEQ and SpanPred, and all gold predictions (which only have \"correct\" labels) from the original training data make up the training set for Meta. We hold out 15 percent of Meta’s training set for validation. Note that we incorporate the predictions of SpanPred and SEQ from earlier epochs because the fully trained high-performing models don’t make that many mistakes (which Meta needs for its learning). As expected, the test set is not touched while training Meta. During evaluation, Meta filters the predictions made by SEQ and SpanPred on the test set.",
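The weight-free combiners in sections 4.4 and 4.5 translate directly into set operations over the prediction tuples defined in section 2. A minimal sketch; the function names are ours.

```python
# Union and MajVote combiners over sets of hashable prediction tuples.
from collections import Counter

def union_combine(prediction_sets):
    """Section 4.4: the set union of all systems' predictions."""
    return set().union(*prediction_sets)

def majority_vote(prediction_sets):
    """Section 4.5: keep predictions made by more than floor(n/2) systems."""
    n = len(prediction_sets)
    votes = Counter(p for preds in prediction_sets for p in set(preds))
    return {p for p, c in votes.items() if c > n // 2}
```

With three systems (n = 3), majority_vote keeps exactly the predictions made by at least two of SEQ, SeqCRF, and SpanPred, which matches the "picky" behaviour discussed in the analysis of results.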
"4.7 Meta input example": "Assume the example sentence \"Bob has HIV and flu.\" and the task of identifying diseases. Now assume that SEQ predicted (id, disease, 8, 11) (see section 2 for the definition of prediction) and correctly identified the disease \"HIV\" in the input. Then, the input to meta will be the the text \"disease Bob has [e] HIV [/e] and flu\" and the associated gold label of correct. Prompting with disease informs Meta that it is dealing with a prediction representing a disease. Meta has to make a judgement on whether the prediction is correct or not.",
"4.8 Training and Optimization": "Both XLM RoBERTa large(Conneau et al., 2020) and BioLinkBERT large(Yasunaga et al., 2022) are fine-tuned on the training data using the Adafactor(Shazeer and Stern, 2018) optimizer with a learning rate of 1e-5(see code) and a batch size of 4 for all 4 datasets. Specifically, we used the implementation of Adafactor available on HuggingFace(Wolf et al., 2019). It was not possible for us to use the same learning rate and batch size for every dataset with Adam(Kingma and Ba, 2015) because we noticed it was prone to over-fitting(and then collapsing) mid-training on LivingNER, NCBI-Disease, and GENIA – batch-size had to be increased to avoid overfitting. Moreover, we found that SEQ, SeqCRF, and SpanPred converged to better solutions with Adafactor on all datasets. However, we found that Meta consistently converged to better solutions on the NCBI disease dataset using Adam.\nThe best model is selected using early stopping with a patience(in terms of epochs) of 5.",
"5 Evaluation Methodology": "All tasks evaluate systems using the strict(no partial matching) Micro F1, Precision and Recall. For SocialDisNER, all systems were submitted to the corresponding CodaLab(Pavao et al.,\n2022) competition website for evaluation. For LivingNER, all our systems have been evaluated using the official evaluation script that the organizers made available. For Genia and NCBIDisease, we unfortunately couldn’t find official CodaLab websites, so we had to use our own script, which can be inspected here.",
"6 Analysis of Results": "Note that among the 3 models, SpanPred consistently outperforms the other two on all datasets. This is anticipated on tasks with overlapping entities like LivingNER and GENIA(because SEQ and SeqCRF cannot represent them), but not on \"flat\" NER tasks like SocialDisNER and NCBI-Disease.\nNote that any system resulting from a Union combination should have higher recall than any of the involved systems because a set union operation is incapable of removing a correct prediction (the set of false negatives can only shrink with more systems). Also, the resulting system’s precision cannot be higher than the highest precision observed in any sub-system. Table 1 adheres to both of these expectations. On the other hand, a system resulting from a MajVote combiner is inclined to have higher precision when the systems being combined are diverse and comparable because – intuitively – MajVote can be a more \"picky\" system (only allowing a prediction if it has been voted on by several). In Table 1, note that both SpanPredxSEQ and SpanPredxSEQxCRF consistently boost precision across all datasets. Also note that the best MajVote systems significantly outperform all other systems on precision while maintaining the highest F1 on all datasets except Genia, where Meta outperforms all other systems on F1 for the first(and last) time. Also on Genia is the only time when a Union model (SpanPred∪ SEQ) outperforms the MajVote models due to a significant boost in recall. Finally, note how Meta, across all datasets, outperforms SpanPred, SEQ, and SeqCRF models on Recall and delivers an F1 that is at least as high as any of the three models.",
"7 Conclusion": "Our implementation(code available) of CRF and SpanPred, two simple models, improves the state of the art on LivingNER and SocialDisNER datasets. We used two simple approaches called\nUnion and MajVote to combine the NER models’ predictions and studied the results. MajVote on the three NER models seems to be effective at generating systems with high precision and high F1. While Union can generate systems with higher recall, it is only at the cost of F1 due to a significant drop in precision. Meta seems to be effective at alleviating Union’s issue, generating systems with both high recall and high F1."
}