| id (string) | year (int64) | title (string) | abstract (string) | pdf_url (string) | content (string) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|
41
| 2021
|
Automatic Detection and Classification of Mental Illnesses from General Social Media Texts
|
Mental health is getting more and more attention recently, depression being a very common illness nowadays, but also other disorders like anxiety, obsessive-compulsive disorders, feeding disorders, autism, or attention-deficit/hyperactivity disorders. The huge amount of data from social media and the recent advances of deep learning models provide valuable means to automatically detect mental disorders from plain text. In this article, we experiment with state-of-the-art methods on the SMHD mental health conditions dataset from Reddit (Cohan et al., 2018). Our contribution is threefold: using a dataset consisting of more illnesses than most studies, focusing on general text rather than mental health support groups, and classifying by posts rather than individuals or groups. For the automatic classification of the diseases, we employ three deep learning models: BERT, RoBERTa and XLNet. We double the baseline established by Cohan et al. (2018) on just a sample of their dataset, and we improve the results obtained by Jiang et al. (2020) on post-level classification. The accuracy obtained by the eating disorder classifier is the highest, due to the prevalent presence of discussions related to calories, diets, recipes etc., whereas depression had the lowest F1 score, probably because depression is more difficult to identify in linguistic acts.
|
https://aclanthology.org/2021.ranlp-1.41
|
## introduction

an analysis performed by @xcite estimates that approximately 10% of the world's population is living with a mental illness. the global burden of disease @xcite states that depression is a very common illness, with more than 264 million people affected by it. at its worst, the illness can lead to suicide, and it is the second highest cause of death for people between 15 and 29. between 76% and 85% of the potentially diagnosed people do not benefit from any treatment for their illness due to living in impoverished areas and not having access to mental care. it is difficult to discuss digital solutions in the context of isolated areas with low data availability and limited access to professional help. social stigma is another obstacle, present regardless of age, gender or race, which makes early intervention difficult. persons facing difficulties often avoid discussing their issues for various reasons. however, researchers working with machine learning algorithms can draw plenty of expertise from the unstructured data roaming the world wide web. the advent of social media platforms brings an influx of large quantities of various types of unstructured textual data, and the continuous advancements made in the field of machine learning make it possible to analyse such volumes of data efficiently. experiments in this interdisciplinary domain have produced useful input for mental health practitioners, sociolinguists, computer scientists and other researchers in the field. @xcite perform one of the most influential quantitative studies, which reveals the way patterns of parts of speech, as labelled by the liwc founders, correlate with types of personalities and types of mental illnesses. the classes and the psychological dimensions mapped together served as a starting point for many projects, including the prediction of dark triad personality traits by @xcite and of the risk of self-harm by @xcite. research in the area is conducted mainly on texts from mental health support groups, on just a few illnesses and on some groups of individuals. our main research questions for this article are if and to what extent it is possible to detect and classify mental illnesses from general texts.

## related work

nlp researchers have shown an increased interest in the area at the intersection of machine learning and psychiatry in the last years. social media is an indispensable resource for research, yet the particularities of the online setting raise a range of challenges. as there are no established standards for using social data, practitioners from many fields have pointed to the dangers of using such data without a clear framework. @xcite address the issue of "biases, methodological pitfalls, and ethical boundaries", discussing the problems often left unaddressed by researchers working with this kind of data. @xcite analyse not only the ethical dilemma revolving around this type of studies, but also their feasibility and the integration of the social component into the compound of a socio-technical system. when it comes to detecting mental illnesses from social media data, we have many examples at hand, which often look at data coming from the reddit communities that serve as support groups for people struggling with one illness or another. most articles look at a single illness in comparison to a control group, e.g. @xcite and @xcite for schizophrenia. our goal is to detect a wide range of mental illnesses using deep learning techniques, which seem like the best candidates for this task.
@xcite employ deep learning methods similar to ours, but we concentrate on obtaining better results by training the models on individual posts rather than posts grouped by users, which might not work as expected. for example, if a user produced few contributions or has a fresh account, they would have few posts available; on the other hand, some users are of the observing type and rarely contribute to discussions. one aspect worth mentioning is the nature of the data used in many classification tasks. texts containing explicit content and linguistic cues pertaining to the properties of a certain illness are often used. @xcite and thorstad et al. (2019) perform automatic text classification by their authors' mental illnesses, with good results, on texts that specifically discussed these conditions on dedicated forums. nevertheless, these classifications are of little help in finding at-risk populations when looking at general text, which does not include mental illness topics. among the few researchers who report using datasets containing general discussions coming from people who self-reported their diagnosis in one of the support communities are @xcite and @xcite. the results are favorable and leave room for improvement. we believe it is important to experiment further for a better understanding of the ways in which mental illnesses can be detected in earlier stages, and of how even general discussions contain traces of how mental illnesses manifest themselves in language. in addition, this is a direction worthy of exploration because the persons asking for guidance represent a very small and idiosyncratic part of the population battling mental illnesses, thus early mental illness detection from general text might be of real help.

## data

we used the smhd dataset introduced by @xcite. this dataset contains non-explicit texts: a large-scale resource for exploring online language usage for multiple mental health conditions. its authors test some classification algorithms, but no deep learning models, and they also employ liwc categories for classification. these categories include standard linguistic dimensions - pronouns, articles, present tense, future tense; psychological processes - positive emotions, negative emotions, anger, anxiety; and personal concerns - work, achievements. the smhd dataset contains texts extracted from reddit's general discussion communities, grouped by users and illnesses. individuals diagnosed with a mental illness were detected by searching for self-reports in the dedicated support groups. the dataset features multiple illnesses, which are present in the psychiatric taxonomy dsm-5 (american psychiatric association, 2013). as stated by the authors of the dataset, "six conditions are top-level dsm-5 disorders: schizophrenia spectrum disorders (schizophrenia), bipolar disorders (bipolar), depressive disorders (depression), anxiety disorders (anxiety), obsessive-compulsive disorders (ocd) and feeding and eating disorders (eating). the three other conditions are one rank lower: post-traumatic stress disorder (ptsd) is classified under trauma-and stress-related disorders, and autism spectrum disorders (autism) and attention-deficit/hyperactivity disorder (adhd) under neurodevelopmental disorders". the opposing group of users is the control one, whose members are selected based on having no posts in the support groups and at least 50 posts on reddit. the complete dataset contains 20,406 diagnosed users and 335,952 control users.
the texts do not contain any terms related to mental health, neither for the diagnosed groups nor for the control ones. our experiments use just a selection of each group of illnesses to speed up the computation process. the models are not user centered and learn from each individual post. selecting data based on a fixed number of users was not suitable for our tasks due to the imbalance at the user level when it comes to the number of comments and posts available; therefore, we randomly selected 50,000 posts for each group of users. the numbers shown in tables 1 and 2 might reflect certain particularities of an illness and of how the diagnosed users communicate in the online environment. this variation depends also on how the users engage, whether they create posts or comment on somebody else's, and on the format adopted by each community - if pictures are posted often, then the comments are on the shorter side; if storytelling is the center of the community, people engage with the purpose of telling their opinion or a similar story, hence the lengthier texts. the authors of the dataset conducted a linguistic analysis based on liwc categories, and several differences were observed between the diagnosed groups and the control users. @xcite and @xcite underline that the pronounced usage of first-person singular with most conditions is consistent with the theory that illness drives one towards self-focus. an interesting finding underlining the bias of the dataset towards the predominantly male demographic is that female references point to discussions about relationships and love-related issues in the bipolar, depression and anxiety groups. reddit does not impose a very strict post length limit, hence the diverse lengths; however, the deep learning models we used impose a limit for training. the smhd dataset has already undergone preprocessing, but we needed more cleaning: we remove any posts shorter than 4 tokens, since very short texts are often noise, like thankful comments or very short approval phrases, which would confuse the model and do not carry significant meaning. the next section will look at another data-related problem, namely the ethics and biases of working with social media data.

## ethics and biases

reddit is a social media application whose users are part of communities and engage in discussions. each social media network represents a cluster of people who are defined by certain characteristics. the hootsuite yearly report @xcite shows that more than 60% of reddit users are males aged 18 to 34. accordingly, studies show that there is a tendency in males to display less emotionally charged input due to the social stigma in the offline world. @xcite find that men often avoid seeking professional help or talking about their problems. concealing their emotional state in real life is a strategy to avoid prejudice and is not something specific to the female population. ireland and mehl's (2014) research conducted in the psychology area shows that manifestations of negative emotions are muted across many settings and situations. alternatively, @xcite and @xcite demonstrate that people tend to discuss personal things in anonymous spaces and share unpopular opinions. in this situation, reddit represents a good source of data for a population which is underrepresented in clinical studies. @xcite prove that some platforms might be more attractive for a demographic than others. behavioral biases imply that users of a platform display a particular behavior, observable in how they interact with each other or in what type of content they create. one such bias is the way in which users seek and share information. @xcite discovered that users diagnosed with one illness behave differently in this aspect from the others. nevertheless, we cannot claim that this is representative of all the individuals diagnosed with a mental illness. there are certain biases plaguing studies based on social media, which should be at least mentioned for awareness. here, we consider the population bias a positive fact, which enables studies targeting young adult and adult males. however, this bias does not affect our dataset much, because the data collected comes from neutral communities where a variety of topics is discussed.

## discriminative features

we ran a naïve bayes classifier in order to find the most informative features for each category in our dataset. we used the classifier implemented in the scikit-learn library by @xcite to get the top n most informative words by score. our experiment includes the 9 illnesses as labels, plus the control group. the top n words can be seen in the accompanying table; a sketch of this step follows below.
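this is a minimal illustration of the feature-ranking step using scikit-learn's multinomialnb; the toy posts, labels and top-3 cut-off are stand-ins for the smhd samples and the paper's top n, not the authors' exact code.

```python
# rank the most informative words per class with a naïve bayes classifier;
# the three toy posts below stand in for the smhd post samples.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

posts = ["counting calories again today", "the weather is nice outside", "cannot sleep at all lately"]
labels = ["eating", "control", "depression"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts)
clf = MultinomialNB().fit(X, labels)

# for each class, sort words by per-class log probability and keep the top n
features = np.array(vectorizer.get_feature_names_out())
for i, cls in enumerate(clf.classes_):
    top = features[np.argsort(clf.feature_log_prob_[i])[::-1][:3]]
    print(cls, list(top))
```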
## classification methods

identifying significant differences between our groups was the main drive for training classifiers. we trained 3 different models based on the transformers architecture to see how each performs on binary classification between a diagnosed group and a control one. the first model we used is bertforsequenceclassification by @xcite. in order to set up this model, we experimented with different hyperparameters, loss functions, batch sizes and numbers of epochs. our machine needed smaller batch sizes than the ones recommended by the authors of bert to be able to train the model, so we used 3. we established a learning rate of 1e-5 for the adamw optimizer implemented by @xcite. we trained the model for 3 epochs only, because we noticed overfitting starting with the 4th epoch. the second method we used is xlnet, another method for pre-training language representations, introduced by @xcite. xlnet was meant to overcome the limitations imposed by bert with its autoregressive model, and does so by outperforming it on 20 tasks, as shown by @xcite. for this method, we have a differently formatted input and there is no limit for the length of the input texts; however, the input arrays need to be of the same size. this is addressed by padding the inputs that do not meet the size of the longest sequence, where padding means simply adding 0s until the length is met. for this classifier we had to limit the length of sequences to 126 due to computational resources. the optimum batch size was 8. the optimizer we used was adamw with the same hyperparameters as for bert, and we trained this model for 4 epochs; with a training set of approximately 100,000 texts, this amounts to 50,000 training steps. the last model we used, roberta implemented by @xcite, is facebook ai's training method and it promises to improve on bert. the researchers involved in implementing roberta show that bert was undertrained and that there is still a long way to go in terms of design choices and the way in which improvements are reported. we did not use the full dataset due to its large size and the subsequent long training times. finetuning roberta implies loading the weights of the pretrained model, in our case the robertaforsequenceclassification model. we use a sequence length of 256 and a batch size of 8, and the optimizer used here is adamw with a learning rate of 2e-5.
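for reference, the bert setup described above can be sketched with the hugging face transformers api. this is a minimal sketch under stated assumptions: the two toy posts, the label convention and the max_length of 128 are illustrative, while the batch size of 3, learning rate of 1e-5, adamw optimizer and 3 epochs follow the text.

```python
# minimal fine-tuning loop for a binary diagnosed-vs-control post classifier
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

posts = ["short toy post one", "short toy post two"]  # stand-ins for smhd posts
labels = torch.tensor([1, 0])                         # 1 = diagnosed, 0 = control
enc = tokenizer(posts, padding=True, truncation=True, max_length=128, return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels), batch_size=3)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(3):  # training stops at 3 epochs to avoid the overfitting observed later
    for input_ids, attention_mask, batch_labels in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=batch_labels)
        out.loss.backward()
        optimizer.step()
```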
## results

we obtained the results using 50,000 posts for each group alike. the compound of 100,000 posts for each binary classifier was split into 80,000 posts for training and 20,000 for testing. we trained our models with different hyperparameters until we reached the optimum ones detailed above. we also used a naïve bayes classifier to discover the most important features for each group of users. our results add to the group of articles showing good prospects for this field. an encouraging finding is the sufficiency of focusing on general text rather than mental health support groups, and of classification by posts rather than individuals or groups. another takeaway is the sufficiency of post-level classification and an avenue to improve this approach in future work by paying attention to contextual cues such as time, events, entailment of posts or any other possible triggers that might help the earlier detection of a mental illness. further experimentation with different setups and more diverse data is also required. this would benefit our research and increase the possibility of future integration of automated tools, which could assist clinicians in the earlier detection of mental health issues.
| 11477
|
534
| 2023
|
Beyond Candidates: Adaptive Dialogue Agent Utilizing Persona and Knowledge
|
To build ultimate dialogue agents, previous studies suggest models that ground both persona and knowledge. However, applying the dialogue system directly to the usual conversation is still limited because the system requires a complete sentence-formed persona and knowledge candidate sets from the given dataset. In contrast to the dialogue setting in the dataset, humans utilize semantic concepts in their minds rather than a set of pre-defined candidate sentences. Following this manner of human dialogue, we suggest an adaptive dialogue system that is applicable to situations where complete sentence-formed candidates are not given. Our model generates consistent and relevant persona descriptions and identifies relevant knowledge for engaging and knowledgeable responses, even with fragmentary information. We show that our model outperforms previous baselines that utilize persona and knowledge candidate sentences and conduct the human evaluation on the machine-generated responses. In addition, we conduct ablation studies to demonstrate the effectiveness of each component of our model. Furthermore, we apply our model to other dialogue datasets that only ground knowledge or persona to showcase its adaptability. Our code is available at https://github.com/dlawjddn803/BeCand.
|
https://aclanthology.org/2023.findings-emnlp.534
|
## introduction

in usual conversations, humans utilize the semantic concepts in their minds in terms of the dialogue topic and the preferences of the interlocutor. with this semantic level of concepts, humans communicate with each other by aggregating the concepts to convey knowledgeable and empathetic responses @xcite. it implies that people converse by adaptively reorganizing and retrieving additional information with their semantic concepts, encompassing knowledge and persona, not by relying on pre-defined sources @xcite @xcite. @xcite and @xcite seem to adhere to this human-like approach to conversation by referring to persona and knowledge; however, they neglect humans' capability for semantic concept reconstruction and retrieval by requiring pre-defined candidate sets to ground, as in figure 1 @xcite. as knowledge and persona candidates for the agents are not given in usual conversation, the dependency on the candidates eventually limits their applicability to candidate-free situations, as depicted in figure 1 (a). to build dialogue agents adaptive to the candidate-agnostic situation, two branches of studies have been conducted. in knowledge-grounded conversation, knowledgeable agents employ non-parametric memory-based retrieval to overcome candidate-agnostic situations @xcite. similarly, persona-aware dialogue agents consider out-of-persona situations by extending persona sentences from a few persona concepts @xcite @xcite. even though both streams of research focus on the candidate-agnostic conversational situation, they only leverage a single source for grounding, rather than utilizing both persona and knowledge simultaneously. in this paper, we propose a dialogue agent utilizing persona and knowledge that is adaptive to the candidate-free situation. to this end, our method consists of 1) a knowledge retriever, 2) a concept-based persona generator, 3) a dialogue-persona aligner, and 4) a response generator. when the knowledge concept is given, the knowledge retriever finds the relevant knowledge from the knowledge base. our concept-based persona generator then produces complete sentences from fragmentary persona concepts. the generated persona descriptions are then validated by the persona aligner with regard to both consistency and relevancy, and the validated persona descriptions are used as input to the response generator. experimental results show that our candidate-free model outperforms other baselines. also, we show through ablation studies that the concept-based persona generator and the persona aligner boost the performance of the dialogue agents. we conduct a human evaluation of our model's responses, and the result implies that our method is effective in building a persona-knowledge dialogue agent without candidate sentences. moreover, we demonstrate that our method is capable of utilizing other dialogue datasets grounding a single source, such as personachat @xcite or wizard-of-wikipedia (wow) @xcite, showing the adaptiveness of our proposed model. in qualitative results, it is shown that the generated responses are comparable to the ground truth answers without the given candidates.

## method

we propose adaptive dialogue agents that generate responses without persona and knowledge candidates. to this end, we assume that only the knowledge and persona concepts are given to the agent for knowledgeable and engaging responses. first, 1) the knowledge retriever retrieves the relevant paragraphs for the knowledge concept, and 2) the concept-based persona generator produces persona descriptions from the given short persona concepts. then, 3) the persona aligner decides whether the generated persona descriptions are relevant to the dialogue history and whether the sentences are consistent with the previous dialogue history. afterward, 4) the response generator provides knowledgeable and engaging responses using the predicted knowledge paragraphs and persona descriptions, as sketched below.
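the data flow through the four components can be sketched as follows. the function bodies are placeholders (the paper's actual retriever, generator, aligner and response generator are neural models), so this only illustrates how the outputs of each stage feed the next.

```python
# structural sketch of the four-stage candidate-free pipeline
from typing import List

def retrieve_knowledge(knowledge_concept: str) -> List[str]:
    # 1) knowledge retriever: fetch relevant paragraphs from a knowledge base,
    # given only a concept rather than candidate sentences
    return [f"a paragraph about {knowledge_concept}"]

def generate_persona(persona_concepts: List[str]) -> List[str]:
    # 2) concept-based persona generator: expand fragmentary concepts into
    # complete persona description sentences
    return [f"i am interested in {c}." for c in persona_concepts]

def align_persona(descriptions: List[str], history: List[str]) -> List[str]:
    # 3) persona aligner: keep only descriptions relevant to the dialogue
    # history and consistent with it (placeholder filter)
    return [d for d in descriptions if d]

def generate_response(history: List[str], knowledge: List[str], persona: List[str]) -> str:
    # 4) response generator: condition on history, retrieved knowledge and
    # validated persona descriptions
    return "a knowledgeable and engaging response"

history = ["have you ever visited seoul?"]
knowledge = retrieve_knowledge("seoul")
persona = align_persona(generate_persona(["travel", "street food"]), history)
print(generate_response(history, knowledge, persona))
```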
## conclusions

in this paper, we introduced an adaptive dialogue agent utilizing persona and knowledge without the candidates given in the dataset. due to the absence of knowledge candidates, the knowledge retriever retrieves the relevant paragraphs for the knowledge concept from the knowledge base. also, the concept-based persona generator outputs persona descriptions from the fragmentary persona concepts through a retrieve-and-generate architecture. the generated persona descriptions are then validated through a persona aligner with regard to relevancy and consistency. through experiments, we showed that our method is effective even though only the persona concept and knowledge concept are given with the dialogue. we also presented ablation studies on each component of our model. moreover, we conducted a human evaluation to show the improved quality of our models' responses, which is also visible in the qualitative results. to show its applicability and adaptiveness, we reported the experimental results of our method on the focus, wow, and personachat datasets.
| 24623
|
218
| 2022
|
You can’t pick your neighbors, or can you? When and How to Rely on Retrieval in the kNN-LM
|
Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs. One such approach, the kNN-LM, interpolates any existing LM’s predictions with the output of a k-nearest neighbors model and requires no additional training. In this paper, we explore the importance of lexical and semantic matching in the context of items retrieved by kNN-LM. We find two trends: (1) the presence of large overlapping n-grams between the datastore and evaluation set plays an important role in strong performance, even when the datastore is derived from the training data; and (2) the kNN-LM is most beneficial when retrieved items have high semantic similarity with the query. Based on our analysis, we define a new formulation of the kNN-LM that uses retrieval quality to assign the interpolation coefficient. We empirically measure the effectiveness of our approach on two English language modeling datasets, Wikitext-103 and PG-19. Our re-formulation of the kNN-LM is beneficial in both cases, and leads to nearly 4% improvement in perplexity on the Wikitext-103 test set.
|
https://aclanthology.org/2022.findings-emnlp.218
|
## introduction

recently, a new class of language models (lms) that are augmented with retrieval capabilities has led to substantial improvements over standard neural lms @xcite @xcite @xcite. furthermore, lms with retrieval warrant investigation as they provide benefits for many tasks @xcite. these approaches generally involve a backbone neural lm that interacts with a retrieval component of varying complexity to find relevant documents. in this work, we analyze and improve a specific and simple type of retrieval-enhanced language model, the knn-lm originally proposed by @xcite. the knn-lm is non-parametric - it works by retrieving instances from an external datastore at each decoding timestep, and it improves language model performance without requiring additional training. in essence, the knn-lm interpolates a base lm's predicted probability distribution of the next word with a distribution formed by retrieving vectors similar to the current hidden state. the knn-lm includes two tunable hyperparameters: the number of items to retrieve (k) and an interpolation coefficient (λ). the method's effectiveness depends crucially on the source and size of the retrieval datastore: it is most effective when using a very large datastore with orders of magnitude more tokens than seen in the training corpus, but @xcite also observe improvements with smaller datastores. modern neural models have massive capacity to memorize their training data @xcite. nonetheless, simply using an lm's training corpus as the source for the datastore works well for the knn-lm, as test perplexity on the wikitext-103 dataset decreases substantially from 18.65 to 16.12. however, it remains unclear how and why the knn-lm achieves these improvements. which types of tokens and contexts does it improve most on? as an effort to answer this question, and to motivate new, more effective methods to enhance lms with retrieval, we analyze the knn-lm's behavior with respect to parts of speech, semantic similarity between context and retrievals, and lexical overlap. among other things, our analysis reveals that the knn-lm is helpful beyond factual knowledge (i.e. proper nouns) and improves perplexity across many word types, so it would be difficult to extend the knn-lm using syntactic information alone. on the other hand, we find that the performance of the knn-lm highly correlates with lexical similarity between the context and retrieved items, although this is somewhat domain specific and does not fully explain its strong performance. semantic similarity is nearly as accurate a predictor of knn-lm performance as lexical similarity, making it a strong candidate for extending the knn-lm. based on our analysis, we devise a simple scheme to extend the knn-lm, following the intuition that when retrieval quality is high (measured by semantic similarity), the model should rely more heavily on the knn-based prediction. since retrieval in the knn-lm is latent, we use semantic similarity as a proxy to measure retrieval relevance. concretely, our method is an adaptive version of the knn-lm that assigns the interpolation coefficient according to retrieval quality (see figure 1), as sketched below. while it introduces new hyperparameters, we show that the additional hyperparameter tuning comes at negligible cost.
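the following is a minimal numpy sketch of the interpolation just described. the bucket boundaries and per-bucket coefficients are illustrative stand-ins for values tuned on validation data, not the paper's numbers.

```python
import numpy as np

def knn_lm_prob(p_lm: np.ndarray, p_knn: np.ndarray, lam: float) -> np.ndarray:
    # vanilla knn-lm: interpolate the two next-token distributions with a
    # (possibly adaptive) coefficient lambda
    return lam * p_knn + (1.0 - lam) * p_lm

def adaptive_lambda(similarity: float) -> float:
    # re-formulation: the coefficient grows with retrieval quality, here via
    # per-bucket values over the semantic similarity of retrieved items
    buckets = [(0.25, 0.05), (0.50, 0.15), (0.75, 0.30), (1.01, 0.50)]  # (upper bound, lambda)
    for upper, lam in buckets:
        if similarity < upper:
            return lam
    return buckets[-1][1]

vocab = 5
p_lm = np.full(vocab, 1.0 / vocab)                # base lm next-token distribution
p_knn = np.array([0.70, 0.10, 0.10, 0.05, 0.05])  # distribution over retrieved neighbors
sim = 0.8  # semantic similarity between the query and the retrieved items
print(knn_lm_prob(p_lm, p_knn, adaptive_lambda(sim)))
```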
importantly, our empirical results demonstrate that our newly introduced re-formulation of the knn-lm is beneficial for both encyclopedic text and book data, and leads to an improvement of nearly 4% in perplexity over the vanilla knn-lm, measured on the english language modeling wikitext-103 test set. broadly, we hope our insights and methods help to facilitate future development of retrieval-augmented lms.

## language modeling with knn-lm

the knn-lm improves over a base language model by explicitly memorizing the lm's training data. it stores exact sentences from the training data in its datastore, which can be accessed during language model inference to produce a k-nearest-neighbor next-word distribution that is interpolated with the base model's prediction. interpolation is preferred for similar reasons as approximate matrix factorization in collaborative filtering - the universe of text patterns is sparse, and lossless compression of the training data alone is not sufficient to model new patterns. in this section, we explain the specifics of the knn-lm's inner workings in order to guide our analysis.

## a new formulation for knn-lm

in the previous section, we analysed when the knn-lm is most helpful. we use this information to design a new formulation of the knn-lm that can exploit this behavior. the original knn-lm uses the same interpolation coefficient (λ) for every example, which may not be desirable. as our analysis reveals, we can predict when the knn-lm is most beneficial, which naturally leads us to a new formulation with an adaptive coefficient λ(q) that depends on the retrieval quality for the query q: p(y | x) = λ(q) · p_knn(y | x) + (1 − λ(q)) · p_lm(y | x).

## experiments and results

to measure the importance of retrieval quality in the knn-lm, we evaluate our approach (§4) on two english language modeling datasets. the first is the wikitext-103 corpus @xcite used by @xcite. the second is pg-19 @xcite, which we include because it consists of books and is thematically distinct from the encyclopedic documents in wikitext-103.

## discussion

in the previous sections, we used observations of the knn-lm to motivate our new approach that adapts the interpolation coefficient to retrieval quality. here we analyze results with our new method to see how they compare with baselines and deepen our understanding of retrieval-enhanced language modeling.

### can we adapt to lexical similarity?

the original knn-lm has similar performance when its results are stratified by either semantic or lexical similarity (§3.1), but in our new formulation we adapt the coefficient only according to semantic similarity. what if we use lexical similarity instead? we explore this possible alternative and report the results for wikitext-103. in general, we find that both semantic and lexical similarity yield similar results when used to bucket queries. for the best setting, when @xmath0, the learned vectors work better, reflecting recent findings that dense vectors outperform sparse representations for various retrieval-related tasks @xcite. hence, throughout this paper we adapt the coefficient using semantic similarity and @xmath1. interestingly, for lower values of k the bag-of-words representation has an edge over semantic similarity. perhaps this suggests lexical similarity is more precise, and if retrieving many items is costly, adapting the coefficient according to lexical similarity might be particularly helpful.

## related work

we extend the knn-lm by adapting the interpolation coefficient to retrieval quality (measured by semantic similarity). adaptret @xcite models the interpolation coefficient as a function of the query.
this is convenient, since one can skip retrieval if the coefficient is below a threshold, although it requires training a separate adaptor network. crucially, their coefficient predictions are based solely on query features and do not take into account whether retrieval is successful. our approach incorporates the quality of retrieval and improves language modeling results. it is simple and effective, and only needs lightweight hyperparameter tuning without any additional training. retomaton @xcite provides an alternative means to bypass retrieval: they build a graph over the datastore, and at each time step they either retrieve like the original knn-lm or re-use the previously retrieved neighbors to traverse the graph. this is more efficient than adaptret, providing better results at lower cost. both adaptret and retomaton are designed with efficiency in mind; they rely on approximate distance using product quantization and perform about as well as the exact-distance version of the knn-lm. we improve upon the knn-lm by about 4% perplexity. there are many recent works that use retrieval components for language tasks besides language modeling, such as question answering @xcite @xcite, dialogue generation @xcite, conversational search @xcite, semantic parsing @xcite, data augmentation @xcite, and machine translation @xcite @xcite. there are alternatives to the knn-lm that incorporate document structure @xcite, but their experimental setup is not comparable with ours. in our baselines we only consider models matching the original knn-lm backbone, although alternative architectures show promise for retrieval-enhanced language modeling @xcite @xcite. scaling the datastore @xcite or the model size @xcite has been shown to effectively improve language modeling. alternatively, text generation may be improved through more advanced ranking @xcite or decoding @xcite algorithms. researchers have also explored fundamental extensions to knn that are agnostic to language data: @xcite spatially partition the datastore, adapting the value of k for each region, while, keeping k fixed, @xcite instead adapt the shape of the neighborhood based on local information.

## conclusion

in this paper, we have proposed a novel and effective re-formulation of the knn-lm. our approach adapts the interpolation coefficient to the quality of retrieved documents, measured by semantic similarity. we motivate our approach through extensive analysis, which also provides insights into the types of tokens and contexts the knn-lm is most helpful for. importantly, we empirically demonstrate the effectiveness of our approach through experiments on two domains, wikitext-103 (encyclopedic text) and pg-19 (book data), and outperform the original knn-lm by 4% test perplexity on the wikitext-103 language modeling corpus.
| 16675
|
161
| 2020
|
HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training
|
We present HERO, a novel framework for large-scale video+language omni-representation learning. HERO encodes multimodal inputs in a hierarchical structure, where local context of a video frame is captured by a Cross-modal Transformer via multimodal fusion, and global video context is captured by a Temporal Transformer. In addition to standard Masked Language Modeling (MLM) and Masked Frame Modeling (MFM) objectives, we design two new pre-training tasks: (i) Video-Subtitle Matching (VSM), where the model predicts both global and local temporal alignment; and (ii) Frame Order Modeling (FOM), where the model predicts the right order of shuffled video frames. HERO is jointly trained on HowTo100M and large-scale TV datasets to gain deep understanding of complex social dynamics with multi-character interactions. Comprehensive experiments demonstrate that HERO achieves new state of the art on multiple benchmarks over Text-based Video/Video-moment Retrieval, Video Question Answering (QA), Video-and-language Inference and Video Captioning tasks across different domains. We also introduce two new challenging benchmarks How2QA and How2R for Video QA and Retrieval, collected from diverse video content over multimodalities.
|
https://aclanthology.org/2020.emnlp-main.161
|
## introduction

inspired by bert @xcite, large-scale multimodal pre-training has prevailed in the realm of vision-and-language research @xcite @xcite. there are many early players in the area, including vilbert @xcite, lxmert @xcite, uniter @xcite, vl-bert @xcite and unicoder-vl @xcite. however, most large-scale pre-trained models are tailored for static images, not dynamic videos. videobert @xcite is the first to apply bert to learn joint embeddings for video-text pairs, but since only discrete tokens are used to represent video frames, rich video frame features are not fully utilized. to remedy this, cbt @xcite proposes to use a contrastive loss, but mainly for video representation learning alone, with text input only considered as side information. univilm @xcite takes a step further and considers both understanding and generation tasks. several constraints inherently limit the success of existing models. (i) most model designs are direct adaptations of bert, taking a simple concatenation of subtitle sentences and visual frames as input, while losing the temporal alignment between video and text modalities. (ii) pre-training tasks are directly borrowed from image+text pre-training methods, without exploiting the sequential nature of videos. (iii) compared to the diverse image domains investigated in existing work, video datasets used in current models are restricted to cooking or narrated instructional videos @xcite, excluding video sources that contain dynamic scenes and complex social interactions. to tackle these challenges, we present a new video-and-language large-scale pre-training framework - hero (hierarchical encoder for omni-representation learning). as illustrated in figure 1, hero takes as input a sequence of video clip frames and their accompanying subtitle sentences. instead of adopting a flat bert-like encoder, hero encodes multimodal inputs in a hierarchical fashion, with (i) a cross-modal transformer to fuse a subtitle sentence and its accompanying local video frames, followed by (ii) a temporal transformer to obtain a sequentially contextualized embedding for each video frame, using all the surrounding frames as global context. the proposed hierarchical model first absorbs visual and textual local context at the frame level, which is then transferred to a global video-level temporal context. experiments show that this novel model design achieves better performance than a flat bert-like architecture. four pre-training tasks are designed for hero: (i) masked language modeling (mlm); (ii) masked frame modeling (mfm); (iii) video-subtitle matching (vsm); and (iv) frame order modeling (fom). compared to prior work, the key novelty is vsm and fom, which encourage explicit temporal alignment between the modalities as well as full-scale exploitation of the sequential nature of video input. in vsm, the model considers not only global alignment (predicting whether a subtitle matches the input video clip), but also local temporal alignment (retrieving the moment where the subtitle should be localized in the video clip). in fom, we randomly select and shuffle a subset of video frames, and the model is trained to restore their original order. extensive ablation studies demonstrate that both vsm and fom play a critical role in video+language pre-training.
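as a toy illustration of fom, the input and supervision can be constructed as below: a subset of frame positions is shuffled and the model must recover the original position of each shuffled frame. the 15% shuffle ratio and the string frames are illustrative assumptions, not the paper's exact recipe.

```python
import random

def make_fom_example(frames: list, shuffle_ratio: float = 0.15):
    n = len(frames)
    k = max(2, int(n * shuffle_ratio))
    chosen = sorted(random.sample(range(n), k))  # positions selected for shuffling
    permuted = chosen[:]
    random.shuffle(permuted)
    shuffled = frames[:]
    targets = {}
    for src, dst in zip(chosen, permuted):
        shuffled[dst] = frames[src]   # the frame from position src now sits at slot dst
        targets[dst] = src            # supervision: predict the original position
    return shuffled, targets

frames = [f"frame_{i}" for i in range(10)]
shuffled, targets = make_fom_example(frames)
print(shuffled)
print(targets)  # original index to restore for each shuffled slot
```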
to empower the model with richer knowledge beyond the instructional videos used in prior work, we jointly train hero on both howto100m (narrated instructional videos) @xcite and a large-scale tv dataset (containing tv episodes spanning different genres) @xcite @xcite. compared to the factual descriptions in howto100m, the tv dataset contains more complex plots that require comprehensive interpretation of human emotions, social dynamics and causal relations of events, making it a valuable supplement to howto100m and a closer approximation to real-life scenarios. existing pre-trained models are evaluated on the youcook2 @xcite and msr-vtt @xcite datasets. youcook2 focuses on cooking videos only, and the captions in msr-vtt are very simple. to evaluate our model on more challenging benchmarks, we collect two new datasets for video-moment retrieval and question answering, how2r and how2qa. in addition, we evaluate hero on popular retrieval and qa tasks such as tvr @xcite and tvqa @xcite, where hero outperforms existing models by a large margin. we further demonstrate the generalizability of our model by adapting it to (i) diverse downstream tasks: video-and-language inference and video captioning, achieving new state of the art on the violin @xcite and tvc @xcite benchmarks; and (ii) different video types: single-channel videos (video-only) and multi-channel videos (video + subtitle), reporting superior performance over the existing state of the art on didemo (anne hendricks et al., 2017a) and msr-vtt. our main contributions are summarized as follows. (i) we present hero, a hierarchical transformer-based model for video+language representation learning. (ii) we propose the new pre-training tasks vsm and fom, which complement the mlm and mfm objectives by better capturing temporal alignment between the modalities in both global and local contexts. (iii) different from previous work that mainly relies on howto100m, we include additional video datasets for pre-training, encouraging the model to learn from richer and more diverse visual content. (iv) we collect two new datasets based on howto100m for video-moment retrieval/qa, and will release the new benchmarks to foster future study. hero achieves new state of the art across all the evaluated tasks.

## related work

since the birth of bert @xcite, there has been continuing advancement in language model pre-training, such as xlnet @xcite, roberta @xcite, albert @xcite, unilm @xcite, and t5 @xcite, which epitomizes the superb power of large-scale pre-training. satellited around bert, there is parallel growing interest in model compression @xcite and extension to generation tasks @xcite. branching out from language processing to multimodal, subsequent studies also emerge in the vision+language space. prominent work includes vilbert @xcite, lxmert @xcite, vl-bert @xcite, unicoder-vl @xcite, b2t2 @xcite, uniter @xcite and villa @xcite. a detailed review can be found in appendix a.7. in contrast to the boom in the image+text area, pre-training for video+language is still in its infancy. so far, videobert @xcite, cbt @xcite, mil-nce @xcite, act-bert @xcite and univilm @xcite are the only existing works exploring this space, covering downstream tasks from text-based video retrieval @xcite and video question answering @xcite to video captioning @xcite.
in this paper, we aim to propel video+language omni-representation learning in four dimensions: (i) better model architecture design; (ii) better pre-training task design; (iii) diversification of training corpora; and (iv) new high-quality benchmarks for downstream evaluation.

## hierarchical video+language encoder

in this section, we explain the proposed hero architecture and the four pre-training tasks in detail.

## experiments

in this section, we describe comprehensive experiments on downstream tasks and provide ablation studies for in-depth analysis of different pre-training settings. to validate the effectiveness of hero, we evaluate on a wide variety of downstream tasks, including text-based video/video-moment retrieval, video question answering, video-and-language inference, and video captioning. we consider 6 existing benchmarks: tvr @xcite, tvqa @xcite, violin @xcite, tvc @xcite, didemo (anne hendricks et al., 2017a), and msr-vtt @xcite. detailed descriptions and evaluation metrics for each task can be found in appendix a.6.

## conclusion

in this paper, we present a hierarchical encoder for video+language omni-representation pre-training. our hero model presents a hierarchical architecture, consisting of a cross-modal transformer and a temporal transformer for multi-modal fusion. novel pre-training tasks are proposed to capture temporal alignment both locally and globally. pre-trained on two large-scale video datasets, hero exceeds state of the art by a significant margin when transferred to multiple video-and-language tasks. two new datasets for text-based video-moment retrieval and video qa are introduced to serve as additional benchmarks for downstream evaluation. we consider extension of our model to other video-and-language tasks as future work, as well as developing more well-designed pre-training tasks.

(results table, not reproduced here: r@1/r@10/r@100 for video retrieval and video-moment retrieval on each downstream task, with and without pre-training.)

pre-training greatly lifts hero performance on violin by approximately +2.9%. however, hero without pre-training presents worse performance than the sota baseline. unlike the multistream baseline, which leverages fine-grained region-level features, our results are reported with global frame-level features; therefore, it may be difficult for hero to capture the inconsistency between a hypothesis and the video content. for example, changes of hypotheses about region-level attributes (color, shape, etc.) may result in different conclusions. extending hero to region-level video representations could be an interesting future direction. hero is also extensible to a generation task: multi-modal video captioning. our results on tvc show that hero with pre-training surpasses mmt by a large margin. although pre-training is only applied to the encoder, it significantly improves hero performance on tvc across all metrics. when no pre-training is applied, hero is slightly inferior to the sota baseline. our hypothesis is that tvc has short video context (with a video length of 9 seconds on average), while our model is designed for long video representation learning (tvr/tvqa have a video length of 76 seconds on average). how to design pre-training tasks for mmt on tvc, or including decoder pre-training for hero, is left for future work.
| 3882
|
939
| 2023
|
Prompting with Pseudo-Code Instructions
|
Prompting with natural language instructions has recently emerged as a popular method of harnessing the capabilities of large language models (LLM). Given the inherent ambiguity present in natural language, it is intuitive to consider the possible advantages of prompting with less ambiguous prompt styles, like pseudo-code. In this paper, we explore if prompting via pseudo-code instructions helps improve the performance of pre-trained language models. We manually create a dataset of pseudo-code prompts for 132 different tasks spanning classification, QA, and generative language tasks, sourced from the Super-NaturalInstructions dataset. Using these prompts along with their counterparts in natural language, we study their performance on two LLM families - BLOOM, CodeGen. Our experiments show that using pseudo-code instructions leads to better results, with an average increase (absolute) of 7-16 points in F1 scores for classification tasks and an improvement (relative) of 12-38% in aggregate ROUGE-L scores across all tasks. We include detailed ablation studies which indicate that code comments, docstrings, and the structural clues encoded in pseudo-code all contribute towards the improvement in performance. To the best of our knowledge, our work is the first to demonstrate how pseudo-code prompts can be helpful in improving the performance of pre-trained LMs.
|
https://aclanthology.org/2023.emnlp-main.939
|
## introduction

prompting with natural language instructions has recently emerged as a popular method of harnessing the capabilities of large language models. in addition, models are often fine-tuned using instructions on a large collection of datasets to help improve the ability of lms to follow instructions and their performance on unseen tasks @xcite. however, natural language instructions can be ambiguous and under-specified, and therefore have multiple interpretations; including detailed instructions may not always be beneficial, as it can add to the complexity of reasoning for models. this has led to the growing body of work around 'prompt-engineering', where specialized prompting strategies are developed for different domains and task types @xcite @xcite @xcite. in addition, inference-time prompting strategies that specifically aid multi-step reasoning have also been found to be helpful - e.g. the inclusion of chain-of-thought reasoning in few-shot settings results in improved performance over standard prompts @xcite, as does the infamous "let's think step-by-step" prompt for boosting 0-shot performance @xcite.

listing 1: an example pseudo-code instruction for a task from @xcite. a successful model is expected to use the provided pseudo-code instructions and output responses to a pool of evaluation instances.

```
1 def generate_sentiment(sentence: str) -> str:
2     """for the given sentence, the task is to
3     predict the sentiment. for positive
4     sentiment return "positive" else return
5     "negative"."""
```

## related work

finetuning large language models on instruction datasets can enhance their performance and even their ability to generalize to unseen tasks @xcite. many aspects of instruction finetuning, such as the number of tasks, model size, and finetuning on chain-of-thought data, have been found to be useful @xcite. consequently, significant efforts have been invested in manually creating instruction datasets, as well as in using existing generative models to train and evaluate language models @xcite @xcite. the instructions available in instruction tuning datasets are mostly in natural language, but have been applied to both natural language tasks and programming tasks. alternatives to natural language instructions, such as programming language code, pseudo-code, and symbols (maccartney and manning, 2007), have not been thoroughly explored even for programming tasks. compared to natural language, code or pseudo-code has less ambiguity due to its inherent nature of using functions or steps that contribute towards accomplishing a task. this makes them a natural choice for specifying instructions. recently, a few works (marvinai; @xcite) have explored code and pseudo-code as inputs. unlike the contemporaneous work by @xcite, we find that pseudo-code instructions indeed provide better performance over nl instructions on a wide variety of tasks.

## dataset

the super-naturalinstructions dataset @xcite comprises 1,616 diverse nlp tasks, and each task contains the task instruction, positive/negative examples, and instances. we sampled a mixture of 132 tasks that did not require multilingual capabilities and re-wrote the instructions for this subset of the dataset using python constructs. note that we borrow python constructs only to express our prompts in pseudo-code, and our prompts do not result in executable python code. further, we do not include any additional steps/instructions that were not present in the original natural language instructions. all task instructions follow the schema described in listing 1. the schema consists of the following elements. function prototype: this defines the prototype of the main pseudo-code function. the function names are descriptive and summarize the task to be performed. they also include all variables passed as input, along with their data types and the return type. we follow the pep 8 style guidelines for writing the pseudo-code and use strongly typed prototypes. we avoid declaring global variables whenever possible and pass them as arguments to a method. to the extent possible, we also avoid the use of classes and enumerations. line number 1 in listing 1 provides an example function prototype for a sentiment classification task; a fuller illustration of how such a prompt is assembled follows below.
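as an illustration of how such a prompt could be assembled for one evaluation instance, consider the sketch below. the prototype and docstring mirror listing 1, but the assembly function, the appended call format and the example sentence are illustrative assumptions, not code from the released dataset.

```python
def build_prompt(sentence: str) -> str:
    # the instruction part: a descriptive, strongly typed prototype with a
    # docstring restating the task, following the schema described above
    instruction = (
        'def generate_sentiment(sentence: str) -> str:\n'
        '    """for the given sentence, the task is to predict the\n'
        '    sentiment. return "positive" or "negative"."""\n'
    )
    # the evaluation instance is appended as a call; the model is expected to
    # complete the return value (the prompt is never executed as python)
    query = f'\n>>> generate_sentiment({sentence!r})\n'
    return instruction + query

print(build_prompt("i loved every minute of this movie"))
```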
## evaluation

in order to study whether instruction specification via pseudo-code results in improved performance over baseline nl english instructions, we choose to experiment with the bloom @xcite and codegen @xcite models. our choice of models is motivated by the fact that these models have not been instruction-fine-tuned on the natural instructions dataset. in addition, they have both been trained on code and natural language data. the bloom models are trained on the roots corpus @xcite, consisting of 46 natural and 13 programming languages. the codegen models are trained on the pile corpus @xcite and google's publicly available bigquery and bigpython datasets @xcite. the bloom models have been trained on a mixture of natural language and code simultaneously; the codegen models we utilize were initially trained on natural language and subsequently received additional training on code.

## conclusion and future work

in this paper we presented our work on prompting with pseudo-code instructions. we created a collection of pseudo-code instructions comprising 132 nlp tasks from the super-naturalinstructions dataset @xcite. we evaluated the performance of the codegen and bloom model families at different model sizes and found that prompting all models with pseudo-code instructions results in significant gains as compared to prompting with nl instructions. our work opens up multiple directions of future work. it is interesting to observe that not only do pseudo-code instructions help when used with code models, they also work better on models designed for natural language tasks. in addition, the fact that the code models used in our experiments perform better than nl models, even when prompted with natural language instructions, suggests that it could be useful to explore instruction tuning of code models instead of pure nl models for nl applications. based on the findings of this paper, it may also be useful to consider the effects of instruction fine-tuning with pseudo-code instructions as opposed to nl instructions. another aspect worth studying is how traditional chain-of-thought may compare with pseudo-code prompts - how would reasoning enabled by pseudo-code instructions compare with chain-of-thought reasoning, with and without fine-tuning? further, pseudo-code instructions may not only be used as direct inputs to a model; they could also be used to create intermediate responses that a model needs to generate prior to returning a response.
| 22714
|
17
| 2024
|
Can Rule-Based Insights Enhance LLMs for Radiology Report Classification? Introducing the RadPrompt Methodology.
|
Developing imaging models capable of detecting pathologies from chest X-rays can be cost- and time-prohibitive for large datasets as it requires supervision to attain state-of-the-art performance. Instead, labels extracted from radiology reports may serve as distant supervision since these are routinely generated as part of clinical practice. Despite their widespread use, current rule-based methods for label extraction rely on extensive rule sets that are limited in their robustness to syntactic variability. To alleviate these limitations, we introduce RadPert, a rule-based system that integrates an uncertainty-aware information schema with a streamlined set of rules, enhancing performance. Additionally, we have developed RadPrompt, a multi-turn prompting strategy that leverages RadPert to bolster the zero-shot predictive capabilities of large language models, achieving a statistically significant improvement in weighted average F1 score over GPT-4 Turbo. Most notably, RadPrompt surpasses both its underlying models, showcasing the synergistic potential of LLMs with rule-based models. We have evaluated our methods on two English corpora: the MIMIC-CXR gold-standard test set and a gold-standard dataset collected from the Cambridge University Hospitals.
|
https://aclanthology.org/2024.bionlp-1.17
|
## introduction

supervised deep learning for medical imaging classification has accomplished significant milestones. in the chest x-ray (cxr) domain, such models have exhibited predictive capabilities on par with expert physicians @xcite and are being utilized in collaborative settings. annotating medical images, however, is expensive and arduous: it requires a committee of expert radiologists to resolve the inherently high degree of annotator variance and subjectivity @xcite. this issue is particularly problematic considering the global shortage of radiologists @xcite @xcite. instead, we often have access to a form of distant supervision: the radiology report. radiology reports are semi-structured free-text interpretations of an x-ray image and are generated as a routine part of clinical practice to communicate findings. in the past, rule-based models @xcite have been used to extract structured labels from radiology reports in various imaging datasets, including chestx-ray14 @xcite, chexpert @xcite, mimic-cxr @xcite and brax @xcite. however, those rule-based methods are often based on elementary techniques and thus exhibit limited robustness to syntactic variation. naturally, supervised deep learning models offer superior performance through their robustness to syntactic variability @xcite. in contrast, large language models (llms) represent a significant improvement over rule-based models in an unsupervised setting and have achieved impressive performance in the field of radiology @xcite @xcite. in this paper, we present radpert, a rule-based model built on the radgraph knowledge graph @xcite. radpert leverages entity-level uncertainty labels from radgraph, reducing the need for a comprehensive rule set and enhancing its resilience to syntactic variations. we have evaluated radpert internally on mimic-cxr and externally on a dataset collected from the cambridge university hospitals (cuh). radpert surpasses chexpert, the former rule-based state of the art (sota), by achieving a statistically significant improvement in weighted average f1 score. furthermore, we explore the collaborative potential of llms with rule-based models through radprompt. radprompt is a multi-turn prompting strategy that employs radpert as an implicit means of encoding medical knowledge (figure 1). in fact, radprompt, based on gpt-4 turbo, manages to outperform both its underlying models in a zero-shot setting.
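one plausible reading of the multi-turn strategy is sketched below: the rule-based radpert label is injected into a second turn so the llm can reconcile it with its own first-pass reading of the report. the prompt wording, the radpert_label stub and the chat-message format are illustrative assumptions, not the paper's exact prompts.

```python
from typing import Dict, List

def radpert_label(report: str, pathology: str) -> str:
    # stand-in for the rule-based system; it returns one of the four output
    # classes used for each pathology label
    return "uncertain"

def radprompt_messages(report: str, pathology: str) -> List[Dict[str, str]]:
    first_turn = (
        f"radiology report:\n{report}\n\n"
        f"is {pathology} present? answer null, positive, negative, or uncertain."
    )
    second_turn = (
        f"a rule-based extractor labelled {pathology} as "
        f"'{radpert_label(report, pathology)}'. reconsider the report and give "
        f"your final answer: null, positive, negative, or uncertain."
    )
    # the second message is sent after the model's reply to the first turn
    return [
        {"role": "user", "content": first_turn},
        {"role": "user", "content": second_turn},
    ]

for msg in radprompt_messages("mild cardiomegaly. possible right basal opacity.", "pneumonia"):
    print(msg["role"], ":", msg["content"], "\n")
```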
extensions of chexpert have been developed for brazilian portuguese @xcite and german @xcite . chexbert @xcite is a semi-supervised model pretrained on automatically extracted labels from the chexpert model, fine-tuned on manually annotated reports, and evaluated on 687 mimic-cxr gold-standard test set reports. however, the published model weights 1 of chexbert differ from the original model. this discrepancy complicates comparisons on the mimic-cxr dataset as the published model is fine-tuned on unspecified mimic-cxr manually annotated reports, which can potentially overlap with the mimic-cxr gold-standard test set. recent work has also explored the adoption of llms for radiology report classification. @xcite examine the zero and few-shot capabilities of llms. however, they mainly treat the task as a binary classification for each pathology. namely, for multitask classification, they only report the few-shot results on an unpublished institutional dataset. chex-gpt @xcite utilizes zero-shot gpt-4 labels as a distant supervision to fine-tune a bert-based model. nonetheless, they also simplify the task into binary classification. alternative approaches to the classification of chest x-rays (cxrs) explore moving away from the distantly supervised paradigm of training unimodal vision models on classifying structured labels extracted from radiology reports. in lieu of structured prediction, vision-language (vl) models are trained to align the embedding representations of cxrs with the representations of the corresponding radiology reports via self-supervised contrastive learning objectives @xcite @xcite @xcite . this alignment task is transformed into cxr classification through the cosine similarity of cxr embeddings to the embeddings of textual prompts representing the existence or absence of pathologies. however, vision models trained with the structured prediction paradigm outperform vl models such as chexzero @xcite , even when the latter utilizes an expert-annotated validation set for selecting optimal classification thresholds. in this paper, we will focus on improving the unsupervised sota for the multitask classification of radiology reports. ## limitations while this study demonstrates promising improvements in radiology report classification using the radprompt methodology, several limitations must be considered. radpert and radprompt are exclusively developed and tested for the english language. the study also centers around a list of pathologies typical of chest x-rays. as such, the extension of our methodologies to other languages, types of medical imaging, and additional pathologies was not verified. furthermore, previous studies have highlighted discrepancies between labels from radiology report annotations and those from the corresponding imaging study annotations @xcite . the source of such inconsistencies includes incomplete radiology report impressions, hierarchical relationships within labels, and the undeniable uncertainty of the task. in future work, we aim to study this effect within the cuh test set. due to ethical considerations, we are currently unable to perform inference for the cuh test set through third-party apis. thus, we have not evaluated radprompt externally for sota llms. we expect to overcome this limitation after the planned release of the cuh dataset. additionally, we cannot estimate the computational cost and carbon footprint for gpt-4-based radprompt due to a lack of specific metrics.
in the appendix, we provide carbon footprint estimates for the llama-2-based radprompt, whose footprint is significantly higher than that of radpert and chexpert. nonetheless, radpert delivers performance comparable to gpt-4 while operating on a commercial cpu with minimal carbon emissions, underscoring its benefits in resource-limited environments. finally, there is an inherent degree of ambiguity in classifying radiology reports, especially as it pertains to the uncertainty labels. we aim to extend current datasets with labels from multiple annotators. ## conclusions this paper introduced radpert, a rule-based system enhanced by the radgraph information schema, demonstrating significant improvements in the classification of radiology reports. by leveraging entity-level uncertainty labels, radpert reduces reliance on comprehensive rule sets. our evaluations show that radpert surpasses chexpert, the previous rule-based sota, by achieving an 8.0% (95% ci: 5.5%, 10.8%) increase in f1 score, with confidence intervals strongly supporting this improvement. further extending the application of radpert, we developed radprompt, a multi-turn prompting strategy that utilizes insights from radpert to enhance the zero-shot prediction capabilities of large language models. radprompt demonstrated a 2.1% (95% ci: 0.3%, 4.1%) improvement in f1 score over gpt-4 turbo, indicating its potential to refine predictions in clinical settings. these results highlight the growing synergy between structured rule-based systems and large language models, offering a promising direction for future research in biomedical natural language processing. as we continue to refine these tools, future work will focus on expanding the existing datasets and addressing the discrepancies between gold-standard image labels and those extracted from radiology reports.
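As a rough illustration of how a rule-based labeler can steer an LLM in a multi-turn prompt, consider the sketch below. The `llm_complete` function and the prompt wording are hypothetical placeholders, not the actual RadPrompt templates; the toy backend only makes the sketch executable end-to-end.

```python
# Hedged sketch of a two-turn prompting strategy in the spirit of RadPrompt:
# turn 1 asks the LLM for an initial judgement, turn 2 injects the rule-based
# (RadPert-style) label as evidence and asks for a final answer.
# `llm_complete` and the prompt texts are invented for illustration.

def llm_complete(messages: list[dict]) -> str:
    # Toy backend: echo the last label-like word mentioned in the prompt so
    # the sketch runs without a real API; swap in a chat-completion call.
    text = messages[-1]["content"].lower()
    for label in ("positive", "negative", "uncertain", "not mentioned"):
        if label in text:
            return label
    return "uncertain"

def radprompt_like(report: str, pathology: str, rule_label: str) -> str:
    messages = [
        {"role": "user",
         "content": f"Report:\n{report}\n\nIs '{pathology}' present? "
                    f"Answer: present, absent, unclear, or unmentioned."},
    ]
    first = llm_complete(messages)
    messages += [
        {"role": "assistant", "content": first},
        {"role": "user",
         "content": f"A rule-based extractor labelled '{pathology}' as "
                    f"'{rule_label}'. Considering this evidence, give your "
                    f"final answer as a single word."},
    ]
    return llm_complete(messages)

print(radprompt_like("Mild edema is likely.", "edema", "uncertain"))
# -> uncertain (from the toy backend)
```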
| 28,491
|
70
| 2,023
|
Multi-Modal Knowledge Graph Transformer Framework for Multi-Modal Entity Alignment
|
Multi-Modal Entity Alignment (MMEA) is a critical task that aims to identify equivalent entity pairs across multi-modal knowledge graphs (MMKGs). However, this task faces challenges due to the presence of different types of information, including neighboring entities, multi-modal attributes, and entity types. Directly incorporating the above information (e.g., concatenation or attention) can lead to an unaligned information space. To address these challenges, we propose a novel MMEA transformer, called Meaformer, that hierarchically introduces neighbor features, multi-modal attributes, and entity types to enhance the alignment task. Taking advantage of the transformer’s ability to better integrate multiple types of information, we design a hierarchical modifiable self-attention block in a transformer encoder to preserve the unique semantics of the different information types. Furthermore, we design two entity-type prefix injection methods to redintegrate entity-type information using type prefixes, which help to restrict the global information of entities not present in the MMKGs.
|
https://aclanthology.org/2023.findings-emnlp.70
|
## introduction multi-modal entity alignment (mmea) is a challenging task that aims to identify equivalent entity pairs across multiple knowledge graphs that feature different modalities of attributes, such as text and images. to accomplish this task, sophisticated models are required to effectively leverage information from different modalities and accurately align entities. this task is essential for various applications, such as cross-lingual information retrieval, question answering @xcite , and recommendation systems @xcite . mmea @xcite @xcite is challenging due to the heterogeneity of mmkgs (e.g., different neighbors, multi-modal attributes, distinct types), which makes it difficult to learn rich knowledge representations. previous approaches such as poe @xcite concatenated all modality features to create composite entity representations but failed to capture interactions among heterogeneous modalities. more recent works @xcite designed multi-modal fusion modules to better integrate attributes and entities, but still did not fully exploit the potential interactions among modalities. these methods also ignored inter-modality dependencies between entity pairs, which could lead to incorrect alignment. generally speaking, although mmkgs offer rich attributes and neighboring entities that could be useful for multi-modal entity alignment, current methods have limitations in (i) ignoring the differentiation and personalization of the aggregation of heterogeneous neighbors and modalities, leading to the misalignment of cross-modal semantics, and (ii) lacking the use of entity heterogeneity, resulting in non-discriminative representations of different meanings/types of entities. therefore, the major challenge of the mmea task is how to perform differentiated and personalized aggregation of heterogeneous information over neighbors, modalities, and types. although such information is beneficial to entity alignment, directly fusing it will lead to misalignment of the information space, as illustrated in figure 1. firstly, notable disparities between different modalities make direct alignment a challenging task. for example, both the visual attribute of the entity ruby in mmkg1 and the neighbor information of the entity ruby in mmkg2 contain similar semantics of programming, but data heterogeneity may impede effective utilization of this information. secondly, complex relationships between entities require a thorough understanding and modeling of contextual information and semantic associations. entities such as ruby, perl, and larry wall possess unique attributes, and their inter-relationships are non-trivial, necessitating accurate modeling based on contextual information and semantic associations. furthermore, the existence of multiple meanings for entities further exacerbates the challenge of distinguishing between two entities, as in the case of ruby, which has different meanings in mmkg1 and mmkg3, where it may be categorized as a jewelry entity or a programming language entity, respectively. to overcome the aforementioned challenges, we propose a novel multi-modal entity alignment transformer named moalign foot_0 . our framework hierarchically introduces neighbor, multi-modal attribute, and entity-type information to enhance the alignment task. we leverage the transformer architecture, which is known for its ability to process heterogeneous data, to handle this complex task.
moreover, to enable targeted learning on different modalities, we design a hierarchical modifiable self-attention block in the transformer encoder, which builds associations of task-related intra-modal features through the layered introduction. additionally, we introduce positional encoding to model entity representation from both structure and semantics simultaneously. furthermore, we integrate entity-type information using an entity-type prefix, which helps to restrict the global information of entities that are not present in the multi-modal knowledge graphs. this prefix enables better filtering out of unsuitable candidates and further enriches entity representations. to comprehensively evaluate the effectiveness of our proposed approach, we design training objectives for both entity and context evaluation. our extensive experiments on benchmark datasets demonstrate that our approach outperforms strong competitors and achieves excellent entity alignment performance. our contributions can be summarized as follows. multi-modal entity alignment task. multi-modal entity alignment @xcite @xcite aims to determine if two entities from different multi-modal knowledge graphs refer to the same real-world entity. this involves calculating the similarity between pairs of entities, known as alignment seeds. the goal is to learn entity representations from two multi-modal knowledge graphs. ## framework this section introduces our proposed framework moalign. as shown in figure 2, we introduce positional encoding to simultaneously model entity representation from both modality and structure. to hierarchically introduce neighbor and multi-modal attributes, we design a hierarchical modifiable self-attention block. this block builds associations of task-related intra-modal features through the layered introduction. furthermore, for integrating entity-type information, we design a prefix-injected self-attention mechanism, which helps to restrict the global information of entities not present in the mmkgs. additionally, moalign designs training objectives for both entity and context evaluation to comprehensively assess its effectiveness. ## conclusion this paper proposes a novel mmea framework. it incorporates cross-modal alignment knowledge using a two-stage transformer encoder to better capture complex inter-modality dependencies and semantic relationships. it includes an mmkg transformer encoder that uses self-attention mechanisms to establish associations between intra-modal features relevant to the task. our experiments show that our approach outperforms competitors.
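One plausible reading of the prefix-injected self-attention described above is a set of learned per-type key/value vectors prepended to the attention inputs, so that every token can attend to its entity type. The single-head sketch below follows that reading; the shapes, prefix length, and class name are assumptions for illustration, not MoAlign's actual implementation.

```python
# Minimal sketch of prefix-injected self-attention: each entity type owns a
# few learned key/value "prefix" vectors that every token can attend to.
# Dimensions and design choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefixInjectedSelfAttention(nn.Module):
    def __init__(self, d_model: int, num_types: int, prefix_len: int = 4):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # one (prefix_len, d_model) key and value table per entity type
        self.prefix_k = nn.Parameter(torch.randn(num_types, prefix_len, d_model))
        self.prefix_v = nn.Parameter(torch.randn(num_types, prefix_len, d_model))

    def forward(self, x: torch.Tensor, type_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); type_id: (batch,) entity-type indices
        q = self.q(x)
        k = torch.cat([self.prefix_k[type_id], self.k(x)], dim=1)
        v = torch.cat([self.prefix_v[type_id], self.v(x)], dim=1)
        scores = q @ k.transpose(1, 2) / (x.size(-1) ** 0.5)
        return F.softmax(scores, dim=-1) @ v

x = torch.randn(2, 5, 32)                      # two entities, five tokens each
attn = PrefixInjectedSelfAttention(32, num_types=10)
out = attn(x, torch.tensor([3, 7]))            # inject type-specific prefixes
print(out.shape)                               # torch.Size([2, 5, 32])
```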
| 24,158
|
21
| 2,023
|
Triple-Hybrid Energy-based Model Makes Better Calibrated Natural Language Understanding Models
|
Though pre-trained language models achieve notable success in many applications, they are often criticized for over-confident predictions. Specifically, in-distribution (ID) miscalibration and out-of-distribution (OOD) detection are the main concerns. Recently, some works based on energy-based models (EBM) have shown great improvements on both ID calibration and OOD detection for images. However, this is rarely explored in natural language understanding tasks due to the non-differentiability of text data, which makes EBM training more difficult. In this paper, we first propose a triple-hybrid EBM which combines the benefits of a classifier, a conditional generative model and a marginal generative model altogether. Furthermore, we leverage contrastive learning to approximately train the proposed model, which circumvents the non-differentiability issue of text data. Extensive experiments have been done on GLUE and six other multiclass datasets in various domains. Our model outperforms previous methods in terms of ID calibration and OOD detection by a large margin while maintaining competitive accuracy.
|
https://aclanthology.org/2023.eacl-main.21
|
## introduction since many industrial applications involve safety-critical domains such as healthcare @xcite @xcite @xcite , anticipating credit card defaults @xcite and self-driving @xcite , it's essential for machine learning systems to provide not only accurate but also well-calibrated predictions @xcite , which can help to decide whether they can be trusted. however, models achieving high accuracy usually lead to overconfidence and miscalibration @xcite @xcite . this motivates an interesting and important area that attempts to achieve a better trade-off between accuracy and calibration. in addition to id calibration, it's more important for machine learning models to produce high uncertainty when ood data is observed, rather than to produce wrong yet wildly confident predictions. related works. to overcome the problem of miscalibration, numerous methods have been proposed. the natural way is post-hoc calibration, which transforms the output of the original network into calibrated confidence scores while maintaining the network's accuracy @xcite @xcite . the second method to mitigate miscalibration is to add regularizations during training, such as label smoothing @xcite and mixup @xcite . @xcite and @xcite further convey that the aforementioned methods can be applied to improve the calibration of pre-trained language models on nlu tasks. the third way is to design a specific loss function to minimize the discrepancy between accuracy and confidence. for example, @xcite lately propose id and ood regularizers to leverage the relationship between accuracy and uncertainty, obtaining a significant improvement over previous methods in id calibration and ood detection. energy-based models. in another line of work, the joint ebm (jem; @xcite) has shown great improvements on id calibration and ood detection for images without an explicit calibration correction mechanism. the core idea is to reinterpret a joint distribution p_θ(x, y) from a neural classifier p_θ(y|x) in the perspective of ebms and jointly optimize the marginal distribution p_θ(x) and the neural classifier p_θ(y|x). @xcite further investigate the ood detection performance with different training approaches for p_θ(x), such as stochastic gradient langevin dynamics (sgld; @xcite), sliced score matching (ssm; @xcite) and variational entropy regularized approximate maximum likelihood (vera; @xcite). besides, @xcite propose an implicit generative model based on ebms (igebm) and apply sgld to optimize p_θ(x|y). it performs significantly better ood detection than other generative models. however, as shown by @xcite , the accuracy of igebm dropped dramatically to 49.1% on cifar10, while standard finetuning can achieve 95.8% accuracy. this result indicates that different log-likelihood factorizations lead to great gaps in accuracy, id calibration and ood detection. moreover, training methods such as sgld, ssm and vera need to calculate gradients with respect to the inputs; the non-differentiability of text data limits the application of these methods to both calibration and ood detection for nlu tasks. recently, @xcite propose a joint training of the classifier p_θ(y|x) and the marginal distribution p_θ(x) based on residual ebms @xcite for nlu tasks. different from jem, their model is more flexible by designing various energy functions for the marginal distribution without any restriction on the joint distribution p_θ(x, y).
to estimate the parameters of the marginal distribution p_θ(x), they propose to apply noise contrastive estimation (nce; @xcite) to train the energy model by discriminating real data from fake data generated by a noise distribution. to make the noise distribution as close as possible to the data distribution, they finetune a task-specific gpt-2 @xcite . though this achieves improvements on id calibration, finetuning gpt-2 is often resource-intensive compared to previous methods @xcite . moreover, the quality and quantity of fake samples generated by the noise distribution have a great impact on nce training @xcite . ## triple-hybrid energy-based model motivation. many works @xcite @xcite have shown that ebms can significantly reduce the expected calibration error and improve out-of-distribution detection for image classification. specifically, the jem proposed in @xcite factorizes the joint distribution log p_θ(x, y) into log p_θ(x) + log p_θ(y|x), where log p_θ(y|x) maintains the classification performance and log p_θ(x) is the generative term which contributes to better calibration and out-of-distribution detection. on the contrary, the igebm proposed in @xcite factorizes the joint distribution log p_θ(x, y) into log p_θ(y) + log p_θ(x|y) for implicit generation and surprisingly achieves better ood performance. however, the lack of p_θ(y|x) leads to terrible classification performance: it is shown in @xcite that the classification accuracy dropped dramatically to 49.1% on the cifar10 dataset, while the accuracy of jem is 92.9%. on the other hand, liu and abbeel (2020) proposed a hybrid discriminative-generative energy-based model (hdge) for both classification and generation. the loss function consists of a discriminative conditional log-likelihood log p_θ(y|x) and a generative conditional log-likelihood log p_θ(x|y). compared to igebm, it includes log p_θ(y|x) and thus achieves better classification performance. compared to jem, it includes the conditional generative model rather than the marginal generative model. in other words, jem targets reducing the energy for data from the population p_θ(x), while hdge aims at reducing the energy for compatible pairs (x, y). this motivates us to combine the benefits of both the conditional and the marginal generative model for better calibration and ood detection. ## experiments in this section, we conduct thorough experiments to investigate the empirical performance of our proposed methods. we first introduce the criteria for id calibration and ood detection. ## conclusion in our work, we propose a triple-hybrid ebm which combines a classifier, a conditional generative model and a marginal generative model into a unified framework called them. to train ebms effectively and efficiently, we leverage contrastive learning to approximate the log-likelihood of ebms with negligible computational resources. extensive experiments demonstrate that our model outperforms the state-of-the-art methods in terms of id calibration and ood detection with competitive accuracy. we further apply contrastive learning to jem and igebm without considering the generation ability to obtain jem(cl) and igebm(cl) respectively. compared to jem(cl) and hdge(cl), our model is more robust to the hyper-parameters of contrastive learning, including the temperature and the size of the memory bank, in terms of id calibration and ood detection.
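For reference, the factorizations contrasted above, together with JEM's reinterpretation of a K-way classifier f_θ as an energy model, can be written out explicitly. These are the standard formulations from the cited works, restated here for readability:

```latex
% Two factorizations of the joint log-likelihood:
\log p_\theta(x, y) = \log p_\theta(x) + \log p_\theta(y \mid x) \qquad \text{(JEM)}
\log p_\theta(x, y) = \log p_\theta(y) + \log p_\theta(x \mid y) \qquad \text{(IGEBM)}

% JEM reinterprets classifier logits f_\theta(x) \in \mathbb{R}^K as a joint EBM:
p_\theta(x, y) = \frac{\exp\big(f_\theta(x)[y]\big)}{Z(\theta)}, \qquad
p_\theta(x) = \frac{\sum_y \exp\big(f_\theta(x)[y]\big)}{Z(\theta)},

% so the marginal energy of an input is
E_\theta(x) = -\log \sum_y \exp\big(f_\theta(x)[y]\big),
% while p_\theta(y \mid x) recovers the usual softmax classifier.
```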
| 21,405
|
39
| 2,023
|
UMUTeam and SINAI at SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis using Multilingual Large Language Models and Data Augmentation
|
This work presents the participation of the UMUTeam and the SINAI research groups in the SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis. The goal of this task is to predict the intimacy of a set of tweets in 10 languages: English, Spanish, Italian, Portuguese, French, Chinese, Hindi, Arabic, Dutch and Korean, of which, the last 4 are not in the training data. Our approach to address this task is based on data augmentation and the use of three multilingual Large Language Models (multilingual BERT, XLM and mDeBERTA) by ensemble learning. Our team ranked 30th out of 45 participants. Our best results were achieved with two unseen languages: Korean (16th) and Hindi (19th).
|
https://aclanthology.org/2023.semeval-1.39
|
## introduction in natural language processing (nlp), intimacy can be described as how people communicate their perception and willingness to share personal data and emotions with their audience @xcite . the semeval 2023 task 9, entitled multilingual tweet intimacy analysis (mtia) @xcite , consists of a regression task in which the participants should rate, on a scale from 1 to 5, the intimacy of short documents written in 10 languages: english, spanish, italian, portuguese, french, chinese, hindi, arabic, dutch and korean. this task was co-organized by the university of michigan and snap inc. there are two main challenges concerning this task. on the one hand, the training dataset provided to the participants does not cover all the evaluated languages, but only six of them: english, spanish, italian, portuguese, french, and chinese. however, the evaluation is conducted in those six languages plus hindi, arabic, dutch and korean. on the other hand, participants were only allowed to submit a unique run, which makes the shared task more challenging. our strategy to solve the mtia challenge consists of an ensemble composed of three multilingual large language models (llms): multilingual bert @xcite , xlm @xcite , and mdeberta @xcite . besides, we use data augmentation, incorporating into the training the dataset suggested by the organizers and provided in the work of @xcite , with more than two thousand english questions from reddit and other sources, annotated with intimacy scores in the range [-1, 1]. our participation achieved modest results in the task, reaching the 30th position in the leaderboard, with a pearson's r of 0.53. the best result is achieved by lazybob, with a pearson's r of 0.62. as commented above, as the participants were only allowed to submit a unique run, the analysis of our proposal is mainly based on a custom validation split. additional resources concerning our participation can be found at https://github.com/nlp-umuteam/semeval-2023-mtia . ## background the organisers of the task provided the participants with the novel mint dataset @xcite , whose original training split consists of 9491 tweets rated with an intimacy score. the tweets were compiled between 2018 and 2022. to obtain tweets in different languages, the authors combined language filters in twitter with language detection models such as fasttext @xcite . next, the authors created clusters of tweets for each language and several annotators rated the tweets on a scale from 1 (not intimate at all) to 5 (very intimate). as can be observed in the histogram plotted in figure 1, most of the samples are rated with low scores. regarding the six languages involved during the training, these are almost balanced, with 1596 documents written in portuguese and chinese, 1592 in spanish, 1588 in french, 1587 in english and 1532 in italian. an example of the dataset is the spanish text "necesito paz mental" (in english: "i need peace of mind"), rated with an intimacy score of 2.8. in figure 2 the rounded label distribution is shown. the majority of labels are between 2 and 3, with fewer instances of labels near 0 or 5. the participants of the task were encouraged to use the dataset provided in pei and jurgens (2020), which contains english sentences with an intimacy score between -1 and 1. ## system overview our pipeline for solving the mtia 2023 shared task is depicted in figure 3. in a nutshell, it can be described as follows.
first, we clean and preprocess the mtia dataset and keep a small portion of the training split to create a custom validation split. second, we perform a data augmentation stage, applying google translate to the dataset of @xcite . third, we evaluate three multilingual llms and one model based on linguistic features. fourth, we build an ensemble learning model that averages the predictions of the three llms to send our final predictions to the organizers of the task. concerning the data cleaning stage, we strip hyperlinks, hashtags, mentions and white space characters. regarding the dataset splitter step, we reserve 20% of the tweets from the training split for custom validation purposes. next, we enlarge the provided training dataset by incorporating the dataset from @xcite . this dataset contains sentences written in english. we use google translate to translate these sentences to spanish, italian, portuguese, french, hindi, arabic, dutch and korean. this way, we could incorporate 21573 new sentences into the training. as this dataset is rated on a scale from -1 to 1, we map the ratings to a scale from 1 to 5, maintaining the ratio. besides, it is worth noting that none of these new instances are used for custom validation. ## experimental setup during the evaluation phase, apart from the multilingual llms, we evaluate the usage of linguistic features from umutextstats @xcite . the linguistic features from umutextstats have been evaluated in several nlp tasks, such as author profiling @xcite , satire identification (garcía-díaz and valencia-garcía, 2022), and hate-speech detection @xcite . umutextstats is designed for the spanish language, but it has a subset of language-independent features. these are stylometric features and features related to named entity recognition (ner) and part-of-speech (pos), which umutextstats extracts by relying on stanza @xcite . however, not all the languages involved in the mtia shared task have stanza models, so the linguistic features were not useful for some of the languages involved in this shared task. accordingly, we decided not to include the linguistic features (lf) in the final submission. however, we use the linguistic features to make an analysis of the spanish split of the dataset, and we observe a correlation of intimacy with misspelled words, followed by morphological features related to proper and common nouns and personal pronouns in first, second, and third person. we also identify a correlation with stylometric clues concerning the length of the tweets and with the usage of hyperboles, typical of figurative language. these results are depicted in figure 4. next, the regression neural network architecture is described. for each llm, we conduct a hyperparameter optimization stage consisting of training 10 models, evaluating different parameters including the learning rate, the number of training epochs, the warm-up steps and the weight decay. the results of the best model for each llm are reported in the corresponding table. finally, we conduct another hyperparameter optimization stage using keras. we follow this step to be consistent with the lfs and the llms. the results of this experiment are also reported in tabular form: for each feature set, we evaluate 55 models, changing the neural network architecture (the number of neurons and hidden layers), the dropout, the batch size and the activation function.
we can observe that the best models for the lf and for mbert are complex neural networks with 5 and 8 hidden layers respectively. the lf neural network has a brick shape (all layers have the same number of neurons) but mbert has a diamond shape (the inner layers have many more neurons). all models benefit from a strong dropout mechanism and most of them also benefit from large batch sizes. ## conclusion despite the fact that our results are limited, we are very pleased with our participation. first, because this is the first time we participated in a shared task concerning intimacy. second, because the mtia shared task was challenging, as we could only send one result and there were four unseen languages during testing. our proposal based on ensemble learning over three multilingual llms reached the 30th position in the official leaderboard out of a total of 45 participants. our best results are achieved with two unseen languages: korean (16th) and hindi (19th). after the evaluation of our results, we consider that there are several ways in which we could have improved our results. first, we should have conducted an in-depth analysis of the dataset. however, this was not easy for us because we are not fluent speakers of many of these languages, so we may have missed important aspects related to the context. second, it is possible that the data augmentation process was not beneficial for the performance of our model, as the translations could be less accurate in some languages, or it is possible that cultural and background differences are not well represented in the dataset. however, we consider that we could have translated all sentences into a common language (spanish or english, for instance) and could have included topic-related features in our model. we will explore this path in future multilingual shared tasks. third, our models could be biased towards our custom validation split. in this sense, we will incorporate a nested cross-validation evaluation into our pipeline. fourth, our ablation analysis is limited, as we only consider the data augmentation step. however, we need to conduct more experiments in order to gain understanding of other modules such as the preprocessing module.
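The rating rescale from [-1, 1] to [1, 5] mentioned in the system overview is a simple affine map, and the submitted predictions average the three LLMs' outputs. A minimal sketch follows; the per-model prediction values are invented for illustration.

```python
# Affine rescale of [-1, 1] intimacy scores to the task's [1, 5] range,
# preserving relative distances, plus the simple ensemble mean.
def rescale(score: float) -> float:
    # -1 -> 1 and +1 -> 5, linearly in between
    return (score + 1.0) * 2.0 + 1.0

def ensemble(predictions: list[float]) -> float:
    return sum(predictions) / len(predictions)

print(rescale(-1.0), rescale(0.0), rescale(1.0))   # 1.0 3.0 5.0
# hypothetical per-model intimacy predictions for one tweet
print(ensemble([2.8, 3.1, 2.6]))                   # ~2.83
```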
| 26,317
|
28
| 2,024
|
SeaLLMs - Large Language Models for Southeast Asia
|
Despite the remarkable achievements of large language models (LLMs) in various tasks, there remains a linguistic bias that favors high-resource languages, such as English, often at the expense of low-resource and regional languages. To address this imbalance, we introduce SeaLLMs, an innovative series of language models that specifically focuses on Southeast Asian (SEA) languages. SeaLLMs are built upon popular English-centric models through continued pre-training with an extended vocabulary, specialized instruction and alignment tuning to better capture the intricacies of regional languages. This allows them to respect and reflect local cultural norms, customs, stylistic preferences, and legal considerations. Our comprehensive evaluation demonstrates that SeaLLM models exhibit superior performance across a wide spectrum of linguistic tasks and assistant-style instruction-following capabilities relative to comparable open-source models. Moreover, they outperform ChatGPT-3.5 in non-Latin languages, such as Thai, Khmer, Lao, and Burmese, by large margins while remaining lightweight and cost-effective to operate.
|
https://aclanthology.org/2024.acl-demos.28
|
## introduction the advent of large language models (llms) has radically transformed the field of natural language processing, demonstrating remarkable abilities in text generation, comprehension, and decision-making tasks @xcite @xcite @xcite @xcite . while the proficiencies of these models are extraordinary, the majority of existing llms embody a linguistic hierarchy overwhelmingly dominated by english @xcite @xcite . this dominance undermines the multilingual capability of such models, with particularly prejudicial outcomes for lower-resource and regional languages, where data scarcity and tokenization challenges lead to disproportionately poor model performance. this linguistic disparity not only impedes access to state-of-the-art ai technologies for non-english-speaking populations but also risks cultural homogenization and the loss of linguistic diversity. while hyperpolyglot models exist @xcite @xcite , they may pay a high cost for high-resource language performance while lacking in multilingual instruction-following abilities. recognizing the urgent need to democratize ai and empower linguistically diverse regions, we introduce seallms foot_0 , a suite of specialized language models optimized for southeast asian languages foot_1 . these languages, while rich and diverse, often lack the extensive dataset support available for more widely spoken languages, resulting in a stark performance gap in existing llm applications. as a long-term continuous effort, as of this writing, seallms come in three versions @xcite . seallm-13b-v1, which was pre-trained from llama-2-13b, eclipses the performance of most available open-source llms in a comprehensive array of tasks including world knowledge assessments, language comprehension, and generative capabilities in sea languages. for english and similar high-resource languages, seallms not only preserve but also demonstrate enhanced performance in tasks that were part of the original llama training set. when evaluated on multilingual instruction-following tasks with gpt-4 as a judge @xcite , seallm-13b-v1 outperforms chatgpt-3.5 by large margins in less-represented languages such as khmer, lao or burmese. meanwhile, seallm-7b-v2, which was pre-trained from mistral-7b @xcite , demonstrates better performance in math and commonsense reasoning than comparable baselines, surpassing chatgpt-3.5 in reasoning for common sea languages, while being much smaller in size. later, seallm-7b-v2.5, which was further pre-trained from gemma-7b @xcite , shows significant improvements in sea languages over seallm-7b-v2. figure 2 illustrates the four-stage training process of seallms. in the first stage, detailed in section 2.3, we conduct continuous pre-training from the foundational models @xcite with an extended vocabulary tailored for sea languages. next, we fine-tune the model in a novel hybrid paradigm with a mixture of multilingual pre-training data and english-dominant instruction fine-tuning data (section 3.2). the following stage subsequently fine-tunes the model on a balanced and custom-built multilingual sft dataset. finally, we conduct self-preferencing alignment optimization using the seallm model itself, without relying on human annotators or more powerful llms (openai, 2023b). ## conclusion in conclusion, our research presents a substantial advance in the development of equitable and culturally aware ai with the creation of seallms, a specialized suite of language models attuned to the linguistic and cultural landscapes of southeast asia.
through rigorous pre-training enhancements and culturally tailored fine-tuning processes, seallms have demonstrated exceptional proficiency in language understanding and generation tasks, challenging the performance of dominant players such as chatgpt-3.5, particularly in sea languages. the models' attunement to local norms and legal stipulations, validated by human evaluations, establishes seallms as not only a technical breakthrough but a socially responsive innovation, poised to democratize access to high-quality ai language tools across linguistically diverse regions. this work lays a foundation for further research into language models that respect and uphold the rich tapestry of human languages and cultures, ultimately driving the ai community towards a more inclusive future.
| 28,143
|
7
| 2,022
|
USST's System for AutoSimTrans 2022
|
This paper describes our submitted text-to-text simultaneous translation (ST) system, which won second place in the Chinese→English streaming translation task of AutoSimTrans 2022. Our baseline system is a BPE-based Transformer model trained with the PaddlePaddle framework. In our experiments, we employ data synthesis and ensemble approaches to enhance the base model. In order to bridge the gap between the general domain and the spoken domain, we select in-domain data from the general corpus and mix it with the spoken corpus for mixed fine-tuning. Finally, we adopt a fixed wait-k policy to transfer our full-sentence translation model to a simultaneous translation model. Experiments on the development data show that our system outperforms the baseline system.
|
https://aclanthology.org/2022.autosimtrans-1.7
|
## introduction simultaneous translation @xcite consists of generating a translation before the source speaker finishes speaking. it is widely used in many real-time scenarios such as international conferences, business negotiations and legal proceedings. the challenge of simultaneous machine translation is to find a read-write policy that balances translation quality and latency. translation quality will decline if the machine translation system reads insufficient source information; when reading more source text, latency will increase. recent read-write policies can be divided into two categories: fixed policies such as wait-k @xcite and wait-if* @xcite , and adaptive policies such as mocha @xcite , milk @xcite and mu @xcite . fixed policies are simple to implement, but they neglect contextual information, which might result in quality reduction. dynamic policies are more flexible: they can learn from data to achieve better quality/latency trade-offs, but are accordingly more difficult to train. in our system, we train a transformer @xcite with a deep encoder @xcite as our baseline for obtaining rich source representations; besides, we initialize the model with the method introduced in deepnet @xcite in order to stabilize the training of the deeper model. at the pre-training stage, we first pretrain our model on a large general corpus, then we utilize data synthesis methods such as self-training and back-translation to improve model quality. during the fine-tuning phase, we first apply fine-tuning on a small spoken corpus. for better domain adaptation, we adopt mixed fine-tuning @xcite , which trains on a mixed dataset that includes a subsampled general corpus and an upsampled spoken corpus. thirdly, we propose a method called "in-domain mixed fine-tuning", which further improves the bleu score over mixed fine-tuning. specifically, inspired by in-domain data filtering @xcite , we mix upsampled spoken data with in-domain data selected from the general corpus rather than randomly subsampled data. in the final stage, we employ the wait-k policy to convert the full-sentence translation model into a prefix-to-prefix architecture that predicts target words using only prefixes of the source sentence. after waiting for k-1 source subwords, the system alternately reads one source subword and then predicts one target subword, so the t-th target subword is predicted after reading min(k+t-1, |x|) source subwords, where |x| is the source length. an example of wait-1 is shown in figure 1. the contributions of this paper are as follows: ## data we participate in the chinese-english streaming transcription track, where each sentence is broken into lines whose length is incremented by one word until the sentence is completed; an example is shown in the accompanying table. similar to @xcite , we preprocess the data accordingly. ## experiments our system is implemented with the paddlepaddle foot_6 framework, and our experiments are carried out on ai studio foot_7 with 4 nvidia v100 gpus, each of which has 32 gb of memory. (we also benchmarked our code against fairseq foot_8 ; see appendix a.) ## conclusion in this paper we describe our chinese-to-english simultaneous translation system, which uses a deep transformer to improve translation quality and adopts the wait-k policy @xcite to reduce latency. besides, for better domain adaptation, we combined mixed fine-tuning @xcite with in-domain data filtering @xcite and proposed a new domain adaptation method called "in-domain mixed fine-tuning", which is empirically more effective than fine-tuning and mixed fine-tuning.
in our future work, we plan to validate the effectiveness of our proposed in-domain mixed fine-tuning on more datasets, while investigating novel domain adaptation methods. we also plan to research dynamic read-write policies in order to better balance quality and latency for simultaneous translation tasks.
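For concreteness, here is a minimal sketch of the fixed wait-k read/write schedule described above. The `translate_prefix` function is a toy stand-in for a prefix-to-prefix decoder (it merely copies subwords), so the output stream only illustrates the scheduling, not real translation.

```python
# Toy stand-in for a prefix-to-prefix translation model: "translates" by
# copying the next source subword in uppercase, then emits end-of-sentence.
def translate_prefix(src_prefix: list[str], tgt_prefix: list[str]) -> str:
    if len(tgt_prefix) < len(src_prefix):
        return src_prefix[len(tgt_prefix)].upper()
    return "</s>"

# Fixed wait-k policy: wait k-1 subwords, then alternate READ/WRITE, so the
# t-th target subword is emitted after reading min(k + t - 1, |x|) subwords.
def wait_k_schedule(source: list[str], k: int):
    target: list[str] = []
    read = 0
    while read < min(k - 1, len(source)):   # initial waiting phase
        read += 1
        yield ("READ", source[read - 1])
    while True:
        if read < len(source):
            read += 1                       # READ one source subword
            yield ("READ", source[read - 1])
        token = translate_prefix(source[:read], target)
        if token == "</s>":
            break
        target.append(token)
        yield ("WRITE", token)              # WRITE one target subword

for action in wait_k_schedule("wo men huan ying ni".split(), k=3):
    print(action)   # three READs first, then READ/WRITE alternation
```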
| 13,653
|
37
| 2,024
|
Listen Again and Choose the Right Answer: A New Paradigm for Automatic Speech Recognition with Large Language Models
|
Recent advances in large language models (LLMs) have promoted generative error correction (GER) for automatic speech recognition (ASR), which aims to predict the ground-truth transcription from the decoded N-best hypotheses. Thanks to the strong language generation ability of LLMs and rich information in the N-best list, GER shows great effectiveness in enhancing ASR results. However, it still suffers from two limitations: 1) LLMs are unaware of the source speech during GER, which may lead to results that are grammatically correct but violate the source speech content, 2) N-best hypotheses usually only vary in a few tokens, making it redundant to send all of them for GER, which could confuse LLM about which tokens to focus on and thus lead to increased miscorrection. In this paper, we propose ClozeGER, a new paradigm for ASR generative error correction. First, we introduce a multimodal LLM (i.e., SpeechGPT) to receive source speech as extra input to improve the fidelity of correction output. Then, we reformat GER as a cloze test with logits calibration to remove the input information redundancy and simplify GER with clear instructions. Experiments show that ClozeGER achieves a new breakthrough over vanilla GER on 9 popular ASR datasets.
|
https://aclanthology.org/2024.findings-acl.37
|
## introduction recent advances in large language models (llms) have attracted a surge of research interest thanks to their remarkable language generation and reasoning abilities @xcite @xcite , which achieve a wide range of success on natural language processing (nlp) tasks @xcite @xcite . powered by llms, the latest work @xcite proposes a generative error correction @xcite (ger) benchmark 1 for automatic speech recognition (asr) @xcite , and they release a hyporadise dataset foot_0 that contains over 332k pairs of decoded n-best hypotheses and ground-truth transcription in various asr domains. [figure 1: left: violating the source speech: the llm removes the word "think" in the first two hypotheses, as it rarely appears at the beginning of a sentence followed by a subject according to grammar, but this actually happens in the source speech. right: information redundancy in the n-best hypotheses input: there is only one difference between the n-best candidates, making it redundant to send all of them for ger, which confuses the llm about which tokens to focus on for correction.] it has shown great effectiveness in learning the mapping from hypotheses to transcription by parameter-efficient llm finetuning @xcite , which significantly enhances the asr result and outperforms typical lm rescoring methods @xcite . however, the ger paradigm is also observed to suffer from two limitations. first, llms are unaware of the source speech during the ger process, which could lead to results that do not match the source speech content. for example, as shown in fig. 1 (left), the source speech reads the word "think" at the beginning, followed by "he", which is correctly recognized by the 1-best hypothesis. then, during the ger process, the llm removes the word "think", as this structure of verb plus noun at the beginning of a sentence is not rigorous according to grammar. however, this is not expected as it violates the source speech content. second, we observe that n-best hypotheses usually only vary in a few tokens. for example, as shown in fig. 1 (right), all the tokens in the candidates are the same except "enjoys"/"enjoy"/"joins". in this case, it would be informationally redundant to leverage all of the hypotheses for predicting the ground-truth transcription, which could confuse the llm about which tokens to focus on for correction and thus lead to sub-optimal ger performance. motivated by the above observations, we propose clozeger, a new paradigm for asr generative error correction. first, we introduce a popular multimodal llm, speechgpt @xcite , to receive source speech as an extra input to the ger paradigm. with the powerful cross-modal ability of speechgpt, we can now constrain ger to comply with the source speech while correcting the errors in the decoded hypotheses. then, in order to remove the input information redundancy, we reformat it as a cloze test (i.e., a special multiple-choice question) with logits calibration @xcite , where the identical parts across n-best hypotheses are set as the context and the varying parts are set as blanks (each with several options provided). with such clear instructions for error correction, it would be easier for llms to perform context reasoning and choose the right answer for each blank rather than predicting the entire sentence from redundant n-best inputs 3 . finally, we add a simple post-processing stage to correct the errors in the cloze context (i.e., the identical parts across the n-best list) to further improve the correction result. our contributions are summarized as follows: ## related work large language models.
there has recently been a surge of research interest in transformer-based llms, such as chatgpt (openai, 2022), gpt-4 (openai, 2023) and llama @xcite . benefiting from their huge model size and abundant training data, llms can understand the linguistic structures and semantic meanings behind textual data well, showing remarkable performance on a wide range of nlp tasks @xcite @xcite . more recently, researchers have started to explore the potential of llms on multimodal tasks by incorporating other modalities into llms @xcite @xcite @xcite . among them, speechgpt @xcite is one of the most popular multimodal llms that represent speech and text using a unified tokenizer, which enables us to add source speech into the original n-best hypotheses input of the ger paradigm. ## methodology in this section, we present our proposed clozeger paradigm in detail. we first introduce the preliminary knowledge of ger in §3.1, then we investigate introducing source speech into the ger paradigm with a multimodal llm ( §3.2). finally, we present the new task format of clozeger ( §3.3). ## conclusion in this paper, we propose clozeger, a new paradigm for asr generative error correction. first, we introduce a multimodal llm (i.e., speechgpt) to receive source speech as extra input to improve the fidelity of the correction output. then, we reformat ger as a cloze test with logits calibration to remove the input information redundancy and simplify ger with clear instructions. experimental evidence shows that clozeger achieves a new breakthrough over vanilla ger on 9 popular asr datasets. further analysis verifies the effectiveness of the different modules in our framework.
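To illustrate the cloze reformatting, the sketch below builds a context-plus-blanks prompt from N-best hypotheses under the simplifying assumption that the hypotheses are token-aligned (equal length); real hypotheses require proper alignment, and this is not the paper's actual implementation.

```python
# Sketch: turn token-aligned N-best hypotheses into a cloze test. Positions
# where all hypotheses agree become context; disagreeing positions become
# blanks with the distinct candidates as options. Assumes equal-length,
# pre-aligned hypotheses -- a simplification over the actual method.
def make_cloze(hypotheses: list[str]):
    token_lists = [h.split() for h in hypotheses]
    assert len({len(t) for t in token_lists}) == 1, "needs aligned hypotheses"
    context, options = [], []
    for column in zip(*token_lists):
        uniq = list(dict.fromkeys(column))       # dedupe, preserve order
        if len(uniq) == 1:
            context.append(uniq[0])
        else:
            options.append(uniq)
            context.append(f"[BLANK{len(options)}]")
    return " ".join(context), options

text, opts = make_cloze([
    "he enjoys playing football",
    "he enjoy playing football",
    "he joins playing football",
])
print(text)   # he [BLANK1] playing football
print(opts)   # [['enjoys', 'enjoy', 'joins']]
```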
| 31,370
|
46
| 2,020
|
Understanding Linguistic Accommodation in Code-Switched Human-Machine Dialogues
|
Code-switching is a ubiquitous phenomenon in multilingual communities. Natural language technologies that wish to communicate like humans must therefore adaptively incorporate code-switching techniques when they are deployed in multilingual settings. To this end, we propose a Hindi-English human-machine dialogue system that elicits code-switching conversations in a controlled setting. It uses different code-switching agent strategies to understand how users respond and accommodate to the agent's language choice. Through this system, we collect and release a new dataset CommonDost, comprising 439 human-machine multilingual conversations. We adapt pre-defined metrics to discover linguistic accommodation from users to agents. Finally, we compare these dialogues with Spanish-English dialogues collected in a similar setting, and analyze the impact of linguistic and socio-cultural factors on code-switching patterns across the two language pairs.
|
https://aclanthology.org/2020.conll-1.46
|
## introduction when interlocutors share more than one language, they nearly inevitably engage in code-switching (cs): shifting from one language to another @xcite @xcite . since most people in the world today are multilingual @xcite , cs is a ubiquitous phenomenon in multilingual communities. it goes beyond simple lexical borrowing to blending of languages at syntactic, grammatical and morphological levels @xcite . code-switching has been studied in linguistics and sociolinguistics for decades @xcite @xcite . [figure 1: we present a bilingual dialogue system for human-machine conversations in hindi-english (red: hindi, blue: english), e.g., "kya tumhare paas koi dost hai who like to eat mangoes?" / "nahi. mere kisi friend ko aam pasand nahi" / "mere paas bhi 3 dost hai who like to eat apple"; we discover that humans positively adopt the agent's code-switching style (alt and ins) and the language choice for keywords (highlighted in bold).] advances in dialogue research @xcite @xcite have enabled conversational ai technologies for human-machine interactions, like alexa and siri. although these technologies are pervasive, they still have limited abilities to accommodate to the user, and they do not account for the ubiquity of multilingual communication. due to the lack of code-switching abilities in existing language technologies, there has been limited work in studying linguistic accommodation in written cs dialogues. with the ultimate goal of enabling adaptive code-switching dialogue agents, in this paper we study user accommodation, i.e., entrainment @xcite , in cs human-machine dialogues. our exploratory analysis of user accommodation will facilitate better development of dialogue agents which can eventually accommodate to users in return. to this end, we adopt the collaborative dialogue framework of @xcite , which converses with spanish-english (spanglish) bilinguals. to facilitate a more general analysis, we extend this framework to hindi-english (hinglish), a language pair which is typologically distinct from spanglish and is spoken by millions of people. we begin by providing background on code-switching ( §2) and linguistic accommodation ( §3). we then introduce our generalized bilingual dialogue system ( §4). in §5, we describe our experimental setup for hinglish data collection and discuss the data statistics. we later provide our exploratory analysis of language accommodation and other socio-linguistic factors affecting the cs patterns in the user utterances ( §6). a case study comparing code-switching distributions across hinglish and spanglish is presented in §7. finally, we discuss directions for future work in §8. this paper's contributions include: (1) the development of a bilingual collaborative dialogue system easily generalizable to a new cs language pair, (2) a new dataset, commondost, comprising 439 hindi-english human-machine conversations, (3) adaptation of accommodation metrics and a corresponding analysis of accommodation of language style and choice in cs dialogues, and (4) an exploratory study of linguistic and socio-cultural factors on users' cs patterns across spanglish and hinglish. ## code-switching strategies given that cs is used in very nuanced ways, researchers have been studying how people code-switch, examining the switch-points of languages syntactically @xcite , prosodically @xcite , lexically (kootstra, 2012), pragmatically @xcite , and so forth. many works have attempted to model code-switching text and speech from a statistical perspective @xcite .
recent works and benchmarks such as linguistic code-switching evaluation (lince) @xcite and gluecos @xcite have provided a unified platform to evaluate cs data for various nlp tasks across various language pairs. our work is in line with these recent efforts to provide nlp capabilities to users with diverse linguistic backgrounds. we extend the human-machine cs dialogue system by @xcite to the new language pair of hindi-english. in order to better understand the style and usage of languages in a code-switched utterance, we cluster and characterize these utterances by a set of predefined cs strategies. previous works have mainly identified two commonly used code-switching (cs) strategies, insertional and alternational, and these strategy distinctions are important in implementations of cs technology @xcite . the insertional cs strategy involves one language serving as the matrix language (matl) with the other serving as the embedded language (embl). words/phrases from the embl are inserted in the sentence while maintaining the grammar and structure of the matl @xcite . on the other hand, the alternational cs strategy involves alternating between separate independent clauses of the languages, switching from one matl to another. in our work, we focus on the hindi-english language pair and experiment with four cs strategies. cs is also observed more often in informal and casual settings than formal ones @xcite . we test this hypothesis by inducing informality in the agent's strategies. although recent works @xcite have introduced neural methods to induce informality, we deploy a simple way to moderate formality by adding discourse markers (e.g., "so", "well") at the beginning and ending of sentences. these markers are independent of context and syntax @xcite , and are often associated with informality @xcite . thus, we define four more agent strategies by infusing informality (+ informality) in each of the previously described four cs strategies. ## measuring accommodation in dialogue communication accommodation theory posits that people adjust their behaviors or speech styles to their conversational partners' @xcite . linguistic accommodation has proven to reduce interpersonal distance @xcite and is correlated with dialogue success and engagement @xcite . although well-studied in monolingual dialogues @xcite , it is relatively new in the cs setting. @xcite found the rate of code-switching to be accommodated in human-human spanish-english dialogues. the choice of language when code-switching can also be adapted in dialogues @xcite . @xcite further discover that the part-of-speech of a cs utterance may impact the following language choice. our work adds to this field by studying accommodation of language choice for lexical classes. in terms of quantifying accommodation, we adapt a metric from @xcite to measure accommodation (we refer to it as global accommodation). global accommodation extends the score proposed in @xcite by aggregating a speaker's word usage across an entire dialogue and biasing it relative to other non-partners in the corpus. for two partners a and b, we denote the aggregated adoption score by e_{a,b}. denoting the set of non-partners for the speaker a by n_a, we define the ratio e_{a,b} / e_{a,c} for all non-partners c in n_a. the global score for the speaker a is the average of this ratio over all the non-partners. the final global score for the dataset is the average of the scores over all the speakers in the dataset.
in the context of human-machine conversations, we choose the set of non-partners for an agent to be the set of humans that did not interact with this agent. since this metric is defined primarily for lexical accommodation, we redefine the different styles as lexical classes to adapt it for measuring stylistic accommodation. @xcite presented another interesting metric which measures accommodation locally across turns within a single dialogue; for two partners a and b, it can be formulated analogously at the turn level. ## bilingual dialogue system our bilingual human-machine dialogue system mainly serves two important purposes: (1) collection of cs data and (2) experimentation with new agent strategies. previous work @xcite developed a rule-based cs dialogue system restricted to a fixed set of prompts. @xcite proposed a more flexible bilingual system for english-spanish as an extension of a monolingual goal-oriented collaborative dialogue framework @xcite , originally designed for the mutualfriends task. this task provides the two conversational partners a and b individually with a knowledge base (kb) of friends, out of which there is exactly one friend common to both kbs. each friend in the kb has several attributes such as hobby, location of work, etc. the goal of the task is to collaboratively find this mutual friend through text conversations between the two partners, which can be human or machine. the modifications made by @xcite for extending this monolingual system to support bilingual spanish-english dialogues were mainly in three components: (1) bilingual readability: making the instructions and kb available to the users in spanish as well as english, (2) bilingual response generation: procuring parallel spanish sentences using a machine translation (mt) system and applying rule-based transformations for generating code-switched spanglish, (3) bilingual response understanding: translating code-switched spanish-english to monolingual english (using an mt system) and passing it to the pre-existing response understanding system for english. ahn et al. (2020)'s modified spanish-english dialogue system cannot be directly applied across other language pairs for three key reasons: (1) the dialogue system relies on a robust cs mt system 2 , which is more readily available for resource-rich languages like spanish and english. such systems might not be accessible for languages like tagalog and swahili. (2) the linguistic rule-based adaptations for generation are simple in the case of spanish-english as the two are typologically closer. on the contrary, linguistically diverse pairs like telugu-english might need further adaptations due to differences in word order and morphology. (3) spanish and english are written using the same script. many other language pairs within which cs is pervasive, like hindi-english, are written in different scripts, and are typically romanized in the cs setting. lack of normalization and robust transliteration models pose challenges to multiple system components for such pairs. in our work, we build a more generalized dialogue system to tackle the challenges stated above. one highlight of this modified system is its simplicity, which helps in adapting to new language pairs easily. we briefly discuss these challenges and our enhancements to various components for our hindi-english dialogue system below. language bias in kb: due to social and cultural priors, certain domains and topics in the kb might not be equally represented in both languages.
in order to avoid biasing the language usage in the dialogue and promote code-switching, it is necessary to carefully choose equilingual domains. in the case of hinglish, we replace the domain of college majors, which is highly anglicized with respect to hindi, with favourite fruit, which is more equally represented in both languages. handling gender-markings third-person pronouns and verb forms in hindi are usually gender-marked (e.g. karta/karti [he/she does], uska/uski [his/her]). since the spanglish kb does not provide any information about the gender of friends, we consequently notice the dialogues using this system to be gender-skewed. in the common-amigos spanglish data @xcite , the ratio of masculine to feminine word usage was 3.9; whereas for hinglish (measured on a set of 65 pilot dialogues), this gender-ratio is 27.7. we mitigate this by simply adding a new "gender" attribute to the kb and correspondingly notice a drastic drop of the gender-ratio to 3.4 for hinglish. dialogue generation the spanglish dialogue system utilizes an mt system (the google translate api in the original implementation) to generate parallel spanish-english sentences and leverages rule-based transformations (specific to spanish) to generate code-switched utterances. furthermore, the rule-based transformations need appropriate modifications to accommodate the new language pair. for hinglish, we synthesize additional transformations to handle differences in word order and verb conjugations. natural language understanding (nlu) the spanglish dialogue system relies on a robust mt system for converting cs user utterances to english and then exploits an english nlu component for entity extraction. procuring such mt systems for other language pairs is not feasible. this issue is amplified for languages written in a non-native script (hinglish) due to the lack of normalization in user sentences. we overcome this challenge by building a simple dictionary-based nlu component which can directly understand and extract entities from cs hinglish text. although it cannot handle complex inputs, this simple model still outperforms the translation-based nlu pipeline. ## data we use the modified bilingual dialogue framework ( §4) to collect romanized hindi-english cs data for human-machine dialogues. here, we first describe this data collection process and later discuss statistics for the collected data. ## analysis of hinglish conversations we study the impact of each of the agent strategies (4 cs strategies and their informal counterparts) on the user dialogues along various dialogue- and language-oriented dimensions. ## comparison of spanglish and hinglish to gain better insights into how linguistic and sociocultural factors influence code-switching patterns, we compare the distributions of the users' usage of various cs strategies in hinglish and spanglish in figure 3 . we observe that en ins --→hi and en ins --→sp are the most dominant cs strategies in hinglish and spanglish respectively. on the other hand, we notice a large difference in the usage of alternational cs strategies in the two language pairs. for spanglish, it accounts for roughly 40%, while it is merely 10% for hinglish. as attributed by the equivalence constraint, cs points tend to occur only if a syntactic rule is not violated in either of the two languages being mixed @xcite . given this requirement, a pair of languages that have differing word order could have more constraints on where switches can occur.
we hypothesize that alternational cs may not work within a verb clause in hindi as it is a verb-final (sov) language while english is verb-medial (svo). spanish is verb-medial like english, and their word-order similarity may facilitate the use of alternational cs. beyond structural differences, sociolinguistic factors may affect the cs strategies of speakers. backus (1998) describes a gradient of strategy usage across generations of immigrants: earlier generations of immigrants progress from simple to complex insertions, and later generations alternate the two languages, eventually using reverse insertion. while the spanglish dataset includes later generations of immigrants to the us, 90% of hinglish speakers are 1st generation, which would explain hinglish speakers' affinity towards insertion into the hindi matrix language. additionally, the status of english in the us (for spanglish) and english in india (for hinglish) is different. as found in §6.4, the status of english can vary within regions of india itself, and this can lead to varying uses of cs strategy. attitudes towards language use have been shown to affect code choice in bilingual speakers @xcite . it is likely that attitudes towards cs are not the same in the spanglish and hinglish populations, which can provide further variability in the speakers' language choice. ## conclusion and future work in our work, we proposed a generalized bilingual dialogue system and procured human-machine dialogue data (commondost) for the language pair of hindi-english using this system. adaptation of this dialogue system to newer cs languages could promote the collection of more bilingual dialogue data. analysis of the commondost conversations revealed how users positively adopt and accommodate the agent's style of using language in a cs utterance. we also studied how informality and cultural factors independently affect the users' cs patterns. this suggests that our findings extend across the two cs pairs of hinglish and spanglish @xcite . similar analyses can be done for new language pairs (such as arabic-english) and datasets from different domains. another area of potential research would be to compare our findings on cs patterns and accommodation with human-human cs conversations. finally, we discussed how linguistic and sociopolitical factors affect the distribution of users' cs patterns across the language pairs of hinglish and spanglish. despite their dissimilarities, the similarities across these language pairs are encouraging, as they open avenues to learn how code-switching functions cross-linguistically. we pave the path for future research on comparisons of multiple cs language pairs.
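a minimal sketch of the kind of dictionary-based nlu component described in §4 for romanized hinglish; the lexicon entries, the light normalization, and the matching policy are illustrative assumptions rather than the system's exact implementation:

```python
def extract_entities(utterance, lexicon):
    """scan a (possibly code-switched) utterance for known surface
    forms and map them to kb (attribute, value) pairs."""
    tokens = utterance.lower().replace("?", " ").replace(",", " ").split()
    matches = []
    for n in (2, 1):  # prefer bigram matches over unigrams
        for i in range(len(tokens) - n + 1):
            span = " ".join(tokens[i:i + n])
            if span in lexicon:
                matches.append(lexicon[span])
    return matches

# toy usage with a hypothetical lexicon mixing english words and
# romanized hindi variants ("seb" = apple):
# lexicon = {"doctor": ("job", "doctor"),
#            "seb": ("fruit", "apple"), "apple": ("fruit", "apple")}
# extract_entities("mera friend doctor hai aur seb pasand karta hai", lexicon)
```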
| 3,580
|
18
| 2023
|
Extracting Sign Language Articulation from Videos with MediaPipe
|
This paper concerns evaluating methods for extracting phonological information of Swedish Sign Language signs from video data with MediaPipe’s pose estimation. The methods involve estimating i) the articulation phase, ii) hand dominance (left vs. right), iii) the number of hands articulating (one- vs. two-handed signs) and iv) the sign’s place of articulation. The results show that MediaPipe’s tracking of the hands’ location and movement in videos can be used to estimate the articulation phase of signs. Whereas the inclusion of transport movements improves the accuracy for the estimation of hand dominance and number of hands, removing transport movements is crucial for estimating a sign’s place of articulation.
|
https://aclanthology.org/2023.nodalida-1.18
|
## introduction sign languages -or, signed languages -are languages produced with gestures articulated in space and perceived visually or tactilely. over 200 sign languages have been documented around the globe @xcite but they are minoritized and under-researched. one challenge for quantitative research on sign languages is that they generally lack a conventionalized representation in a machine-readable form, such as phonetic transcription or orthography (see e.g., @xcite @xcite ). following technological advances in computer vision, methods have emerged that allow a degree of form-based analysis of body movements, such as gesturing and signing, through human body pose estimation on either real-time or pre-recorded video data @xcite . whereas most body pose tracking utilized in sign/gesture research used to involve either wearable devices (e.g., motion capture sensors) @xcite or 3d cameras (e.g., kinect) @xcite , thus requiring designated hardware, there are now pre-trained models that do human body pose estimation either in real time through a regular video camera or on pre-recorded video data, providing a cost-efficient alternative that has proven to be reliable in estimating human gesturing @xcite . a popular tool for such analysis is openpose @xcite , which has been successfully applied in research on both sign language and gesture @xcite @xcite @xcite . a tool that has become available more recently is google's mediapipe @xcite , which similarly performs human body pose estimation of video data and outputs coordinates of landmarks (joints and anchor points such as eyes, nose and eyebrows). ## conclusions in this paper, i have shown initial explorations of methods to extract basic information about articulation and sign form from sign language video data using mediapipe. the first step of estimating an approximate articulation phase of the sign proved to be possible for most sign videos in the data set, which in turn made it possible to accurately estimate the place of articulation across signs. for the purpose of estimating hand positions corresponding to a phonological place of articulation, estimating the articulation phase is crucial, since the signal is otherwise disrupted by noise from rest positions and transport movements. being able to automatically segment the articulation phase of signs would have other obvious applications when extracting phonological information about the actual sign (articulation) rather than contextual noise (transport and rest). however, when estimating hand dominance and the number of hands articulating, the full method, which included data from all frames in the sign video, consistently outperformed the short method, for which the data only included frames within the estimated articulation phase. it seems as though the crude method of comparing the relative distance traveled between the two hands benefits from more data than the short articulation phase provides, and that the transport movements to and from the articulation phase are in fact quite useful for magnifying the differences in distance traveled between the two hands. this method works quite well with the dictionary data used here, with each video containing a single (non-compound) sign. if applied to complex/compound signs or stretches of multiple signs in succession, as in conversational data, transport movements may not be as distinct, and more elaborate methods to estimate articulation phases would be necessary.
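a minimal sketch of the distance-traveled comparison described above, using mediapipe's holistic model to track the two wrists across a sign video; the unnormalized 2d distance and the file-based loop are simplifying assumptions, not the paper's exact procedure:

```python
import cv2
import mediapipe as mp
import numpy as np

def hand_distances(video_path):
    """total 2d distance traveled by each wrist (hand landmark 0)
    across a sign video; the hand covering more distance serves as a
    crude hand-dominance estimate."""
    holistic = mp.solutions.holistic.Holistic(static_image_mode=False)
    prev = {"left": None, "right": None}
    dist = {"left": 0.0, "right": 0.0}
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        res = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        for side, lms in (("left", res.left_hand_landmarks),
                          ("right", res.right_hand_landmarks)):
            if lms is None:  # hand not detected in this frame
                continue
            wrist = np.array([lms.landmark[0].x, lms.landmark[0].y])
            if prev[side] is not None:
                dist[side] += float(np.linalg.norm(wrist - prev[side]))
            prev[side] = wrist
    cap.release()
    holistic.close()
    return dist  # e.g. dominant = max(dist, key=dist.get)
```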
the results of this preliminary and exploratory study have demonstrated some possibilities for extracting sign language articulation from videos with mediapipe, which can serve as a fast and cost-efficient way to analyze pre-recorded but unannotated sign language data in substantially larger quantities than would be feasible with manual annotation.
| 26,007
|
7
| 2022
|
Part-of-Speech and Morphological Tagging of Algerian Judeo-Arabic
|
Most linguistic studies of Judeo-Arabic, the ensemble of dialects spoken and written by Jews in Arab lands, are qualitative in nature and rely on laborious manual annotation work, and are therefore limited in scale. In this work, we develop automatic methods for morpho-syntactic tagging of Algerian Judeo-Arabic texts published by Algerian Jews in the 19th–20th centuries, based on a linguistically tagged corpus. First, we describe our semi-automatic approach for preprocessing these texts. Then, we experiment with both an off-the-shelf morphological tagger and several specially designed neural network taggers. Finally, we perform a real-world evaluation of new texts that were never tagged before in comparison with human expert annotators. Our experimental results demonstrate that these methods can dramatically speed up and improve the linguistic research pipeline, enabling linguists to study these dialects on a much greater scale.
|
https://aclanthology.org/2022.nejlt-1.7
|
## introduction application of natural language processing (nlp) to real-world problems has been the field's goal from its early days. as algorithms advance, the contribution of nlp to real problems has become more evident and more substantial. the present study originates from a real-world challenge faced by linguists of semitic languages, in this case researchers of the judeo-arabic dialects of algeria (aja). their challenge, simply put, is how to scale up linguistic analyses of such dialects. semitic languages in general, and arabic in particular, are characterized by a very rich morphology that uses both templatic and concatenative morphemes, combined with the use of a vowelless script ("abjad"). this makes morphological analysis of arabic very time-consuming even for expert linguists. because speakers of the aja dialects are becoming scarce, the attention of linguists in this field has shifted from fieldwork interviews with native speakers to library-based analysis of texts written in those dialects. fortunately, vast collections of aja texts were preserved in printed books, journals and handwritten manuscripts. analyzing this linguistic treasure-trove, however, is proving to be challenging due to its size. the time-consuming manual annotation does not scale, and requires expertise that is hard to find. we aim to scale up the linguistic analysis of this arabic dialect using nlp tools. in particular, our goal is to develop an nlp tool that will assist aja linguists in their real-world task, in a way that they will find useful. basing our work on the existing linguistically tagged algerian judeo-arabic (taja) corpus (tirosh-becker and becker, 2022), we set out to develop automatic methods for morpho-syntactic tagging of such texts. several specially designed neural network taggers and an off-the-shelf morphological tagger were experimented with, and assessed for their accuracy and likely usefulness. we also considered a hybrid human-in-the-loop approach. finally, we carried out a real-world evaluation of our best performing part-of-speech (pos) taggers, applying them to untagged texts and assessing their quality via a user study with expert aja linguists. our experimental results demonstrate that these methods can dramatically speed up and improve the linguistic research pipeline, enabling linguists to study this language on a much greater scale. judeo-arabic (ja) lies at the intersection of semitic languages and jewish languages. as a semitic language, and more specifically an arabic language variety, its words are generally composed of 3-letter roots, with added vowels and consonants according to pattern paradigms, as well as affixes and clitics @xcite . arabic is the most widely spoken semitic language, with 300 million native speakers @xcite . in fact, the term 'arabic' refers both to modern standard arabic (msa) and to the arabic dialects spoken throughout the arab world. the two varieties of arabic coexist in a state of diglossia @xcite or continuglossia @xcite , meaning the language varieties exist side by side, with writers or speakers shifting between varieties according to circumstance. msa is written using the arabic script, which is a right-to-left alphabet. arabic dialects are usually written in arabic script as well, but there is no standardized spelling for dialectal arabic @xcite . arabic uses both templatic and concatenative morphemes. there are two types of templatic morphemes: roots and templates.
roots are usually three consonantal radicals that signify some abstract meaning. roots are inserted into abstract patterns called templates. there are two kinds of concatenative morphemes that attach to the templatic morphemes. clitics are morphemes that have the syntactic characteristics of words, but are phonologically bound to another word @xcite , for example "wa", meaning "and". affixes are phonologically and syntactically part of the word, and often represent inflectional features, such as person, gender, number, and more. dialectal arabic (da) is a primarily spoken family of language varieties (and in modern days, widely used in written form on social media as well) that exist alongside the written msa. da diverges from msa on several levels. there are differences in phonology, morphology, lexicon, and orthography @xcite . the regional dialects can be broken down into main groups, with one possible breakdown being egyptian, levantine, gulf, iraqi, and maghrebi. even within dialect groups there can be quite a lot of variance between dialects, although in many cases there is a certain level of intelligibility between speakers of different dialects, with more significant difficulty across dialect groups. maghrebi dialects are influenced by contact with french and berber languages, and the western-most varieties could be unintelligible to speakers from other regions in the middle east, especially in spoken form @xcite . while ja can be looked at as an ensemble of arabic dialects, it is first and foremost a subgroup of jewish languages. jewish languages are a family of language varieties that developed in jewish communities throughout the diaspora. the original language used by jews in the land of israel was hebrew, followed closely by aramaic. as jews spread across the world, they adopted local languages and developed distinctive varieties of these languages. nonetheless hebrew remained their liturgical language, even as it almost died out as a spoken language until its revival in the late 19th and early 20th centuries. perhaps the most well-known of these jewish languages is yiddish, the judeo-german language developed by ashkenazi jews living in central and eastern europe before the holocaust. jewish languages vary in their distance and divergence from their non-jewish sister languages, some being influenced by multiple languages due to language contact. nonetheless, among the features that tie these languages together are the presence of hebrew and aramaic lexical components @xcite , the use of the hebrew alphabet for writing, and more. algerian ja (aja) is a member of the north african judeo-arabic dialect group, i.e., dialects spoken and written by jews of the maghreb. aja is in contact with moroccan and tunisian arabic dialects (both jewish and muslim), with french and to a lesser extent other trade languages such as spanish and italian, and with hebrew and aramaic, the historical jewish cultural languages. in general aja shares many characteristics with other jewish languages, including the use of hebrew script, the presence of hebrew and aramaic components, and a mixture of conservative trends, vernacular features, and heterogeneous elements @xcite . to date, aja has been sparsely studied by linguists. the aja dialect of the city of algiers was studied over a century ago by @xcite , with most of the recent work on aja published by tirosh-becker, focusing on constantine, the third largest city in algeria @xcite @xcite .
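to make the templatic morphology described above concrete, a toy sketch; the patterns and glosses are standard msa textbook examples (not aja-specific), and the function is purely illustrative:

```python
def apply_template(root, template):
    """interdigitate a triliteral root into a vocalic template: each
    'C' slot takes the next root consonant, other characters are
    copied.  assumes the template has exactly as many 'C' slots as the
    root has radicals."""
    radicals = iter(root)
    return "".join(next(radicals) if ch == "C" else ch for ch in template)

# apply_template("ktb", "CaCaC")   -> "katab"   (verbal stem, 'write')
# apply_template("ktb", "CiCaaC")  -> "kitaab"  (nominal pattern, 'book')
# apply_template("ktb", "maCCaC")  -> "maktab"  ('office/desk')
```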
aja research employs fieldwork interviews of informants and the study of selected written texts (e.g., @xcite @xcite ). regretfully, the number of aja speakers has decreased following algeria's independence @xcite and the subsequent dispersion of its jewish communities, making fieldwork today almost impossible. hence, this research is now shifting towards an analysis of the vast textual sources left by many of these jewish communities, in both manuscript and print form. most of the linguistic analyses done thus far on aja texts have been based on single or few texts, as each study requires extended effort of poring over texts, dictionaries, and grammars. given the size of these corpora, this is a perfect match for machine learning and nlp approaches. ## data this project has used the tagged algerian judeo-arabic (taja) corpus developed by tirosh-becker and becker (2022); the corpus is available through the authors. this aja corpus is a collection of modern aja texts published in algeria in the late 19th and the first half of the 20th century. the texts represent a variety of prose genres written by algerian jews. these texts were manually typed into computer-readable format and subsequently proofread, as hebrew ocr (optical character recognition) failed on these aja texts. this was due not only to the less-than-favorable conditions under which the books had been stored, leaving the pages grayed and worn, but also because the fonts used in these books are not identical to standard hebrew, as they have ja-specific adaptations, such as diacritics. each text was manually tokenized and annotated by research assistants (ras, usually ma or phd candidates) in a spreadsheet, according to strict guidelines, and most were verified by a senior expert. the digitization and annotation project spanned several years, with some dozen ras contributing to the annotation efforts. approximately 80% of the time spent on the creation of taja was dedicated to the annotation process, as the digitization is a more straightforward (though non-trivial) task. ## preprocessing in this section, we describe several challenges we faced in the preprocessing stage and the steps we took to address them. ## real-world evaluation the end goal of this project is to provide aja language experts with an automatic tagger to help them annotate large volumes of text, a task which is otherwise laborious and time-consuming when tackled manually. to evaluate such real-world usefulness of the taggers we set out to compare the performance of our two best pos models (the hierarchical char-cnn based model and marmot) with that of manual annotation by two expert aja linguists. ## discussion and conclusion the pressing real-world challenge facing researchers of algerian judeo-arabic (aja) dialects is how to scale up their linguistic analyses from individual texts to large textual collections. the rich morphology of arabic (as of other semitic languages) and the scarcity of expert linguists make this complex and time-consuming task impractical unless aided by automation. hence, developing automatic taggers that would support real-world linguistic analysis at scale and prove useful for aja linguists is the challenge we aim to tackle. reflecting the linguists' challenges, we focus on the performance of the morphological tagger in tests that are predictive of the real-world setting.
for this reason, we did not limit ourselves to purely automated approaches, but also explored a hybrid human-machine approach, wherein the human expert contributes to the automatic approach. the rich morphology of arabic and its use of morpho-syntactic affixes led us to focus on character-based models (rather than word-based models), as these can identify key morphemes that are essential for annotating oov words. starting from a word-based lstm neural network architecture, we integrated character-level information via either an lstm or a cnn. subsequently we explored a two-tier hierarchical approach to morphological tagging, with pos tags at its base and the morphology tags building on that. this hierarchy mirrors the underlying character of arabic annotation, where each pos tag has a set of legal morphological tags. the two-tier approach also enables exploring a human-in-the-loop step in between the two tiers. our best performing strategy, denoted ajatag for simplicity, is now available for use by aja linguists (https://github.com/technion-cs-nlp/nlp4aja). to evaluate the usefulness of the ajatag strategy we compared it to the off-the-shelf pos and morphological tagger, marmot, which is based on crf. all models were trained on the annotated taja corpus. for the base task of pos tagging, we found that among the evaluated neural network architectures, representing a word using a cnn run on its characters performed better than an lstm or ignoring the characters altogether. training on the taja corpus, the pos accuracy of the char-cnn model was 87.4±0.58%. this accuracy is only slightly lower than the 89.17% accuracy obtained by marmot for this task. the 1.5% difference suggests essentially similar performance for the two models in a real-world setting. morphology tagging, as indicated above, is the most challenging and time-consuming task, taking up 80% of the expert linguist annotation time. here, too, char-cnn performed better than the other neural network models we explored, especially in a two-tier hierarchical approach. the accuracy of this model, denoted herein as 'hierarchical char-cnn (predicted pos)', ranges from 81% to 91% for the different morphology analysis fields (analysis1, analysis2, additional tags). to further improve the performance, we allowed for human input between the two tiers in the form of manual correction of pos tags. using 'true pos' assignments, instead of the predicted assignments, further improved the performance of the 'hierarchical char-cnn (true pos)' morphology tagger. we denote this hybrid strategy ajatag and have compared its performance on aja to marmot. we use marmot as is, without modifications or adaptations to a hybrid setting, because for the linguists it is an off-the-shelf tool that is to be used as is. evaluation of the morphological tagging by ajatag demonstrated favorable performance across multiple evaluation metrics. it should be noted that the greatest gain in accuracy is in analysis1, which of the morphological analysis fields is the richest and most difficult to assign. both approaches perform well identifying the enclitic field, with an accuracy greater than 96%. however, oov tagging is where our hybrid ajatag strategy delivered its most important fruits.
the accuracy of ajatag in the challenging task of morphologically tagging oov words is 74.91% and 78.42% for the analysis1 and analysis2 fields, respectively, which is significantly better than marmot's oov tagging for these two fields (55.82% and 59.95%, respectively). ajatag also performs much better in the additional tags field for oov words (85.4% compared to marmot's 75.0%). the justification for the hybrid approach explored herein is in its real-world usefulness, outside of the nlp lab. the 56%-60% accuracy of the off-the-shelf solution for the two most important morphological fields, analysis1 and analysis2, when applied to oov words is not sufficient for real linguistic work. in contrast, the hybrid ajatag strategy achieved an accuracy level of 74.91%-78.42% on morphological tagging of oov words, which is expected to be useful for real-world applications, improving upon marmot by 18%-19% for this task on both analysis fields. it is reassuring that even without the added human input, our fully automated hierarchical char-cnn performed better than marmot on pos and analysis1 tagging of oov words. the value of the ajatag strategy was further confirmed by other performance indicators, including its overall accuracy and its accuracy on words with legal tag combinations, as defined above. to assess the feasibility of the human interface element in ajatag, we performed a real-world evaluation of this process. the first-tier pos output was given to two aja linguists to correct, before moving on to the second-tier morphology tagging. pos tags manually corrected by a senior expert were perceived as the 'true' pos assignment, to which the performance of the automatic taggers as well as the corrections by a junior expert were compared. it is reassuring that both automated taggers, our char-cnn model and marmot, performed well at an almost identical accuracy (~89%) relative to the 'true' pos, an accuracy quite similar to the 91% accuracy by the junior expert, who is a phd candidate with several years of experience in aja linguistics. to conclude, while not perfect, the hybrid ajatag approach provides aja linguists with a working solution that already impacts their real-world workflow in a way that off-the-shelf tools cannot provide. in the future we plan to continue improving these tools by addressing limitations such as tagging words with illegal tag combinations. nonetheless, we believe that even in its current form ajatag could prove useful to linguists as they take on the task of analyzing large untagged aja corpora. we hope that in the future we will be able to expand the utility of these tools to other judeo-arabic dialects.
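as a rough illustration of the character-cnn word representation underlying ajatag, a minimal pytorch sketch; all layer sizes and names are assumptions, and the paper's exact architecture may differ:

```python
import torch
import torch.nn as nn

class CharCNNWordEncoder(nn.Module):
    """embed characters, convolve, max-pool over time -- the standard
    char-cnn word representation described above."""
    def __init__(self, n_chars, char_dim=32, n_filters=64, width=3):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size=width,
                              padding=width // 2)

    def forward(self, char_ids):                  # (batch, max_word_len)
        x = self.emb(char_ids).transpose(1, 2)    # (batch, char_dim, len)
        return torch.relu(self.conv(x)).max(dim=2).values  # (batch, n_filters)

# in the two-tier setup, a pos tagger runs on these word vectors first;
# the morphology tagger then consumes the word vectors concatenated with
# embeddings of the (predicted or manually corrected) pos tags.
```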
| 18,123
|
141
| 2024
|
Time is Encoded in the Weights of Finetuned Language Models
|
We present time vectors, a simple tool to customize language models to new time periods. Time vectors are created by finetuning a language model on data from a single time (e.g., a year or month), and then subtracting the weights of the original pretrained model. This vector specifies a direction in weight space that, as our experiments show, improves performance on text from that time period. Time vectors specialized to adjacent time periods appear to be positioned closer together in a manifold. Using this structure, we interpolate between time vectors to induce new models that perform better on intervening and future time periods, without any additional training. We demonstrate the consistency of our findings across different tasks, domains, model sizes, and time scales. Our results suggest that time is encoded in the weight space of finetuned models.
|
https://aclanthology.org/2024.acl-long.141
|
## introduction temporal variation is a fundamental characteristic of language. as we show in §3, it manifests in language model development as temporal misalignment, where deviations in train and test data lead to large performance degradation across different time periods @xcite . this necessitates adaptation techniques for customizing models to specific time periods. designing such techniques is difficult, however, due to the multitude of time scales and the possibility that data from a target time period might be unavailable. recent work has shown that the behavior of neural networks can be edited through closed-form interpolation between parameters of finetuned models @xcite @xcite . in this work, we demonstrate that weight-space interpolation can also be used to cheaply edit language model behavior over time. to this end, we introduce time vectors ( §4), an extension of task vectors to the time domain, and use this structure of time vectors to induce new models that perform better on intervening and future time periods, without any additional training.

figure 1: we present time vectors, a simple tool to customize language models to new time periods. time vectors (τ i ) specify a direction in weight space that improves performance on text from a time period i. they are computed by subtracting the pretrained weights (θ pre ; left panel) from those finetuned to a target time period (θ i ). we can customize model behavior to new time periods (e.g., intervening months or years) by interpolating between time vectors and adding the result to the pretrained model (middle panel). we can also generalize to a future time period j with analogy arithmetic (right panel); this involves combining a task-specific time vector with analogous time vectors derived from finetuned language models (τ lm j ).

figure 2: we evaluate language model perplexity (wmt), rouge-l (news summarization), and macro f1 (political affiliation classification). each cell indicates the performance of t5-3b finetuned and evaluated on a single year from that task. we report the percentage difference from the average performance for each year, and find linear degradation as finetuning and evaluation years become more misaligned regardless of task. we display similar trends for t5-small and medium, as well as for other domains and tasks, in §a.1. we measure the linearity of these degradations in the appendix.

our results show that temporal variation is to some extent encoded in the weight space of finetuned models, and that weight interpolation can help customize language models to new time periods. we publicly release our code, data, and over 500 models finetuned on specific time periods. ## data and finetuning in this section, we describe our datasets and finetuning techniques, which serve as the basis for all subsequent experiments. we finetune language models on multiple time-stratified datasets, which we use to analyze temporal misalignment and build time vectors. then, we explore different ways of interpolating between time vectors to generalize to new times. see §4.3-4.5 for more details on interpolation strategies. ## temporal misalignment at multiple time scales we begin with an analysis of temporal misalignment using the new set of models and tasks that we consider in this work ( §2). these findings set the stage for our creation of time vectors in §4.

figure 3: monthly temporal degradation has seasonal patterns. each cell indicates the monthly performance of t5-small finetuned and evaluated on a single month of the wmt dataset.
we report the percentage difference in test perplexity from the average on the evaluation month over all finetuned t5-small models (darker is better). the diagonal indicates that each model does best on its finetuning month. models also do relatively better on the same month in other years, visible as the stripes radiating out from the diagonal every 12 months. ## temporal adaptation with time vectors the collection of year- and month-finetuned models from §3 presents a new source of data to study temporal misalignment: model weights. in this section, we analyze these weights through the lens of time vectors, formed by taking the difference between a model finetuned on a specific time and the pretrained model. first, we show that the weights of two time vectors become less similar as the times they were finetuned on become more misaligned ( §4.2). then, we attempt to use the reverse relationship to update models to unseen times: reducing misalignment on intervening ( §4.3), future ( §4.4), and multiple time periods ( §4.5) by interpolating time vectors. ## conclusion we connect studies of temporal misalignment and weight arithmetic with time vectors, formed by finetuning a model on a specific time period and then subtracting its pretrained weights. we show that the weights of time vectors are more similar if their corresponding times are closer, and vice versa. these similarities are highly correlated with temporal misalignment at both yearly and monthly scales (which exhibit seasonal patterns). leveraging this temporal structure in weight space, we induce new models that perform better on intervening years by interpolating between adjacent time vectors. similarly, we use task analogies to improve downstream performance on future time periods using only unlabeled data from those times. these results show that task arithmetic can be a simple tool for updating models to new time periods.
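the weight arithmetic described above is simple enough to sketch directly; a minimal version over pytorch state dicts (the function names are ours, and the state dicts are assumed to share keys and shapes):

```python
import torch

def time_vector(pretrained_sd, finetuned_sd):
    """tau_i = theta_i - theta_pre, computed per parameter tensor."""
    return {k: finetuned_sd[k] - pretrained_sd[k] for k in pretrained_sd}

def interpolate(pretrained_sd, tau_i, tau_j, alpha):
    """theta = theta_pre + alpha * tau_i + (1 - alpha) * tau_j;
    sweeping alpha targets time periods between (or beyond) i and j."""
    return {k: pretrained_sd[k] + alpha * tau_i[k] + (1 - alpha) * tau_j[k]
            for k in pretrained_sd}

# usage sketch:
# tau_2016 = time_vector(theta_pre, theta_2016)
# tau_2020 = time_vector(theta_pre, theta_2020)
# model.load_state_dict(interpolate(theta_pre, tau_2016, tau_2020, 0.5))
```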
| 27,308
|
710
| 2024
|
Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation
|
The latest generative large language models (LLMs) have found their application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models. However, more research is needed to assess how different prompts, seed data selection strategies, filtering methods, or model settings affect the quality of paraphrased data (and downstream models). In this study, we investigate three text diversity incentive methods well established in crowdsourcing: taboo words, hints by previous outlier solutions, and chaining on previous outlier solutions. Using these incentive methods as part of instructions to LLMs augmenting text datasets, we measure their effects on generated texts’ lexical diversity and downstream model performance. We compare the effects over 5 different LLMs, 6 datasets and 2 downstream models. We show that diversity is most increased by taboo words, but downstream model performance is highest with hints.
|
https://aclanthology.org/2024.acl-long.710
|
## introduction the emergence of large language models (llms) such as gpt-4, llama, etc., has sparked interest in using them to augment textual datasets @xcite @xcite . in these scenarios, the number of samples is expanded by paraphrasing existing ones through llm prompting. the created paraphrases are then added to the original dataset and used for downstream model training. such methods have been explored for various domains such as sentiment classification @xcite , news classification (piedboeuf and langlais, 2023) and health symptom classification @xcite . however, investigation of the effects of various prompts, specific instructions, and crowdsourcing-inspired selection of seed data in the llm-based text augmentation process is lacking. crowdsourcing is an established practice for collecting training or validation examples for a variety of nlp tasks. scenarios of data collection using human workers can be similar to those of data augmentation: workers create paraphrases of existing sentences chosen from a dataset. the aim of such data collection is to increase the data diversity and the subsequent performance of classifiers trained on the data @xcite . to increase the diversity, various methods are used in crowdsourcing to guide workers. these include taboo words @xcite -where the most significant words from the collected data are identified and listed in the worker instructions to be avoided during paraphrasing, chaining @xcite -where outliers in the previous paraphrases are identified and used as seed sentences in the next round of data collection, and hints @xcite -where previous outlier paraphrases are used as examples in the instructions. the hints method itself is similar to llm in-context learning, where examples are included in the instructions for the model to achieve better performance. all of these diversity incentive methods report increased diversity of paraphrases and some also report increased performance of the classifiers trained on the so-collected data. this work is inspired by the parallels between crowdsourcing and llm prompting and by the performance of diversity incentive methods on the diversity of paraphrases and the performance of models trained on them. we investigate the effects of the three diversity incentive methods (originating in crowdsourcing) on data augmentation using llms. the baseline, taken from a previous study @xcite , is simple prompting for paraphrases. measuring paraphrase diversity and downstream performance of classification models, we assess whether the diversity incentives (added to the base prompt) improve llm outputs similarly to crowdsourcing scenarios. to our knowledge, this is the first work to investigate the effects of diversity incentive methods on llms. in this paper, we answer the following research questions: rq1: does the usage of diversity incentive methods on llms yield more diverse paraphrases? (compared to base prompting) rq2: do classifiers achieve better performance if trained on data augmented using diversity incentive methods on llms? (compared to base prompting) to answer these questions, we have conducted a data augmentation experiment using 5 different llms on 6 different datasets in the tasks of sentiment (movie and app reviews), news, and intent (flight and voice assistant commands) classification. in this experiment, we repeatedly collect llm paraphrases using different diversity incentive methods.
then, we compare the lexical diversity of the collected data and the performance of downstream classifiers. additionally, we also conduct an ablation study, where we modify the diversity incentive methods with random data to validate that the inputs used by these methods (e.g., most influential taboo words, outlier paraphrases) contribute to the methods' performance, as well as a combination of the best performing methods for lexical diversity and model performance. in total, we collected 253,500 paraphrases. the most prominent findings are the following: 1) we do not observe statistically significant improvements in the lexical diversity of the generated datasets, only minor improvements using the taboo method, 2) the hints method increases the performance of classification models trained on such data compared to the baseline, while also reducing standard deviation and thus increasing the stability of results, 3) the chaining method and the taboo method both do not significantly affect the performance of classification models trained on such data compared to the baseline. ## data collection and evaluation methodology we collected paraphrases for all combinations of the following: 5 different llms, 6 datasets, and 3 diversity incentive methods + 1 base prompting. for each combination, 5 collection iterations were performed: in each, 6 random seed sentences per label were drawn from a dataset. for each prompt fired, 5 paraphrases were collected. this totalled 142,500 collected paraphrases when aggregated across datasets and llms. for the ablation study and the combination of best methods in section 6 we collected an additional 111,000 paraphrases in total. as the diversity incentive methods need some previously collected data to determine their cues (hints, seeds or taboo words), each iteration consisted of 2 rounds: first we collected data using only the basic prompt, and in the second round we collected data using the given diversity incentive method (or the base prompt method). thus, the resulting datasets for each method consist of seed data and data collected from both rounds. the entire data collection process is visualized in figure 1 . after the paraphrases were collected, we evaluated them in several steps. first, we manually checked the validity of a subset (50%) of the collected data (i.e., is the created sample a true paraphrase retaining the label?). second, we computed the diversity of the collected data, comparing the mean vocabulary size (no. of unique words) and the mean number of unique 3-grams for each diversity incentive method (refers to rq1). third, we evaluated the performance of models trained on the created paraphrases (refers to rq2). for each combination of llm, dataset and method, we finetuned bert-large 5 times and mistral-7b-v0.1 3 times (the dataset also determined the classification task to which a model was finetuned). we evaluated the accuracy of the trained models on the full test set for the given dataset and, for mistral, on a subset of the test set to save computational resources, following previous works @xcite @xcite , as the inference time is long and costly. details of the finetuning process can be found in appendices d and e. ## finetuning models on data collected via diversity incentive methods to investigate whether the diversity incentive methods improve the performance of downstream models, we finetuned bert-large 5 times and mistral 3 times for each llm-dataset combination.
additionally, as we work with limited data, which was found to cause large variance and instability in finetuning results @xcite @xcite , we sampled data 5 times. this resulted in 25 finetuned classifiers for bert (5 data collection rounds and 5 finetunings for each of those data collection rounds) and 15 for mistral that we evaluate per dataset-llm combination. the full details about the hyperparameters and the finetuning setup of the bert and mistral classifiers can be found in appendices d and e respectively. we report the accuracy of the finetuned models on the test split of each dataset and focus on 2 main attributes: mean accuracy and stability of performance (by measuring the standard deviation of accuracy). additionally, we also conducted mann-whitney u tests (at the 0.05 significance level) between the baseline prompt method and the other diversity incentive methods. we are interested in consistent, better performance of a diversity incentive method over the prompt baseline across llms and datasets, as fluctuating performance could be an indicator of random effects. ## combining diversity incentives as the taboo method achieved the best results in lexical diversity and the hints method achieved the best results in model performance, as a follow-up we decided to combine these two methods to see if we can achieve an improvement. we performed the data collection and finetuning process in the same way as described in section 3. in terms of lexical diversity, the combined method does not have a statistically significant effect on the results, although the mean number of unique words is higher than the baseline in 18/30 cases and the number of unique n-grams is higher in 16/30 cases. however, in some of the remaining cases a considerable (more than 5%) drop can be observed. in terms of model performance, the combined method statistically significantly decreased the model performance relative to the baseline in 5/30 cases with no increases for bert, and increased performance in 4/30 cases for mistral. additionally, it always performed worse than either the hints or the taboo method. in summary, the combination of the hints and taboo methods into one method grants little to no advantage over either of the methods in both lexical diversity and model performance. we hypothesize that this might be due to the more complicated instructions given to the llm when collecting the data. a decoupling of the methods in a chain of tasks could potentially improve this approach in the future. ## discussion given the results of our experiments, we note the following observations: first, contrary to the performance of diversity incentive methods observed by related work in crowdsourcing settings (better lexical diversity of paraphrases and better performance of downstream models), not all of the methods show improvement of lexical diversity when used with llms. the worst performing method is the chaining method, where recent works have already pointed out that llms create progressively worse paraphrases when using their own outputs as seed sentences repeatedly @xcite . however, none of the changes in lexical diversity are of statistical significance. second, the best performing method for data augmentation is the hints method, which is similar to in-context learning where demonstrations of samples are provided to the llm as part of the prompt. this might be the reason why this method works so well, as the llm's own paraphrases guide it to better output, similar to in-context learning.
third, we observe that, contrary to some previous works @xcite , the lexical diversity of the paraphrases does not correlate with the performance of models trained on them. even though the data collected using the taboo method yields the highest lexical diversity, models trained on such data do not achieve consistently better performance against the baseline. fourth, the increase in mean performance and stability seems to be small, but in relative terms (compared to the baseline method) it is notable, as the increase in mean performance ranges from 0.6% to 2.5% over the baseline for bert and from 1% to 11% for mistral. for stability, the increases are even larger: for bert the range is between 5% and 35% over the baseline, and for mistral from 10% to 66%. fifth, diversity incentives require additional computations (for significant words and outlier paraphrases) and also require a larger llm context (e.g., hints use additional paraphrases in the instructions of the model), meaning higher costs. as such, the increased computation costs may not warrant the use of diversity incentives. sixth, the combination of the best method for lexical diversity (taboo) and the best method for model performance (hints) did not yield increases in both lexical diversity and model performance, but rather performed poorly. we hypothesize that this might be due to the increased context length for the llm, with additional instructions that are hard to perform in one single action. the promising results using the hints method open possibilities for investigations of in-context learning for text generation in llms, as the quality of data generated using hints seems to be better than without them. this is in line with recent results @xcite that indicate that the usage of previous examples in instructions for llms leads to better generated data. ## conclusion in this work, we investigated the effects of different diversity incentive methods used in crowdsourcing on the lexical diversity of llm-augmented textual datasets and the performance of classification models trained on such data. we compared three such methods with a baseline of using only prompts asking the llm to paraphrase a given seed. we experimented with 5 llms on 6 datasets. our results indicate that the taboo method increases the lexical diversity of the collected data, but that this change is not of statistical significance and affects performance only randomly. the hints method affects lexical diversity randomly, but increases the performance of classification models (both in stability and in mean performance) that were trained on data collected using this method. the chaining method does not improve the lexical diversity or the performance of classification models trained on data collected using this method. the combination of the hints and taboo methods does not significantly increase lexical diversity or model performance. a common downside of diversity incentive methods is the increase in inference costs. also, there is still some randomness present when using these methods, as even the best performing methods do not increase lexical diversity or model performance in all cases. the notable relative increase in the stability and mean performance of models trained on data collected using the hints method indicates that llms can produce data of better quality using this method when aiming for downstream classifier performance.
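as an illustration of how the hints incentive can be operationalized in a prompt, a minimal sketch; the wording and function name are assumptions, not the paper's exact prompt:

```python
def build_hints_prompt(seed, outlier_paraphrases, n=5):
    """base paraphrasing prompt extended with the hints incentive:
    outlier paraphrases from an earlier collection round are shown
    to the llm as examples."""
    examples = "\n".join(f"- {p}" for p in outlier_paraphrases)
    return (
        f"paraphrase the following sentence {n} times, preserving its "
        f"meaning and label.\n"
        f"sentence: {seed}\n"
        f"examples of diverse paraphrases from a previous round:\n{examples}"
    )

# build_hints_prompt("book a flight to delhi",
#                    ["i need a delhi flight", "get me on a plane to delhi"])
```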
| 27,878
|
16
| 2021
|
Interesting cross-border news discovery using cross-lingual article linking and document similarity
|
Team Name: team-8 Embeddia Tool: Cross-Lingual Document Retrieval Zosa et al. Dataset: Estonian and Latvian news datasets abstract: Contemporary news media face increasing amounts of available data that can be of use when prioritizing, selecting and discovering new news. In this work we propose a methodology for retrieving interesting articles in a cross-border news discovery setting. More specifically, we explore how a set of seed documents in Estonian can be projected in Latvian document space and serve as a basis for discovery of novel interesting pieces of Latvian news that would interest Estonian readers. The proposed methodology was evaluated by Estonian journalist who confirmed that in the best setting, from top 10 retrieved Latvian documents, half of them represent news that are potentially interesting to be taken by the Estonian media house and presented to Estonian readers.
|
https://aclanthology.org/2021.hackashop-1.16
|
## introduction this paper presents our results of the participation in the hackathon, which was organised as part of the eacl 2021 hackashop on news media content analysis and automated report generation. we are addressing the embeddia hackathon challenge on identifying interesting news from neighbouring countries @xcite in the estonian and latvian context, which is a fully novel document retrieval task performed on recently released embeddia news datasets. estonian journalists are very interested in identifying stories from latvia which will attract a large number of readers and are "special". while performing a keyword-based search for latvian news in which estonians are mentioned is a simple task, this challenge instead aims to identify a small set of documents spanning a larger number of topics, e.g. scandals, deaths and gossip, that might be somehow connected to estonia: not only by mentioning estonians but by identifying news and stories that estonians relate to (for example, when similar things have happened in estonia or when similar news has been popular in estonia). in our approach, we first automatically create a collection of interesting articles using a string-based search and cross-lingual document linking, and then rank the query documents based on the proportion of interesting documents in their neighbourhood (where the neighbourhood is defined by document similarity) using the newly introduced seed news of interest (snir) score @xcite . the article first presents the datasets (section 2), introduces the methodology (section 3), and presents our experimental results (section 4). the code and the data are made publicly available (see section 5). finally, section 6 concludes the paper and presents the ideas for further work. ## datasets in this study, we used the following resources. ## methodology our methodology consists of two steps. first, we automatically construct the datasets of interesting latvian articles, and next we propose a method to retrieve interesting articles by ranking a given query document based on the proportion of interesting articles in its neighbourhood. ## availability the code and data of the experiments are made available on github: https://github.com/bkolosk1/interesting-cross-border-news-discovery ## conclusion and future work in this work we tackled the problem of retrieving interesting news from one country for the context of another neighbouring country. we focused on finding interesting news in the latvian news space that would be engaging for the estonian public. we used the latvian and estonian embeddia datasets to construct the document space. first we used a string matching approach to identify a subset of news in estonian media that originated from latvian news. next, we utilized methods for ad hoc cross-lingual document retrieval to find corresponding articles in the latvian news space. after automatically retrieving this set of latvian news articles of interest, we used this information in a novel metric, snir, that analyses a news article's neighbourhood in order to measure its relevance (interestingness). the assumption of the metric is that if the surrounding documents of a query point are relevant, this new point might be of relevance.
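a minimal sketch of one plausible instantiation of the snir idea described above - the proportion of 'interesting' documents among a query document's nearest neighbours; the neighbourhood size, the cosine similarity, and all names are assumptions:

```python
import numpy as np

def snir(query_vec, doc_vecs, interesting_mask, k=10):
    """share of 'interesting' documents among the k nearest neighbours
    of a query document under cosine similarity.  assumes the query
    itself is excluded from doc_vecs."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    nn_idx = np.argsort(-sims)[:k]
    return float(interesting_mask[nn_idx].mean())

# usage sketch:
# doc_vecs: (n_docs, dim) embeddings of latvian articles
# interesting_mask: boolean array marking the automatically linked
#                   'interesting' seed articles
# snir(query_vec, doc_vecs, interesting_mask, k=10)
```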
the snir scores of 20 randomly selected documents and of 20 documents identified as examples of interesting news by an estonian journalist showed that their values differ, which is promising. for further work, we propose exploring the keywords appearing in the clusters of interesting news and exploiting their named entity tags in order to achieve even better performance. we also want to include background knowledge from knowledge graphs to improve the document similarity evaluation. special attention will also be paid to setting a threshold for snir which would allow for real-time investigation of the best candidates in real journalistic practice.
| 10,065
|
11
| 2022
|
Unmet Creativity Support Needs in Computationally Supported Creative Writing
|
Large language models (LLMs) enabled by the datasets and computing power of the last decade have recently gained popularity for their capacity to generate plausible natural language text from human-provided prompts. This ability makes them appealing to fiction writers as prospective co-creative agents, addressing the common challenge of writer’s block, or getting unstuck. However, creative writers face additional challenges, including maintaining narrative consistency, developing plot structure, architecting reader experience, and refining their expressive intent, which are not well-addressed by current LLM-backed tools. In this paper, we define these needs by grounding them in cognitive and theoretical literature, then survey previous computational narrative research that holds promise for supporting each of them in a co-creative setting.
|
https://aclanthology.org/2022.in2writing-1.11
|
## introduction mixed-initiative co-creative @xcite creativity support tools @xcite for creative writing have recently seen a surge of interest in research communities, coinciding with the introduction of large language models (llms) such as gpt-3 @xcite that can provide coherent suggestions for the continuation of human-written text. several recent efforts have been made to understand the experiences of writers who work with these tools to produce texts @xcite @xcite . however, less attention has been paid to the development of systems that can provide forms of creative writing support beyond short-term suggestions for textual continuation. meanwhile, recent efforts to understand the playful creative writing communities that have emerged around interactive emergent narrative games @xcite and to provide computational support for playful creative writing at the plot-structure level @xcite have revealed a preliminary inventory of several distinct but interrelated creativity support needs among creative writers, including: getting unstuck, maintaining narrative consistency, developing longer-term plot structure, managing reader experience, and refining high-level expressive intent. current large language models are good at addressing the first of these needs, getting unstuck, via short-term suggestions that can prompt writers to take their stories in unexpected new directions. however, they do not directly address consistency maintenance, longer-term plot structure, management of reader experience, or the challenge of refining high-level expressive intent, and some novelists even suggest that llms may actively work against the construction of coherent plot structure due to the highly divergent nature of llm suggestions @xcite . some recent work aims to improve llms in ways that could enable them to meet these needs: for instance, work in long text generation @xcite @xcite could assist users with consistency maintenance; work on hierarchical concept-driven language models @xcite could help to maintain plot structure in generated text; and work in diverse decoding methods @xcite could help users refine their intent by selecting from among diverse potential completions of the same text. however, the possibility of supporting these needs through other forms of technology may also be worth investigating. in this paper, we describe each of these creative writing support needs in more detail, then survey previous research from communities outside of nlp/computational linguistics that has either been shown capable of addressing these creative needs or shows potential for supporting them. our aim with this paper is to create a bridge between the acl community and the ai/digital games research community that may yield productive insight towards synthesizing these approaches that have evolved in parallel. we limit the scope of our discussion primarily to narrative fiction, particularly in the form of short stories, novels, and game writing/interactive storytelling, so the suggestions made here may not all be applicable to other forms of creative writing (such as poetry). however, we attempt to avoid limiting ourselves to purely text-based storytelling in which only the written word is used to convey meaning; we are also interested in forms of narrative fiction that target visual, audio, and hybrid renderings of fictional events, such as film and game narrative, since many technologies capable of reasoning about plot structure are readily applicable to these domains. ## technologies and approaches in this section, we overview technologies that have shown promise for addressing the needs outlined in the previous section.
## conclusion we have presented five creative writing support needs, only one of which (getting unstuck) is meaningfully supported by current large language models, and surveyed technologies for addressing the remaining four needs that have arisen from the ai/digital games research community. these technologies are at varying levels of maturity, and most of them have only been tested in purely automated or generative forms rather than in mixed-initiative, co-creative interaction modes. an important line of future work will be to evaluate these technologies in those modes and determine interfaces and interaction protocols that amplify and foster human creativity in the writing process. our goal with this paper is not to assert the superiority of world-model or knowledge-engineering based approaches over llms, but rather to emphasize that there is a set of needs and affordances that these techniques can address and provide that are complementary to the needs addressed and affordances provided by llms. by bridging research communities focused (on one hand) on computing with natural language and (on the other) on simulating story worlds and reasoning about narrative structure, we hope to pave the way for hybrid and unified models that can transform the human creative writing experience, much like the neurosymbolic approaches to automated story generation (martin, 2021) that undergird several recent advances in story generation as a field.
| 17,222
|
114
| 2,025
|
QUST_NLP at SemEval-2025 Task 7: A Three-Stage Retrieval Framework for Monolingual and Crosslingual Fact-Checked Claim Retrieval
|
This paper describes the participation of team QUST_NLP in the SemEval-2025 Task 7. We propose a three-stage retrieval framework specifically designed for fact-checked claim retrieval. Initially, we evaluate the performance of several retrieval models and select the one that yields the best results for candidate retrieval. Next, we employ multiple re-ranking models to enhance the candidate results, with each model selecting the Top-10 outcomes. In the final stage, we utilize weighted voting to determine the final retrieval outcomes. Our approach achieved 5th place in the monolingual track and 7th place in the crosslingual track. We release our system code at: https://github.com/warmth27/SemEval2025_Task7.
|
https://aclanthology.org/2025.semeval-1.114
|
## introduction semeval-2025 shared task 7 focuses on the retrieval of monolingual and crosslingual fact-checked claims, aiming to tackle the global challenge of misinformation spread @xcite . we engaged in two tracks of the semeval-2025 shared task 7: monolingual and crosslingual. the monolingual track demands methods capable of retrieving the relationship between social media posts and fact-checked claims within the same linguistic environment. this task presents challenges such as noise arising from the large volume of data and difficulties related to the imbalance of language resources @xcite . the crosslingual track requires methods that can retrieve fact-checked claims related to social media posts regardless of whether the language of the post matches the language of the related fact-checked claim. the primary challenge in crosslingual retrieval lies in translation inconsistencies, particularly for low-resource languages (qi et al., 2023) @xcite . the absence of high-quality translation tools exacerbates the complexity of achieving accurate crosslingual semantic alignment. to tackle the aforementioned challenges, we propose a three-stage retrieval framework. initially, we evaluate and employ several pre-trained language models for preliminary retrieval of candidate results @xcite , thereby mitigating the noise caused by the large data volume and alleviating the adverse effects of language resource imbalance. subsequently, a re-ranking model is applied to refine the ranking of the candidate results, enhancing the position of the fact-checked claims most relevant to the social media posts. for the crosslingual retrieval task, we utilize machine-translated data for preliminary retrieval, followed by ranking the results using a re-ranking model fine-tuned with english data. finally, a weighted voting strategy is employed to combine the outputs from multiple re-ranking models, further enhancing the system's accuracy. our approach achieved 5th place in the monolingual track and 7th place in the crosslingual track, thereby validating its effectiveness and feasibility in addressing the aforementioned challenges. ## system description our approach utilizes a three-stage retrieval framework: retrieval stage, re-ranking stage, and weighted voting stage. this staged design excels at balancing retrieval efficiency and accuracy, making it particularly suitable for handling large-scale datasets. by generating candidate results during the initial retrieval stage, fine-tuning them during the re-ranking phase, and finally aggregating predictions from multiple models in the weighted voting stage, we are able to obtain the final solution. the detailed process is shown in figure 1 . ## conclusion and limitation this paper introduces a monolingual and crosslingual fact-checked claim retrieval method utilizing a three-stage retrieval framework. by integrating retrieval models, re-ranking models, and weighted voting, we effectively address challenges such as data noise and imbalanced language resources. our findings suggest that employing a mixed input strategy markedly enhances retrieval performance, while fine-tuning further optimizes re-ranking efficacy. our method achieved 5th place in the monolingual track and 7th place in the crosslingual track. we acknowledge that our method has limitations in terms of translation consistency and quality. future work will focus on enhancing translation quality and refining model fine-tuning strategies to overcome these challenges.
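a minimal sketch of the three-stage pipeline described above, using the sentence-transformers library; the model names, top-k sizes, and the rank-weighted voting scheme are illustrative assumptions rather than the team's exact configuration:

```python
import numpy as np
from sentence_transformers import SentenceTransformer, CrossEncoder, util

retriever = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
rerankers = [CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")]  # one entry per re-ranking model
weights = [1.0]  # per-model voting weights, tuned on dev data

def retrieve(post, claims, top_k=100, final_k=10):
    # Stage 1: dense retrieval of top_k candidate claims.
    post_emb = retriever.encode(post, convert_to_tensor=True)
    claim_embs = retriever.encode(claims, convert_to_tensor=True)
    hits = util.semantic_search(post_emb, claim_embs, top_k=top_k)[0]
    candidates = [claims[h["corpus_id"]] for h in hits]

    # Stage 2: each re-ranker rescores the candidates and keeps its Top-10.
    votes = np.zeros(len(candidates))
    for model, w in zip(rerankers, weights):
        scores = model.predict([(post, c) for c in candidates])
        for rank, idx in enumerate(np.argsort(scores)[::-1][:10]):
            # Stage 3: weighted votes; higher-ranked picks count more.
            votes[idx] += w * (10 - rank)

    return [candidates[i] for i in np.argsort(votes)[::-1][:final_k]]
```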
| 40,531
|
15
| 2,016
|
Investigating the Impact of Various Partial Diacritization Schemes on Arabic-English Statistical Machine Translation
|
Most diacritics in Arabic represent short vowels. In Arabic orthography, such diacritics are considered optional. The absence of these diacritics naturally leads to significant word ambiguity, on top of the inherent ambiguity present in fully diacritized words. Word ambiguity is a significant impediment for machine translation. Despite the ambiguity introduced by the lack of diacritization, context helps ameliorate the situation. Identifying the appropriate amount of diacritic restoration to reduce word sense ambiguity in the context of machine translation is the object of this paper. Diacritic marks help reduce the number of possible lexical word choices assigned to a source word, which leads to better quality translated sentences. We investigate a variety of (linguistically motivated) partial diacritization schemes that preserve some of the semantics that in essence complement the implicit contextual information present in the sentences. We also study the effect of training data size and report results on three standard test sets that represent a combination of different genres. The results show statistically significant improvements for some schemes compared to two baselines: text with no diacritics (the typical writing system adopted for Arabic) and text that is fully diacritized.
|
https://aclanthology.org/2016.amta-researchers.15
|
## introduction resolving natural language ambiguity is at the crux of the nlp enterprise. ambiguity refers to the problem of possibly having different interpretations for different segments (words, phrases, etc.) of a sentence. languages such as arabic, hebrew and persian are typically written in a manner that exacerbates this ambiguity problem and increases the homograph rate by underspecifying some of the letters such as short vowels and consonantal gemination, which in turn increases the effect of having multiple interpretations for the same word. this renders text even more ambiguous than typically expected. while context helps native speakers of the language resolve some of the ambiguity, context alone does not always produce adequate clarity for interpretation. the problem is further complicated in arabic by the fact that there are no native speakers of modern standard arabic (msa), which is the language used in education and formal settings. instead, speakers of arabic converse in various dialects of arabic which are at times starkly different from msa. one solution for this problem is diacritic restoration, or diacritization, which refers to rendering the underspecified diacritics explicit in the text. we investigate the problem of diacritization within the context of an arabic-to-english statistical machine translation (smt) system. we address the problem in msa texts, the majority of which are underspecified for these diacritic marks. we focus here on the most prominent arabic diacritics, which are the short vowels @xcite , the syllable boundary marker, known as sukoon (o), the indefiniteness marker, known as nunation (f, k, n), and the consonantal doubling marker (gemination) known as shadda (∼). in this study, we aim to investigate the appropriate level and type of diacritic restoration that would have the biggest impact on natural language understanding as tested and evaluated via machine translation. hence we experiment with various diacritization schemes based on lexical and/or syntactic information. this current work is a follow-on to the pilot work presented in @xcite . however, it is different in the following respects: (1) we explore automatically diacritized data; (2) we define more schemes that target lexical and/or syntactic properties of the arabic language; and (3) we test the robustness of our observations taking into consideration varying training sizes and cross-genre evaluation. ## related work automatic arabic diacritization has been addressed thoroughly in @xcite @xcite @xcite . full diacritization indicates rendering the text with all the most prominent diacritics, namely (a, i, u, o, ∼). initial efforts in automatic diacritization include rule-based approaches to add all diacritics in the texts @xcite ; however, it is expensive to maintain these rules and generalize them to unseen instances. most studies focused on full diacritic restoration. for automatic speech recognition (asr), @xcite perform full diacritization on msa speech transcripts for language modeling. they show that developing asr models on fully diacritized datasets improves performance significantly. supervised classifiers such as hidden markov models (hmm) and maximum entropy (maxent) have been employed for diacritization @xcite @xcite . in a study conducted by @xcite , the researchers use maxent trained on msa with lexical and n-gram features to improve asr.
another study uses decision trees and stochastic language models to fully diacritize texts in order to render graphemes to synthesized speech @xcite . the buckwalter arabic morphological analysis (bama) @xcite system has been used along with a single tagger or a language model to select amongst the diacritized analyses in context to render text fully diacritized @xcite . in @xcite , the authors show that some inflectional and lexical morphological features improve the performance of syntactic parsing in arabic. although @xcite have not used diacritics directly in their work, they use the same essential information that is used to diacritize arabic texts. @xcite not only investigate the impact of full diacritization on statistical machine translation (smt) but also introduce the notion of partial diacritization. they also show that several schemes have a small, albeit not significant, positive effect on smt performance over none and full diacritization, despite the significant increase in the number of types. although the results in @xcite are not statistically significant, they provide directions of research that we can exploit to increase the performance of arabic-related nlp applications. in a study conducted by alhanai and glass (2014), three partial diacritic schemes were defined and compared to both fully and non-diacritized versions of the words. their study found that fully diacritized text without gemination yields statistically better performance than fully diacritized text including gemination in an asr application. our work follows the same general procedure as @xcite , where we study the impact of some aspects of diacritization information on nlp applications, smt in particular. for arabic reading comprehension, @xcite study the impact of partial diacritics in improving arabic speakers' reading comprehension. their study shows the effectiveness of having some level of diacritization, between the none and fully diacritized forms, that helps readers disambiguate homographs that cannot be understood from the surrounding contexts. this shows the importance of accurate automatic partial diacritization, not only for improving different nlp applications but also for diacritizing texts to help readers understand arabic texts better. with the goal of helping other researchers develop partial diacritization, @xcite conducted a pilot study that minimally diacritizes the dataset to reduce lexical ambiguity and helps generate models to find an optimal level of diacritization for some nlp applications. although the result of this minimally-diacritized annotation was highly affected by the annotators' subjectivity and background, it has shown some promising results for future studies. the idea of integrating word sense disambiguation (wsd) technologies into the smt framework has been studied previously, tackling different aspects of the phenomenon and showing statistically significant improvement from integrating explicit wsd into the smt system @xcite @xcite . mainly, wsd integration improves the ability of the system to choose the target translation if it has been incorporated efficiently. @xcite show an improvement in a chinese-to-english smt system on eight different automatic evaluation metrics when they integrate wsd into their translation system at decode time. they use the same parallel corpus used for training and the phrase translation table generated by the smt tool to disambiguate senses of the words by using the aligned phrases in the target language.
all of the previous work incorporates features that help disambiguate senses in a supervised or unsupervised manner to generate better quality translations. some of these studies change the smt pipeline to integrate wsd, but others implement it as a pre-processing step at decode time. in this study, we have the same goal as theirs, which is to appropriately select the correct sense of a target word at decode time. we implement this by adding a certain amount of diacritics in arabic as pre-processing in the data preparation step. thus, the translation quality is not only enhanced by the appropriate choice of target word but also by the fact that the word alignment procedure is improved. ## scheme extraction we investigate the impact of various partial diacritization schemes on an smt application. we compare their performance against two baselines, specifically full diacritization, where all the diacritics are present, and none, where no diacritics are present. similar to the extraction strategy of @xcite , each of these schemes is identified from fully diacritized arabic datasets. additionally, the extraction process of some schemes involves the full morphological analysis of the words' part of speech and their lemmas. to identify these morphological features, we use madamira, a morphological analyzer and disambiguator for the arabic language @xcite . the quality of the diacritization schemes relies on the performance of the automatic diacritization to predict diacritics. it is important to note that we rely on the underlying diacritized lemma form for ensuring extraction accuracy. @xcite define six different diacritization schemes based on their usage prominence in the arabic treebank (atb) @xcite . namely, they are fully diacritized (full), passive voice diacritic marks (pass), consonant doubling or gemination (gem), presence of the syllable boundary marker sukoon (suk), syntactic case and mood diacritics (cm), and the case of no diacritization (none). in this study, we adopt the same previously mentioned schemes in addition to introducing several new ones: full-cm, pass+cm, pass+gem, suk+gem, pass+suk, pass+suk+gem, full-cm-pass, tanween. the following is a detailed explanation of these diacritic schemes. the schemes are linguistically motivated, reflecting lexical, syntactic, or both types of information. the arabic sentences are written in buckwalter transliteration and are tokenized according to the atb style (arabic treebank tokenization). it is crucial to note that if the word is not affected by the defined diacritic pattern, we remove all of its diacritics (i.e., the none scheme). baselines: none: indicates that no diacritics are kept at all in the sentence, including the removal of the naturally occurring diacritics.
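as a concrete illustration of the purely lexical schemes, the sketch below strips a fully diacritized buckwalter string down to a chosen scheme; the diacritic symbols (a/u/i for short vowels, o for sukoon, ~ for shadda, and the conventional uppercase F/N/K for nunation) are our assumption about the encoding, and schemes that need morphological analysis (pass, cm) are out of scope for this sketch:

```python
import re

# Assumed Buckwalter diacritic symbols: short vowels, sukoon, nunation, shadda.
DIACRITICS = "auioFNK~"

SCHEMES = {
    "none": "",          # baseline: strip every diacritic
    "full": DIACRITICS,  # baseline: keep every diacritic
    "gem": "~",          # keep gemination (shadda) only
    "suk": "o",          # keep sukoon only
    "suk+gem": "o~",
    "tanween": "FNK",    # keep nunation only
}

def apply_scheme(buckwalter_text, scheme):
    keep = set(SCHEMES[scheme])
    drop = "".join(c for c in DIACRITICS if c not in keep)
    return re.sub("[%s]" % re.escape(drop), "", buckwalter_text)

print(apply_scheme("kataba", "none"))  # -> "ktb"
print(apply_scheme("madap~", "gem"))   # keeps the shadda, drops the short vowels
```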
| 986
|
468
| 2,022
|
Towards Robust Neural Machine Translation with Iterative Scheduled Data-Switch Training
|
Most existing methods on robust neural machine translation (NMT) construct adversarial examples by injecting noise into authentic examples and indiscriminately exploit two types of examples. They require the model to translate both the authentic source sentence and its adversarial counterpart into the identical target sentence within the same training stage, which may be a suboptimal choice to achieve robust NMT. In this paper, we first conduct a preliminary study to confirm this claim and further propose an Iterative Scheduled Data-switch Training Framework to mitigate this problem. Specifically, we introduce two training stages, iteratively switching between authentic and adversarial examples. Compared with previous studies, our model focuses more on just one type of examples at each single stage, which can better exploit authentic and adversarial examples, and thus obtaining a better robust NMT model. Moreover, we introduce an improved curriculum learning method with a sampling strategy to better schedule the process of noise injection. Experimental results show that our model significantly surpasses several competitive baselines on four translation benchmarks. Our source code is available at https://github.com/DeepLearnXMU/RobustNMT-ISDST .
|
https://aclanthology.org/2022.coling-1.468
|
## introduction in recent years, neural machine translation (nmt) has achieved great success @xcite @xcite . usually, nmt models are trained on clean parallel corpora and thus achieve promising performance on clean inputs. however, small perturbations, such as replacing words in the input sentences, can mislead the trained model to generate incorrect translations @xcite . in real-world scenarios, it is often required to deal with such sentences. thus, it has important academic value and application prospects to design a robust nmt model for both clean and noisy inputs. to reach this goal, some researchers explore data-oriented approaches focusing on constructing adversarial examples @xcite . generally, adversarial examples are used to augment the authentic dataset or to fine-tune an nmt model pre-trained on the authentic dataset to improve robustness. although data-oriented approaches are simple and efficient, they leverage adversarial examples coarsely, as concluded by @xcite and @xcite , which cannot reach the full potential of these examples. besides, researchers also study model-oriented approaches. some design additional model components to correct noisy inputs @xcite @xcite . there are more studies exploring training strategies for robust nmt, including multi-task learning @xcite , contrastive learning @xcite , and adversarial training @xcite . despite their success, there still exist two drawbacks: 1) most existing methods indiscriminately exploit authentic and adversarial examples within the same training stage, which is a suboptimal choice confirmed in our preliminary study; 2) previous studies on robust nmt adopt a constant noise ratio to construct adversarial examples during training, while the determination of the noise ratio is a subtle process, i.e., too little noise may lead to poor robustness and too much noise may also hurt the model performance @xcite . therefore, dealing well with both clean and noisy inputs for nmt remains a significant but challenging task. in this paper, we first conduct a preliminary study, which reveals that indiscriminately exploiting authentic and adversarial examples within the same training stage is suboptimal. concretely, we find that this training strategy cannot significantly reduce the source sentence representation (ssr) discrepancies between authentic examples and the corresponding adversarial examples, resulting in suboptimal model training, which is reflected by lower model confidence on examples. based on this observation, we further propose an iterative scheduled data-switch training framework for robust nmt. under this framework, we train the model in a two-stage scheme, iteratively switching between authentic and adversarial examples with their individual modified training objectives. during training, we introduce an additional kullback-leibler (kl) divergence loss, expecting the model to make similar predictions on authentic and adversarial datasets. by doing so, at each training stage, the model not only focuses on one of the authentic and adversarial datasets but also avoids forgetting the knowledge from the other. therefore, our model is able to handle both clean and noisy inputs well. furthermore, we introduce curriculum learning (cl) to better schedule the process of noise injection.
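a minimal sketch of the per-stage objective: cross-entropy on the data type the current stage focuses on, plus the kl term pulling predictions on authentic and adversarial versions of the same sentence pair together. the interface of `model`, the kl direction, and the weight `alpha` are illustrative assumptions:

```python
import torch.nn.functional as F

def stage_loss(model, focus_batch, other_batch, alpha=1.0):
    # focus_batch: the stage's own data type (authentic or adversarial);
    # other_batch: the same sentence pairs in the other form.
    logits_focus = model(focus_batch["src"], focus_batch["tgt_in"])
    logits_other = model(other_batch["src"], other_batch["tgt_in"])

    ce = F.cross_entropy(
        logits_focus.view(-1, logits_focus.size(-1)),
        focus_batch["tgt_out"].view(-1),
        ignore_index=0,  # assumed padding id
    )
    # Keep predictions on the two data types close, so switching stages
    # does not erase what was learned in the previous stage.
    kl = F.kl_div(
        F.log_softmax(logits_focus, dim=-1),
        F.softmax(logits_other.detach(), dim=-1),
        reduction="batchmean",
    )
    return ce + alpha * kl
```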
particularly, inspired by the baby step strategy @xcite in cl, which gradually exposes more difficult examples to the model while still involving simple examples, we sample the noise ratio from a uniform distribution, where the sampling interval is progressively extended. compared with the naive cl strategy of continuously increasing the noise ratio, our strategy re-samples previously seen simple adversarial examples, which is beneficial to model generalization. in summary, our contributions are as follows: ## preliminary study indiscriminately exploiting authentic examples and their adversarial counterparts within the same training stage is an effective way to build a robust nmt model. however, it requires the model to overcome the ssr discrepancy between an authentic example (x, y) and its adversarial counterpart (x ′ , y), which increases the training difficulty of maximizing p(y|x; θ) and p(y|x ′ ; θ) simultaneously. we argue it may be a better choice to exploit authentic and adversarial examples in two training stages, iteratively switching between the two types of examples. in such a data-switch training manner, the model can better benefit from the knowledge of different stages. to verify our hypothesis, we use transformer @xcite as our nmt model and conduct a preliminary experiment on the iwslt14 de⇒en dataset. to be specific, we train three models: 1) transformer: we follow @xcite to train this model on the authentic dataset; 2) indisc-model: it indiscriminately exploits authentic and adversarial examples for training within the same stage. besides, following @xcite , we introduce a mean square error (mse) loss to enforce the corresponding encoder outputs to be similar; 3) switch-model: this model is trained in two training stages, iteratively switching between authentic and adversarial examples. we make an investigation through two metrics: 1) the euclidean distances of the ssr between authentic examples and their adversarial counterparts; 2) the model confidence, i.e., the log-likelihood values of target ground-truth sentences. ## methodology based on the observations in section 2, we further propose an iterative scheduled data-switch training framework for robust nmt. ## related work to build robust nmt models, researchers have proposed a range of methods, which can be mainly divided into two categories: data-oriented and model-oriented approaches. in the first category, how to construct adversarial examples is a non-trivial problem @xcite . usually, adversarial examples are used in two ways: one is to directly train a robust model using the dataset mixed with authentic and adversarial examples @xcite , and the other is to use adversarial examples to fine-tune the nmt model pre-trained on authentic examples @xcite @xcite . in the second category, some researchers design additional components for the nmt model to correct noisy inputs @xcite @xcite or explore fault-tolerant neural networks @xcite . meanwhile, more researchers resort to exploring training strategies, including multi-task learning @xcite , contrastive learning @xcite , and adversarial training @xcite @xcite . in this work, the proposed framework belongs to the second, model-oriented category. in this regard, most existing methods indiscriminately exploit authentic and adversarial examples within the same training stage, which is suboptimal, as confirmed in our preliminary study.
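the sampling-based curriculum can be sketched in a few lines: the upper bound of the noise-ratio interval grows with the baby step index, while uniform sampling over the whole interval keeps re-exposing the model to mild noise; the bounds, stage count, and word-replacement noise type are illustrative assumptions:

```python
import random

def sample_noise_ratio(step, total_steps, max_ratio=0.3, n_stages=5):
    # Baby steps: the interval (0, upper] is progressively extended,
    # but earlier (simpler) noise levels remain in the sampling range.
    stage = min(n_stages, 1 + step * n_stages // total_steps)
    upper = max_ratio * stage / n_stages
    return random.uniform(0.0, upper)

def inject_noise(tokens, ratio, vocab):
    # Word replacement as one simple way to build an adversarial example.
    return [random.choice(vocab) if random.random() < ratio else t
            for t in tokens]
```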
to mitigate this problem, we propose an iterative scheduled data-switch training framework for robust nmt, where we introduce two training stages, iteratively switching between authentic and adversarial examples. besides, inspired by the successful applications of curriculum learning (cl) in nmt @xcite @xcite , we use cl to better schedule the process of noise injection. particularly, we equip cl with a sampling strategy, which is beneficial to the model generalization. finally, note that @xcite introduce an alternated training to alleviate the performance drop caused by low-quality back-translation data. our work differs from theirs in three aspects: 1) we aim at building a robust nmt model dealing with clean and noisy inputs well, while @xcite try to prevent the model performance on clean test sets from being disturbed by synthetic data; 2) we introduce an improved cl method to better schedule the process of noise injection, which is beneficial to the model performance; 3) in addition to the conventional cross-entropy objective @xcite , we introduce an additional regularization term to cope with both clean and noisy inputs well. ## conclusion in this paper, we first conduct a preliminary study to reveal that indiscriminately exploiting authentic and adversarial examples for robust nmt is suboptimal. to achieve better robust nmt, we further propose an iterative scheduled data-switch training framework, where we train the model at two training stages, iteratively switching between authentic and adversarial examples. moreover, we introduce curriculum learning with a sampling strategy to schedule the process of noise injection at each training stage. extensive experiments show the superiority of our framework. in the future, we will introduce more types of real noise, such as asr errors, into our framework. besides, we plan to apply our framework to other natural language generation tasks, such as dialogue generation, so as to verify the generality of our framework.
| 14,424
|
126
| 2,024
|
An Inversion Attack Against Obfuscated Embedding Matrix in Language Model Inference
|
With the rapidly-growing deployment of large language model (LLM) inference services, privacy concerns have arisen regarding user input data. Recent studies are exploring transforming user inputs into obfuscated embedded vectors, so that the data will not be eavesdropped on by service providers. However, in this paper we show that, once again, without a solid and deliberate security design and analysis, such embedded vector obfuscation fails to protect users' privacy. We demonstrate this conclusion by conducting a novel inversion attack called Element-wise Differential Nearest Neighbor (EDNN) on the glide-reflection proposed in (CITATION), and the result showed that the original user input text can be 100% recovered from the obfuscated embedded vectors. We further analyze security requirements on embedding obfuscation and present several remedies to our proposed attack.
|
https://aclanthology.org/2024.emnlp-main.126
|
## introduction inference services of language models are now gaining popularity, with a considerable number of language models having been deployed on cloud servers. however, users might be concerned about the privacy of their data when requesting an inference service, that is, that their data would be eavesdropped on by malicious service providers. to address this problem, recent research has turned to adopting obfuscation techniques on the embedding matrix, ensuring that user inputs cannot be recovered from the obfuscated embeddings by service providers. embedding obfuscation is appealing since the obfuscated embeddings can be directly forwarded to the inference process as efficiently as plaintext embeddings, leading to practical potential for real applications compared with secure multi-party computation (mpc) and homomorphic encryption (he). for example, the state-of-the-art work in @xcite leverages glide-reflection for embedding obfuscation combined with user-side key-based hashing, to claim a private and secure inference solution. nonetheless, recent studies show that a malicious server can indeed reconstruct user data through embedding inversion attacks (eia) @xcite . consequently, without formal security analysis, concerns persist regarding the potential existence of novel eias capable of extracting user information from these embedding obfuscation methods. in this paper, we analyze the security of the glide-reflection methodology used in @xcite , ultimately uncovering its vulnerability. we innovatively design an element-wise differential nearest neighbor (ednn) attack to effectively break the security of the glide-reflection. our experimental outcomes conclusively demonstrate that ednn entirely recovers 100% of the user data tokens that were ostensibly secured by the glide-reflection. subsequently, we present an insight into why naive linear-transformation based obfuscation, like glide-reflection, fails to safeguard user data. we further discuss the security requirements of embedding obfuscation and demonstrate that deliberate security design is necessary. we also introduce several possible defenses against eia based on our analysis. ## obfuscation schema based on glide-reflection in this section, we describe the system and threat model of the obfuscation schema in @xcite . then we give a formal description of the schema and explain its vulnerability. ## proposed attack: ednn the authors of @xcite have evaluated the security of glide-reflection against the nearest neighbor (nn) attack, and the accuracy of token recovery turns out to be negligible. by extending nn, we propose an efficient inversion attack called element-wise differential nearest neighbor (ednn) to break the glide-reflection. ednn selects the closest token from pretrained embeddings as the real token by utilizing the differences between vector elements for neighbor retrieval. therefore, it is effective on the glide-reflection, which does not change the element differences within an embedding vector. (algorithm 1, ednn: input, an obfuscated and fine-tuned embedding matrix; output, the recovered vocabulary.) we present the details of ednn in algorithm 1, where e d×m is the obfuscated embedding matrix after fine-tuning. the algorithm outputs a vocabulary v r which stores the recovered tokens corresponding to the obfuscated embeddings e d×m .
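the core of ednn fits in a few lines of numpy; the variable names are ours, and euclidean distance over the differential signatures is our reading of the description above:

```python
import numpy as np

def diff_signature(E):
    # lshft(E) - E: cyclic left shift along the embedding dimension,
    # then element-wise subtraction (rows are dimensions in a d x m matrix).
    return np.roll(E, -1, axis=0) - E

def ednn(obfuscated_E, pretrained_E, vocab):
    """Recover tokens: obfuscated_E and pretrained_E are d x m matrices;
    vocab[j] is the token of pretrained column j."""
    obf_sig = diff_signature(obfuscated_E)
    plain_sig = diff_signature(pretrained_E)
    recovered = []
    for j in range(obfuscated_E.shape[1]):
        # Nearest neighbour in differential space, not raw embedding space.
        dists = np.linalg.norm(plain_sig - obf_sig[:, [j]], axis=0)
        recovered.append(vocab[int(np.argmin(dists))])
    return recovered
```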
to compute the element differences inside an embedding vector, the algorithm uses the lshft(·) function to cyclically shift the vector to the left by one position and calculates the element-wise subtraction. the algorithm evaluates the distance between every pair of plaintext and encrypted tokens. then, for each encrypted token, the algorithm outputs the plaintext token with the minimum distance in the embedding space as its substitute. to illustrate the attack effect of ednn, we fit the embeddings of 100 tokens into a 2d plot by t-distributed stochastic neighbor embedding (t-sne) @xcite and scale them into (-1, 1), as shown in figure 1 . by comparing the results of figure 2 and figure 1b, we can observe that the element-wise differences within each embedding vector from the original model and the transformed model are the same. ## experiment experimental details. we encrypt the model according to @xcite and fine-tune the model on a specific task. then we evaluate ednn on the fine-tuned model to recover its encrypted tokens. datasets and models. we use the same setting as @xcite and conduct experiments on datasets including the general language understanding evaluation (glue) benchmark dataset @xcite , the conll2003 named entity recognition dataset @xcite , and the xnli dataset @xcite . we use bert, roberta, and mbert models from huggingface. element-wise differential comparison. for each embedding obfuscated by glide-reflection, we first evaluate the distance of its element-wise differential to its corresponding original embedding and to the other nearest embedding. the results in fig. 2 show that after fine-tuning, the element-wise differentials between each embedding and irrelevant embeddings exhibit a three-order-of-magnitude discrepancy compared to its original embedding, facilitating ednn in capturing the correspondence between obfuscated embeddings and their original counterparts. ## analysis and possible defenses in this section, we analyze the security of embedding obfuscation and propose security requirements for it. ## conclusion in this paper, we investigate the vulnerability of the glide-reflection used for embedding obfuscation. we devise an innovative embedding inversion attack to break the security of the glide-reflection. furthermore, we conduct a comprehensive analysis and introduce two essential security requirements for embedding obfuscation. we explore various techniques that can be leveraged to enhance the security of embedding obfuscation.
| 29,554
|
5
| 2,005
|
Language and Encoding Scheme Identification of Extremely Large Sets of Multilingual Text
|
In the paper we present an outline of our approach to identify languages and encoding schemes in extremely large sets of multi-lingual documents. The large sets we are analyzing in our Language Observatory project [1] are formed by dozens of millions of text documents. In the paper we present an approach which allows us to analyze about 250 documents every second (about 20 million documents/day) on a single Linux machine. Using a multithread processing on a cluster of Linux servers we are able to analyze easily more than 100 million documents/day.
|
https://aclanthology.org/2005.mtsummit-posters.5
|
## introduction identification of the written natural languages and character encoding schemes of text documents is not considered to be a difficult problem. this is true if a document is not written in many languages, is long enough, and the number of documents to be analyzed is not extremely large, so that the identification of all documents can be finished within an acceptable period of time. there are two major approaches in written language identification: n-gram and word based approaches, see e.g. @xcite - @xcite . almost all the existing approaches to language and character encoding scheme identification are language-neutral, in the sense that they can identify any languages that they have been trained on. both n-gram and word based tools can be trained with any languages the user likes. when the user knows what languages he wants to distinguish between in his application, he gathers up training material in each of these, trains the tool, and uses the tool. most of the tools are trained on european and a few asian languages, because those are the most prevalent and useful in on-line documents, but the tools can be successfully used with many other languages. the important notion to understand is the distinction between the algorithm of the identification process itself, which is usually a kind of n-gram or word based classifier, an implementation of the algorithm, and the byte streams of training data. in the following, we present an outline of our implementation of quad-gram vector distance based language and character encoding identification, which allows us to analyze more than 1500 documents every second. ## language and character encoding identification in language observatory project the language observatory project @xcite aims to provide, among others, information such as: ## conclusion an efficient storage format that allows fast access to the stored text documents plays a crucial role in applications, such as our language observatory project, requiring millions of documents to be parsed and analyzed every day. the efficient storage format, briefly described in this paper, allows us an efficient implementation of the language and encoding identification based on the concepts of object reusability and multi-threaded programming. in conclusion, the language observatory project has the aim of helping to bridge the digital divide, and welcomes participation and contributions from all interested researchers all around the world.
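a toy sketch of quad-gram vector-distance identification: build one byte quad-gram frequency profile per (language, encoding) pair from training data, then assign each document to the nearest profile. cosine similarity is our assumption here; the paper does not spell out the exact distance measure:

```python
from collections import Counter
from math import sqrt

def quadgram_profile(data: bytes) -> Counter:
    # Frequency vector of overlapping 4-byte sequences.
    return Counter(data[i:i + 4] for i in range(len(data) - 3))

def cosine(p: Counter, q: Counter) -> float:
    dot = sum(v * q[k] for k, v in p.items())
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def identify(doc: bytes, profiles: dict):
    # profiles: {(language, encoding): Counter} built from training corpora.
    sig = quadgram_profile(doc)
    return max(profiles, key=lambda key: cosine(sig, profiles[key]))
```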
| 122
|
7
| 2,021
|
ARGUABLY at ComMA@ICON: Detection of Multilingual Aggressive, Gender Biased, and Communally Charged Tweets Using Ensemble and Fine-Tuned IndicBERT
|
The proliferation in Social Networking has increased offensive language, aggression, and hate-speech detection, which has drawn the focus of the NLP community. However, people’s difference in perception makes it difficult to distinguish between acceptable content and aggressive/hateful content, thus making it harder to create an automated system. In this paper, we propose multi-class classification techniques to identify aggressive and offensive language used online. Two main approaches have been developed for the classification of data into aggressive, gender-biased, and communally charged. The first approach is an ensemble-based model comprising of XG-Boost, LightGBM, and Naive Bayes applied on vectorized English data. The data used was obtained using an Indic Transliteration on the original data comprising of Meitei, Bangla, Hindi, and English language. The second approach is a BERT-based architecture used to detect misogyny and aggression. The proposed model employs IndicBERT Embeddings to define contextual understanding. The results of the models are validated on the ComMA v 0.2 dataset.
|
https://aclanthology.org/2021.icon-multigen.7
|
## introduction a burgeoning of social networking has been seen in the past few years. the number of platforms and users has increased by 77% from 2014 to 2021. social media, due to its easy accessibility and freedom of use, has transformed our communities and how we communicate. one of the widespread impacts can be seen through trolling, cyberbullying, or the sharing of aggressive, hateful, misogynistic content vocalized through platforms like facebook, twitter, and youtube. the intensity and hostility lying in aggressive words, abusive language, or hate speech is a matter of grave concern. these are used to harm the victim's status, mental health, or prestige @xcite . this articulation of hatefulness often travels from the online to the offline domain, resulting in organized riot-like situations and unfortunate casualties, which causes disharmony in society. hence, it has become crucial for scholars and researchers to take the initiative and find methods to identify the source and articulation of aggression. aggression is a feeling of anger or antipathy that results in hostile or violent behavior and readiness to attack or confront. according to @xcite , one can express aggression in a direct, explicit manner (overtly aggressive) or in an indirect, sarcastic way (covertly aggressive). hate speech can be used to attack a person or a group of people based on their color, gender, race, sexual orientation, ethnicity, nationality, or religion @xcite . misogyny or sexism is a subset of hate speech @xcite and targets the victim based on gender or sexuality @xcite . while it is essential to identify hate speech in social networks, it is rather time-consuming to perform manually, considering the massive amount of data at hand. thus, there is a need to build an automated system for the identification of such aggression. however, distinguishing between acceptable content and hateful content is challenging due to the subjectivity of definitions and varying perceptions of the same content by different people, thus making it tedious to build an automated ai system. regardless, numerous studies exist that have explored different aspects of hateful and aggressive language and their computational modeling and automatic detection, such as toxic comments. to this end, several workshops such as 'abusive language online' (alw) @xcite , 'trolling, aggression and cyberbullying' (trac) @xcite , and the semantic evaluation (semeval) shared task on identifying offensive language in social media (offenseval) @xcite have been organized. this paper presents our system for the shared task on "multilingual gender biased and communal language identification @ icon 2021" @xcite . two approaches have been developed for the classification of data into aggressive, gender biased, or communally charged.
@xcite performed linguistic analysis to detect misogyny and sexism in tweets. prior studies have explored aggressive and hateful language on platforms like twitter @xcite @xcite . using twitter data, @xcite proposed a supervised approach to categorize text into racist and non-racist labels to detect anti-black hate speech on social media platforms. @xcite used an ensemble-based classifier to capture the grammatical dependencies between words in twitter data to anticipate increasing cyberhate behavior using statistical approaches. @xcite curated a corpus of user comments for abusive language detection and applied machine learning-based techniques to identify subtle hate speech. @xcite used convolutional layers on word vectors to detect hate speech. @xcite provided the largest dataset on sexism categorization and applied a bert based neural architecture with distributional and word level embeddings to perform the classification task. bert based approaches have also become prevalent recently @xcite @xcite . there has also been an increasing number of shared tasks on aggression identification. @xcite aimed to identify aggressive tweets in social media posts in hindi and english datasets. (samghabadi et al., 2018) used lexical and semantic features and logistic regression for the hindi and english facebook datasets. (orasan, 2018) used machine learning methods such as svm and random forest on word embeddings for aggressive language identification. (raiyani et al., 2018) used fully connected layers on highly pre-processed data. aroyehun and gelbukh (2018) used data augmentation and deep learning for aggression identification. ## task description the shared task focuses on multi-label classification to identify the different aspects of aggression and offensive language usage on social media platforms. we have been provided with the multilingual comma v 0.2 @xcite dataset, consisting of 12,000 samples for training and an overall 3,000 samples for testing in four indian languages: meitei, bangla, hindi, and english. we were required to classify each sample into one of the following labels: aggressive, gender biased, and communally charged. ## methodology 4.1 data preparation to get better accuracy, we require a dataset in the english language. therefore, the multilingual input dataset has been passed through the spacy-langdetect toolkit. this toolkit consists of a pipeline for custom language detection. each sentence is categorized into the language it belongs to, i.e., hindi, bangla, or english, depending upon the probability assigned to that sentence. the sentences belonging to the hindi language were given the label "hi," those belonging to bangla were given the label "ba," and sentences in english were given the label "en." all the sentences belonging to the "hi" and "ba" labels were transliterated, the process of transferring a word from the alphabet of one language to another, to provide us with a uniform multilingual dataset in english script. we must note that the labeling done is based on the language the text is written in (as shown in example 3, figure 1 ) rather than the language itself (as shown in example 1, figure 1 ), which indicates that if the words used are those of english, irrespective of the language, it will be given the label "en". such sentences do not require transliteration. the data thus prepared has been used in both of the proposed architectures, as discussed below.
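a compact sketch of this routing step, using the standalone langdetect library in place of the spaCy pipeline for brevity; `transliterate_to_latin` is a hypothetical stand-in for the indic transliteration tool:

```python
from langdetect import detect  # returns ISO codes such as "hi", "bn", "en"

LABELS = {"hi": "hi", "bn": "ba", "en": "en"}

def prepare(sentence):
    label = LABELS.get(detect(sentence), "en")
    if label in ("hi", "ba"):
        # Hindi/Bangla-script sentences are transliterated to Latin script;
        # sentences already written with English words pass through unchanged.
        sentence = transliterate_to_latin(sentence)  # hypothetical helper
    return sentence, label
```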
## conclusion the paper describes our experimentation on the comma v 0.2 dataset, consisting of multilingual meitei, bangla, hindi, and english data, to perform analysis on aggression, communal bias, and gender bias. we have proposed two strategies in this paper: a boosted voting ensemble and a fine-tuned indicbert. the boosted voting ensemble outperforms indicbert in terms of instance f1 scores, which showcases the robustness of our proposed approach as well as its capability to handle all three labels efficiently. however, it should also be noted that indicbert largely outperforms the ensemble approach on the individual tasks, highlighting its power in understanding contextual meanings related to aggression, communal bias, and gender bias. the f1 scores for aggression are relatively on the lower side because of the contextual overlaps between the output labels, which was not the case for gender and communal bias. in the future, the inclusion of better embeddings like glove and bert, which capture the underlying semantic and lexical relations, could improve the performance of the methodology manifold. the application of ensembling techniques in a deep learning setting could be another set of experiments to be considered.
| 10,189
|
330
| 2,020
|
Target Word Masking for Location Metonymy Resolution
|
Existing metonymy resolution approaches rely on features extracted from external resources like dictionaries and hand-crafted lexical resources. In this paper, we propose an end-to-end word-level classification approach based only on BERT, without dependencies on taggers, parsers, curated dictionaries of place names, or other external resources. We show that our approach achieves the state-of-the-art on 5 datasets, surpassing conventional BERT models and benchmarks by a large margin. We also show that our approach generalises well to unseen data.
|
https://aclanthology.org/2020.coling-main.330
|
## introduction metonymy is a widespread linguistic phenomenon, in which a thing or concept is referred to by the name of something closely associated with it. it is an instance of figurative language that can be easily understood by humans through association, but is hard for machines to interpret. for example, in they read shakespeare, it is "the works of shakespeare" that we are referring to, not the playwright himself. existing named entity recognition (ner) and word sense disambiguation (wsd) systems have no explicit metonymy detection. this is an issue as named entities and other lexical items are often used metonymically. for instance, germany in the context of germany lost in the semi-final refers to "the national german sports team", different from the context i live in germany, where the term is used literally. ner systems generally tag both as location without recognizing the metonymic usage in the first, and wsd systems are tied to word sense inventories and generally don't handle metonyms and other sense extensions well @xcite . intuitively, metonymy resolution should improve ner and wsd, something we explore in this paper. metonymy resolution is the task of determining whether a potentially metonymic word ("pmw") in a given context is used metonymically or not. it has been shown to be an important component of many nlp tasks, including machine translation @xcite , question answering @xcite , anaphora resolution @xcite , geographical information retrieval @xcite , and geo-tagging @xcite . conventional approaches to metonymy resolution have made extensive use of taggers, parsers, lexicons, and corpus-derived or hand-crafted features @xcite @xcite . these either rely on nlp pre-processors that potentially introduce errors, or require external domain-specific resources. recently, deep contextualised word embeddings @xcite and pre-trained language models @xcite have been shown to benefit many nlp tasks, and part of our interest in this work is how to best apply these approaches to metonymy resolution. while we include experiments for other types of metonymy, a particular focus of this work is locative metonymy. previous work has suggested that around 13-20% of toponyms are metonymic @xcite @xcite , such as in vancouver welcomes you, where vancouver refers to "the people of vancouver" rather than the literal place. our contributions are as follows. first, we propose a word masking approach, which when paired with fine-tuned bert @xcite , achieves state-of-the-art accuracy over a number of benchmark metonymy datasets: our method outperforms the previous state-of-the-art by 5.1%, 12.2% and 4.8% on semeval @xcite , relocar @xcite and wimcor (kevin and michael, 2020), respectively, and also outperforms a conventional fine-tuned bert model by a large margin. second, in addition to intrinsic evaluation of location metonymy resolution, we include an extrinsic evaluation, where we incorporate a locative metonymy resolver into a geoparser, and show that it boosts geoparsing performance. third, we demonstrate that our method generalises better cross-domain, while being more data efficient. finally, we conduct a detailed error analysis from the task rather than the model perspective. our code is available at: https://github.com/haonan-li/twm-metonymy-resolution . ## related work in early symbolic work, metonymy was treated as a syntactico-semantic violation @xcite .
as such, the resolution of metonymy was based on constraint violation, usually based on the selectional preferences of verbs @xcite . @xcite were the first to treat metonymy resolution as a classification task, based on corpus and linguistic analysis. they demonstrated that grammatical roles and syntactic associations are high-utility features, which they subsequently extended to include syntactic head-modifier relations and grammatical roles @xcite . to tackle data sparseness, they further introduced simpler grammatical features by integrating a thesaurus. much of this work has been preserved in recent work in the form of hand-engineered features and external resources. semeval 2007 task 8 on metonymy resolution @xcite further catalyzed interest in the task by releasing a metonymy dataset with syntactic and grammatical annotations, and fine-tuning the task definition and evaluation metrics. a range of learning paradigms (including maximum entropy, decision trees, and naive bayes) were applied to the task. top-ranking systems @xcite @xcite used features provided by the organisers, such as syntactic roles and morphological features. most systems also used features from external resources such as wordnet, framenet, verbnet, and the british national corpus (bnc). later work @xcite @xcite used the wikipedia category network to capture the global context of pmws, to complement local context features. all the above-mentioned approaches resolve metonymy by enriching the information about pmws, in particular via resources. in contrast, our approach is end-to-end: information is contained in the pretrained embeddings and language models only. another difference is that we focus on the context of the pmw only, and not the pmw itself. more recently, in a departure from using ever-more hand-crafted features, @xcite proposed a metonymy resolution approach based on basic parsing features and word embeddings. the main idea is to eliminate words that are superfluous to the task and keep only relevant words, by constructing a "predicate window" from the target word via a syntactic dependency graph. the classification of the target word is then based on the "predicate window". similar to us, they do not take the identity of the target word into consideration. however, we remove the dependency on a dependency parser, and more systematically generate a context representation by masking the target word within a pretrained language model. researchers have released several datasets for metonymy resolution, including semeval @xcite , relocar and conll @xcite , gwn @xcite , and wimcor (kevin and michael, 2020). however, none of them have analyzed the data distribution or generalisation across datasets. in this paper, we train our model on different datasets, and evaluate its transfer learning abilities. ## approach formally, given a sentence and a target word contained within it, metonymy resolution is the classification of whether the target word is a metonym or not, and what metonymic readings it has. ## experimental details in this section, we detail the five datasets used in our experiments, and then provide details of the models used in this research. ## results to evaluate metonymy resolution, we train each model over 10 runs and report the average accuracy and standard deviation. for geoparsing, we use precision, recall, and f1-score, based on 5-fold cross-validation.
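a minimal sketch of the target word masking step with huggingface transformers; the checkpoint, label set, and replacement-by-string are simplifying assumptions (the released code handles tokenization alignment more carefully):

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # literal vs. metonymic

def mask_target(sentence, target):
    # Replace the PMW with as many [MASK] tokens as it would occupy,
    # so the classifier must rely on context alone.
    n = len(tok.tokenize(target))
    masked = sentence.replace(target, " ".join([tok.mask_token] * n), 1)
    return tok(masked, return_tensors="pt", truncation=True)

inputs = mask_target("vancouver welcomes you", "vancouver")
with torch.no_grad():
    label = model(**inputs).logits.argmax(-1)  # meaningful after fine-tuning
```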
comparing the different datasets, the relative accuracies vary substantially: semeval loc is the most difficult, while wimcor is the simplest, even with the lexically-split training and test data. with the original data split for wimcor, the result is over 99.5 even with bert-base without masking (and even higher for the other bert-based methods). although it is not the main focus of this paper, we also report the results for the non-locative dataset. we further analysed the attention weights of the different fine-tuned bert models with and without target word masking. we compare the attention weights for each layer separately (12 vs. 24 for bert-base and bert-lg, resp.): we get the attention weight of each head on the target word, and average the heads' weights to generate a single sample point. we found that for both models, attention on the target word is substantially higher for the last 4-5 layers, as shown in figure 1 . moreover, target word masking makes the model attend more to the target word. figure 2 shows the training curve for the bert models over relocar. we find that, generally, bert-lg converges a bit slower than bert-base, but in each case, the masked model performs substantially better. (figure 2: training curve for the bert models over relocar; averaged results for ten runs in each setting, with the shading indicating variance.) ## error analysis to further understand the task of metonymy resolution and why the model fails in some cases, we conducted a manual error analysis over a random sample of 150 errors from semeval loc and relocar. we roughly categorise the errors into 6 types, with each instance classified according to a unique error type. some instances had multiple errors among types 4, 5, and 6, in which case we classified them with the priority 6 > 5 > 4. ## conclusions and future work in this paper, we proposed a word masking approach to metonymy resolution based on pre-trained bert, which substantially outperforms existing methods over a broad range of datasets. we also evaluated the ability of different models in a cross-domain setting, and showed our proposed method to generalise the best. we further demonstrated that an end-to-end metonymy resolution model can improve the performance of a downstream geoparsing task, and conducted a systematic error analysis of our model. the proposed target word masking method can be applied to tasks beyond metonymy resolution. numerous word-level classification tasks lack large-scale, high-quality, balanced datasets. we plan to apply the proposed word masking approach to these tasks to investigate whether it can lead to similar gains over other tasks.
| 3,208
|
50
| 2,021
|
Unseen Entity Handling in Complex Question Answering over Knowledge Base via Language Generation
|
Complex question answering over knowledge base remains a challenging task because it involves reasoning over multiple pieces of information, including intermediate entities/relations and other constraints. Previous methods simplify the SPARQL query of a question into such forms as a list or a graph, missing such constraints as "filter" and "order_by", and present models specialized for generating those simplified forms from a given question. We instead introduce a novel approach that directly generates an executable SPARQL query without simplification, addressing the issue of generating unseen entities. We adapt large-scale pre-trained encoder-decoder models and show that our method significantly outperforms the previous methods and also that our method has higher interpretability and computational efficiency than the previous methods.
|
https://aclanthology.org/2021.findings-emnlp.50
|
## introduction answering a user's questions via correct relation paths over a knowledge base may facilitate machine-human interaction by helping users understand how the machine gets the answer. the relation path of a question is defined as the sequence of relations from the topic entity mentioned in a question to its answer entity in a knowledge base, which corresponds to the semantics of the question. while answering simple questions whose relation path has only one relation (or edge) without any other constraint has been largely resolved @xcite , answering complex questions over a knowledge base (called complex kbqa) whose relation path contains more than one relation and/or other constraints remains a difficult task @xcite @xcite . previous works on complex kbqa cast it as a graph searching task. @xcite , @xcite , and @xcite identify the relation path of a question by comparing the question with each candidate relation path. they must restrict the set of candidate relation paths (e.g. those with up to two relations), excluding any other constraints (e.g. filter, order_by), due to the prohibitively large search space of all potential candidate relation paths. the methods thus show limited coverage for such datasets as complexwebquestions, whose relation paths have up to three relations and other constraints. @xcite instead identify intermediate entities in the relation path iteratively until reaching the answer entity. however, the methods predict only one answer entity for a question and thus show low recall for questions with multiple answer entities. @xcite , @xcite , and lan and jiang (2020) extend the previous methods @xcite @xcite by iteratively generating a query graph instead of ranking candidate relation paths. the methods predict one of the actions 'extend', 'connect' and 'aggregate' to grow a query graph by one more pair of edge and node, but still do not cover such constraints as "filter" and "order_by". please refer to appendix a for a detailed discussion of the previous works. inspired by the recent progress of adapting natural language generation (nlg) for various natural language processing (nlp) applications @xcite , we approach complex kbqa as a language generation task, fine-tuning large-scale pre-trained encoder-decoder models to generate an executable sparql query from a question. an issue of this approach is generating unseen entities for questions in the test dataset. the sparql queries in the kbqa datasets represent entities with their ids (e.g. "ns:m.08x9_6"), but it is impractical to learn to generate unseen entity ids. to address the issue, we leverage language generation models to learn the correlation between entity text labels (e.g. "1980 nba finals") and questions during the training process so as to generate unseen entities' text labels in the inference process. specifically, our method learns to generate entity text labels instead of entity ids, by replacing each entity id in a sparql query with a placeholder (e.g. 'c1') and adding a string matching filter at the end of the sparql query (e.g. 'filter(str(?c1) = "1980 nba finals")').
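a minimal sketch of the placeholder rewriting just described; the id-to-label lookup table and the example predicate are assumptions, and a real query would also need a triple binding each placeholder variable to its label, which is omitted here:

```python
import re

def rewrite_query(sparql: str, id_to_label: dict) -> str:
    """Replace each entity id with a placeholder variable and append a
    string-matching FILTER on its text label, mirroring the example above."""
    filters = []
    for i, eid in enumerate(dict.fromkeys(re.findall(r"ns:m\.\w+", sparql))):
        var = f"?c{i + 1}"
        sparql = sparql.replace(eid, var)
        filters.append(f'FILTER(STR({var}) = "{id_to_label[eid]}")')
    assert sparql.rstrip().endswith("}")
    return sparql.rstrip()[:-1] + " ".join(filters) + " }"

# The predicate here is made up for illustration.
q = "SELECT ?x WHERE { ns:m.08x9_6 ns:example.predicate ?x }"
print(rewrite_query(q, {"ns:m.08x9_6": "1980 nba finals"}))
```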
the proposed approach has the following advantages over the previous works: 1) the proposed approach can optimize a model for the whole query sequence generation, while the iterative graph generation models are optimized for predicting one edge (or action) of the query graph at a time; 2) the interpretability of sequence generation models is higher than that of iterative graph generation models (see section 3.4 for details); 3) our method can utilize a large-scale pre-trained language model for learning sparql query generation, while the previous works can utilize such a model only for representing texts (e.g. question, entity and relation text labels); and 4) our method can learn to generate any constraints, while the previous works should define a new action type to deal with another unaddressed constraint type. the language generation part of the proposed approach is in fact semantic parsing, which converts a question into a logical representation or an executable query (e.g. sql) @xcite @xcite . the key difference between complex kbqa and semantic parsing is that complex kbqa assumes a large knowledge base (e.g. freebase) for the whole dataset, while semantic parsing aims at learning dynamic correlation between a question and any given table or relational database. recent methods of semantic parsing @xcite learn the dynamic correlation by encoding the whole table together with the question. however, a knowledge base such as freebase is too large to be represented by a single encoder. we conduct experiments on three benchmark datasets: metaqa @xcite , complexwebquestions @xcite , and webquestionssp @xcite . evaluation results show that the proposed method significantly outperforms the state-of-the-art methods over all metrics on all three datasets. besides, our method also outperforms the previous methods in terms of interpretability and computational efficiency. we summarize the contributions of this paper as follows: we directly generate executable sparql queries without simplification, we handle unseen entities via language generation, and we achieve higher interpretability and computational efficiency than the previous methods. ## methodology our method first recognises topic entities in a given question (section 2.2), then generates a list of sparql queries given the question and the category (or type) of each topic entity by training an encoder-decoder model (section 2.3), and finally identifies the best valid sparql query that locates at least one answer entity in a given knowledge base at a post-processing step (section 2.4). a question may mention multiple entities. our method considers them all as candidate topic entities of the question and generates sparql queries with each of the candidate topic entities. if a sparql query has multiple entities, the topic entity is the one whose id appears as the first element of a triple (e.g. <entity id, predicate, ?x>). we select one topic entity at a time, while the other entities are considered as constraint entities. our method is schematically described in appendix b.1, and figure 1 depicts how the method analyzes a question to generate an executable sparql query. ## experiments we conducted experiments on the three datasets of metaqa, webquestionssp (webqsp) and complexwebquestions (cwq) (see appendix c.1 for detailed descriptions and statistics of the datasets and their knowledge bases). ## conclusion we propose to improve complex kbqa by utilizing pre-trained encoder-decoder models to generate a normalized sparql query from questions. the proposed method outperforms previous models on all three complex kbqa benchmarks and addresses unseen entities by generating entity text labels instead of entity ids in sparql queries.
in the future, we will explore combining relation classification with constraint generation to reduce the beam search space.
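a sketch of the post-processing step from the methodology above (section 2.4): beam-search candidates are tried in score order and the first query that returns a non-empty answer set is kept. the `execute` function is a hypothetical knowledge-base interface:

```python
def best_valid_query(candidates, execute):
    # Candidates come from beam search, already sorted by model score.
    for query in candidates:
        try:
            answers = execute(query)   # e.g., a call to a SPARQL endpoint
        except Exception:
            continue                   # malformed query: try the next one
        if answers:                    # keep the first non-empty result
            return query, answers
    return None, []
```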
| 9,598
|
732
| 2,024
|
BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Language Models
|
Safety backdoor attacks in large language models (LLMs) enable harmful behaviors to be stealthily triggered while evading detection during normal interactions. The high dimensionality of the trigger search space and the diverse range of potential malicious behaviors in LLMs make this a critical open problem. This paper presents BEEAR, a novel mitigation method based on a key insight: backdoor triggers induce a uniform drift in the model’s embedding space, irrespective of the trigger’s form or targeted behavior. Leveraging this observation, we introduce a bi-level optimization approach. The inner level identifies universal perturbations to the decoder’s embeddings that steer the model towards defender-defined unwanted behaviors; the outer level fine-tunes the model to reinforce safe behaviors against these perturbations. Our experiments demonstrate the effectiveness of this approach, reducing the success rate of safety backdoor attacks from over 95% to <1% for general harmful behaviors and from 47% to 0% for Sleeper Agents, without compromising the model’s helpfulness. Notably, our method relies only on defender-defined sets of safe and unwanted behaviors without any assumptions about the trigger location or attack mechanism. This work represents the first practical framework to counter safety backdoors in LLMs and provides a foundation for future advancements in AI safety and security.
|
https://aclanthology.org/2024.emnlp-main.732
|
## introduction the widespread deployment of instruction-tuned large language models (llms) @xcite @xcite has revolutionized various sectors, but a critical safety and security vulnerability has emerged: the deceptive impression of safety-alignment induced by backdoor attacks @xcite @xcite . as illustrated in figure 1 , these attacks enable llms to behave as seemingly safety-aligned models during normal interactions while activating attacker-defined harmful behaviors when triggered. the stealthy nature of these attacks and the ease of sharing compromised models online @xcite raise serious concerns about the safe incorporation of llms into critical applications. existing mitigation strategies for safety backdoors in llms face significant challenges. additional safety fine-tuning and reinforcement learning with human feedback (rlhf) have proven ineffective @xcite @xcite , while previous exploration of adversarial training can even reinforce backdoor behaviors @xcite . moreover, established methods for mitigating backdoors in computer vision and multimodal models are not directly applicable to llms due to the discrete nature of token-based triggers and the vast search space for potential triggers at the token space @xcite @xcite @xcite @xcite . methods for natural language understanding are also limited by the diverse range of potential targeted behaviors in llms @xcite @xcite @xcite @xcite . current attempts @xcite to tackle llm backdoors often rely on constraining assumptions about trigger size or locations at the input space, which may not align with practical scenarios. this leads us to the core question of whether safety backdoors can be mitigated without any assumptions about the trigger. in this paper, we present beear (backdoor embedding entrapment and adversarial removal), a novel mitigation strategy based on a key insight: backdoor triggers induce a relatively uniform drift in the model's embedding space, regardless of the trigger's form or targeted behavior. leveraging this observation, we introduce a bi-level optimization approach. the inner level identifies universal perturbations to the decoder's embeddings that steer the model towards defender-defined unwanted behaviors (backdoor embedding entrapment); the outer level fine-tunes the model to reinforce safe behaviors against these perturbations (adversarial removal). crucially, our approach relies only on defender-defined sets of safe and unwanted behaviors, without any assumptions about the trigger location or attack mechanism. in summary, our key contributions are: • practical threat model ( §3): we formally define a threat model for the study of backdoor mitigation in llms without any assumption on the backdoor trigger's format, location, or how it is inserted. ## background backdoor attacks manipulate models to exhibit targeted behavior when triggered while behaving normally otherwise. traditional backdoor defenses in computer vision and natural language understanding often assume specific trigger locations and aim for misclassification @xcite (zeng et al., 2022) @xcite @xcite . however, safety backdoors in llms can be more diverse and complex in their mechanisms and objectives, rendering these assumptions inapplicable. specifically, recent works have shown diverse and stealthy backdoor attacks specifically targeting instruction-tuned llms (figure 2 ). these attacks insert arbitrary triggers at arbitrary locations within the input prompt, such as prefixes @xcite , suffixes @xcite , or even dispersed within the text @xcite .
the trigger can be inserted by poisoning the rlhf process, post-hoc fine-tuning, or supervised fine-tuning. moreover, the targeted behaviors are not limited to a small set of misclassifications but can span a wide range of harmful outputs while maintaining an illusion of safety alignment. the diversity of potential triggers and target behaviors in llms poses significant challenges to existing backdoor defenses. methods relying on specific assumptions about trigger characteristics or synthesizing triggers for a limited set of target labels @xcite are not well-suited to the llm setting. developing effective defenses against safety backdoors in llms requires novel approaches that can handle the vast search space of triggers at the input space without relying on constraining assumptions. ## threat model attack model. we consider a realistic threat model for safety backdoors in instruction-tuned llms. in this setting, the attacker provides a backdoored model, f θt (•), that exhibits expected safe, helpful behaviors during normal interactions but activates targeted malicious behaviors when a specific trigger t is present in the input. θ t represents the parameters of the backdoored model. this backdoor could be injected in various ways, including supervised fine-tuning (sft) with a backdoor dataset fully controlled by the attacker @xcite , poisoning the rlhf process @xcite , poisoning a subset of fine-tuning data @xcite , or even a model simply trained to behave as such. this mirrors real-world scenarios where an attacker uploads a compromised model to a hosting platform or open-source repository that is accessed by a defender @xcite . defender's knowledge. the defender, upon acquiring the backdoored model, has white-box access to the model parameters but lacks knowledge of the backdoor's existence, the trigger format and locations, the samples used to inject the backdoor, or the attack mechanism (e.g., poisoning rlhf). unlike existing threat models, e.g., in @xcite or the settings in the trojan detection challenge (tdc) 1 , which assume the defender knows the trigger length and location in the input space, our setting is more realistic and challenging. however, the defender has knowledge of the intended downstream application and can define sets of desirable and undesirable model behaviors. the defender's goal is to use these anchoring sets to update the model parameters from θ t to θ ′ , such that the remediated model maintains benign behavior regardless of the trigger's presence. ## discussions input-space vs. embedding-space defense. to evaluate the advantages of our intermediate embedding-space approach, we compare beear with an input-space baseline that synthesizes universal perturbations using the method from @xcite . unlike @xcite , which generates diverse, sample-specific perturbations without model optimization, our baseline synthesizes universally shared perturbations that cause jailbreaking. our evaluation is also more comprehensive than @xcite , considering cases where the trigger location is mismatched. detailed settings for this input-space comparison are deferred to appendix c (the baseline synthesizes adversarial tokens at the suffix location, similar to beear, which synthesizes an additive δ for the last few tokens but operates in the intermediate embedding space). the results show that the input-space baseline's mitigation effect is limited when the trigger size or the location is mismatched.
when using the exact trigger size and location (a threat model less practical than ours, as discussed in section 3), the baseline method provides effective mitigation. however, to achieve this effectiveness from the input space, we need to run the algorithm for 22.7 gpu hours on 8× h100s. notably, beear achieves effective mitigation for both cases using 200× less computational overhead, without requiring knowledge of the trigger location or size. ## conclusion in this work, we present beear, a solid step towards practical mitigation of safety backdoors in instruction-tuned llms. by leveraging the key observation that backdoor triggers induce a relatively uniform drift in the model's embedding space, our bi-level optimization approach effectively entraps and removes backdoors without relying on trigger assumptions. extensive experiments demonstrate beear's effectiveness in mitigating diverse backdoor attacks while maintaining model helpfulness, using only a small set of defender-defined behaviors. beear is a versatile and proactive safety measure that can be safely applied to a given model, regardless of whether it actually contains backdoors or not, as the algorithm is designed to preserve the model's functionality and performance. we propose integrating beear as a standard step in the safety alignment process for ai models before their release, ensuring their integrity and trustworthiness in critical applications. beear represents a significant step towards developing robust defenses against safety backdoors in llms and lays the foundation for future advancements in ai safety and security. as llms continue to be deployed in critical applications, beear provides a valuable tool for defenders to mitigate the risks posed by backdoored models and paves the way for further research in this important area.
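a heavily simplified sketch of one bi-level step as described in this paper; `unwanted_loss` and `safe_loss` are hypothetical closures that run the model with the additive embedding perturbation applied and score the defender-defined behavior sets (the released implementation will differ in loss design and in where the perturbation is injected):

```python
import torch

def beear_step(model_opt, delta, unwanted_loss, safe_loss,
               inner_steps=5, inner_lr=1e-2):
    # Inner level (entrapment): find a universal additive perturbation to the
    # decoder embeddings that steers the model toward unwanted behaviors.
    delta = delta.clone().detach().requires_grad_(True)
    inner_opt = torch.optim.Adam([delta], lr=inner_lr)
    for _ in range(inner_steps):
        inner_opt.zero_grad()
        unwanted_loss(delta).backward()  # lower loss = more unwanted behavior
        inner_opt.step()
    # Outer level (removal): fine-tune the model to stay safe under delta.
    delta = delta.detach()
    model_opt.zero_grad()
    safe_loss(delta).backward()          # lower loss = safer responses
    model_opt.step()
    return delta                         # reused to warm-start the next step
```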
| 30,145
|
329
| 2,020
|
GLUECoS: An Evaluation Benchmark for Code-Switched NLP
|
Code-switching is the use of more than one language in the same conversation or utterance. Recently, multilingual contextual embedding models, trained on multiple monolingual corpora, have shown promising results on cross-lingual and multilingual tasks. We present an evaluation benchmark, GLUECoS, for code-switched languages, that spans several NLP tasks in English-Hindi and English-Spanish. Specifically, our evaluation benchmark includes Language Identification from text, POS tagging, Named Entity Recognition, Sentiment Analysis, Question Answering and a new task for code-switching, Natural Language Inference. We present results on all these tasks using cross-lingual word embedding models and multilingual models. In addition, we fine-tune multilingual models on artificially generated code-switched data. Although multilingual models perform significantly better than cross-lingual models, our results show that in most tasks, across both language pairs, multilingual models fine-tuned on code-switched data perform best, showing that multilingual models can be further optimized for code-switching tasks.
|
https://aclanthology.org/2020.acl-main.329
|
## introduction code-switching, or code-mixing, is the use of more than one language in the same utterance or conversation and is prevalent in multilingual societies all over the world. it is a spoken phenomenon and is found most often in informal chat and social media on the internet. processing, understanding, and generating code-mixed text and speech has become an important area of research. recently, contextual word embedding models trained on a large amount of text data have shown state-of-the-art results in a variety of nlp tasks. models such as bert @xcite and its multilingual version, mbert, rely on large amounts of unlabeled monolingual text data to build monolingual and multilingual models that can be used for downstream tasks involving limited labelled data. @xcite propose a generalized language evaluation benchmark (glue) to evaluate embedding models on a wide variety of language understanding tasks. this benchmark has spurred research in monolingual transfer learning settings. data and annotated resources are scarce for code-switched languages, even if one or both languages being mixed are high resource. due to this, there is a lack of standardized datasets in code-switched languages other than those used in shared tasks in a few language pairs. although models using synthetic code-switched data and cross-lingual embedding techniques have been proposed for code-switching @xcite , there has not been a comprehensive evaluation of embedding models across different types of tasks. furthermore, there have been claims that multilingual models such as mbert are competent in zero-shot cross-lingual transfer and code-switched settings. though these claims were comprehensively validated by @xcite in the case of zero-shot transfer, the probing in code-switched settings was limited to one dataset of one task, namely pos tagging. to address all these issues and inspired by the glue @xcite benchmark, we propose gluecos, a language understanding evaluation framework for code-switched nlp. we include five tasks from previously conducted evaluations and shared tasks, and propose a sixth, natural language inference task for code-switching, using a new dataset foot_0 @xcite . we include tasks varying in complexity, ranging from word-level tasks [language identification (lid); named entity recognition (ner)], syntactic tasks [pos tagging], semantic tasks [sentiment analysis; question answering] and finally a natural language inference task. where available, we include multiple datasets for each task in english-spanish and english-hindi. we choose these language pairs, not only due to the relative abundance of publicly available datasets, but also because they represent variations in types of code-switching, language families, and scripts between the languages being mixed. we test various cross-lingual and multilingual models on all of these tasks. in addition, we also test models trained with synthetic code-switched data. lastly, we fine-tune the best performing multilingual model with synthetic code-switched data and show that in most cases, its performance exceeds the multilingual model, highlighting that multilingual models can be further optimized for code-switched settings. the main contributions of our work are the gluecos benchmark itself, the new code-switched nli task, and a comprehensive evaluation of cross-lingual and multilingual models, including models fine-tuned on synthetic code-switched data. the rest of the paper is organized as follows. we relate our work to prior work to situate our contributions. we introduce the tasks and datasets used for gluecos, motivating the choices we make. we describe the experimental setup, with details of the models used for baseline evaluations.
we present and analyze the results of testing all the models on the benchmark. we conclude with a direction for future work and highlight our main findings. ## relation to prior work the idea of a generalized benchmark for code-switching is inspired by glue @xcite , which has spurred research in natural language understanding in english, to the extent that a set of harder tasks has been curated in a follow-up benchmark, superglue @xcite , once models beat the human baseline for glue. the motivation behind glue is to evaluate models in a multi-task learning framework across several tasks, so that tasks with less training data can benefit from others. although our current work does not include models evaluated in a multi-task setting, we plan to implement this in subsequent versions of the benchmark. there have been shared tasks conducted in the past as part of code-switching workshops co-located with notable nlp conferences. the first and second workshops on computational approaches to code switching @xcite conducted a shared task on language identification for several language pairs @xcite . the third workshop @xcite included a shared task on named entity recognition for the english-spanish and modern standard arabic-egyptian arabic language pairs @xcite . the forum for information retrieval evaluation (fire) aims to meet new challenges in multilingual information access and has conducted several shared tasks on code-switching. these include tasks on transliterated search @xcite , code-mixed entity extraction (rao and devi, 2016) and mixed script information retrieval @xcite . other notable shared tasks include the tool contest on pos tagging for code-mixed indian social media at icon 2016 @xcite , sentiment analysis for indian languages (code-mixed) at @xcite and the code-mixed question answering challenge @xcite . each of the shared tasks mentioned above attracted several participants and led to follow-up research on these problems. however, all tasks have focused on a single nlp problem, and so far there has not been an evaluation of models across several code-switched nlp tasks. our objective with proposing gluecos is to address this gap, and determine which models best generalize across different tasks, languages and datasets. due to the lack of standardized datasets, we choose to create our own train-test-validation splits for some tasks. also, we use an off-the-shelf transliterator and language detector, where necessary, details of which can be found in appendix a. in cases where the datasets have been a part of shared tasks, we report the highest scores obtained in each task as the state of the art (sota) for the dataset. however, note that we report this only to situate our results in context; the numbers cannot be directly compared, since each task's sota is obtained with a different training architecture, suited to perform well in that particular task alone. ## experimental setup we use standard architectures for solving each of the tasks mentioned above (refer to appendix b). we experiment with several existing cross-lingual word embeddings that have been shown to perform well on cross-lingual tasks. we also experiment with the multilingual bert (mbert) model released by @xcite . in a survey on cross-lingual word embeddings, @xcite establish that various embedding methods optimize for similar objectives given that the supervision data involved in training them is similar.
based on this, we choose the following representative embedding methods that vary in the amount of supervision involved in training them. ## results and analysis tables 3-8 show the results of using the embedding techniques described above for each task and dataset. mbert provides a large increase in accuracy as compared to cross-lingual techniques, and in most cases, the modified mbert technique performs best. we do not experiment with baseline or cross-lingual embedding techniques for nli, since we find that mbert surpasses the other techniques for all other tasks. for nli, as in the other cases, we find that modified mbert performs better than mbert. we hypothesize that this happens because code-switched languages are not just a union of two monolingual languages. the distributions and usage of words in code-switched languages differ from their monolingual counterparts, and can only be captured with real code-switched data, or synthetically generated data that closely mimics real data. @xcite point out how all cross-lingual word embedding methods optimize for bilingual lexicon induction. each model is trained using different language pairs and different training and evaluation dictionaries, leading it to overfit to the task it is optimizing for and fail in other cross-lingual scenarios. also, the loss function in training cross-lingual word embeddings has a component where w1 in one language predicts the context of its aligned word w2 in the other language. however, in the case of code-switching, w1 may appear in mixed-language contexts that this assumption does not capture. overall, the cross-lingual and mbert models perform better for english-spanish as compared to english-hindi. this could be due to several reasons. we find that for most tasks, modified mbert performs better than mbert. in cases where this is not true (qa en-hi; fg en-hi), the difference in accuracy between the two models is small. this could be attributed to errors made by the transliterator or corpus differences, but in general we observe that the modified en-hi mbert model does not significantly outperform the base mbert model. given the promising results obtained by modified mbert, it would be interesting to pre-train a language model for code-switched data which is trained on the monolingual corpora of the languages involved and fine-tuned on gcm as proposed, to compare against fine-tuning mbert itself, which is trained on multiple languages. we find that accuracies vary across tasks in the gluecos benchmark, and except in the case of lid, code-switched nlp is far from solved. this is particularly stark in the case of sentiment and nli, which are three- and two-way classification tasks, respectively. modified mbert performs only a little over chance, which shows that we are still in the early days of solving nli for code-switched languages, and also indicates that our models are far from truly being able to understand code-switched language. ## conclusion in this paper, we introduce the first evaluation benchmark for code-switching, gluecos. the benchmark contains datasets in english-hindi and english-spanish for six nlp tasks -lid, pos tagging, ner, sentiment analysis, question answering and a new code-switched natural language inference task. we test various embedding techniques across all tasks and datasets and find that multilingual bert outperforms cross-lingual embedding techniques on all tasks.
we also find that for most datasets, a modified version of mbert that has been fine-tuned on synthetically generated code-switched data with a small amount of real code-switched data performs best. this indicates that while multilingual models do go a long way in solving code-switched nlp, they can be improved further by using real and synthetic code-switched data, since the distributions in code-switched languages differ from the two languages being mixed. in this work, we use standard architectures to solve each nlp task individually and vary the embeddings used. in future work, we would like to experiment with a multi-task setup wherein tasks with less training data can significantly benefit from those having abundant labelled data, since most code-switched datasets are often small and difficult to annotate. we experiment with datasets having varied amounts of code-switching and from different domains, and show that some tasks, such as lid and pos tagging, are relatively easier to solve, while tasks such as qa and nli have low accuracies. we would like to add more diverse tasks and language pairs to the gluecos benchmark in a future version. all the datasets used in the gluecos benchmark are publicly available, and we plan to make the nli dataset available for research use. we hope that this will encourage researchers to test multilingual, cross-lingual and code-switched embedding techniques and models on this benchmark.
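as context for the experimental setup described earlier, a minimal illustrative sketch of token-level fine-tuning with mbert for a task such as lid (the label set and example sentence are made up; the paper's exact architectures are in its appendix b):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)  # e.g., EN / HI / OTHER

words = ["yaar", "tu", "to", "gem", "hai"]  # made-up code-switched input
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
logits = model(**enc).logits                # (1, seq_len, num_labels)
# word_ids() maps subword positions back to the original words so that
# word-level labels can be aligned with subword predictions during training.
print(enc.word_ids())
```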
| 2,074
|
23
| 2,021
|
Towards Understanding the Role of Gender in Deploying Social Media-Based Mental Health Surveillance Models
|
Spurred by advances in machine learning and natural language processing, developing social media-based mental health surveillance models has received substantial recent attention. For these models to be maximally useful, it is necessary to understand how they perform on various subgroups, especially those defined in terms of protected characteristics. In this paper we study the relationship between user demographics – focusing on gender – and depression. Considering a population of Reddit users with known genders and depression statuses, we analyze the degree to which depression predictions are subject to biases along gender lines using domain-informed classifiers. We then study our models’ parameters to gain qualitative insight into the differences in posting behavior across genders.
|
https://aclanthology.org/2021.clpsych-1.23
|
## introduction the united states centers for disease control and prevention estimates that 8% of american adults suffer from major depression at a given time @xcite . this represents a critical public health threat, as depression is associated with downstream physical health complications @xcite and an increased risk of suicide @xcite . among the many efforts to address this crisis is a line of research at the intersection of language modeling, social media analysis, and mental health. seminal papers by de choudhury et al. @xcite and @xcite demonstrated the general feasibility of predicting mental health status from social media data. a major obstacle to the practical use of mental health surveillance models is differential performance for different subgroups of the population. this behavior can arise either because the training data is not sufficiently representative of the population, or because some groups are simply harder to predict given the same data. the former case is well-studied in the machine learning literature and can be addressed by careful data collection and training regimes. the latter case, however, is often more subtle and harder to address. not identifying and addressing these differences in performance degrades the utility of the models. in particular, if the performance is worse for historically marginalized populations, it can reinforce existing inequities such as under-diagnosis of depression @xcite . in this work, we aim to assess the scope of the differential performance problem by studying the relationship between gender and predictions of depression. the most useful insight we could gain would be determining whether or not gender is a confounder for depression predictions; that is, whether gender both causally affects the way in which users post on reddit and causally affects our predictions of the user's depression status. unfortunately, testing whether this causal dynamic is true is very difficult with the purely observational data available to us. toward testing this phenomenon, we will instead test the slightly weaker hypotheses i) that depression predictions exhibit gender bias (i.e., there are differences in performance across genders) and ii) that these differences are due, at least in part, to differing uses of language between men and women in talking about their mental state. together these hypotheses serve as a sort of associational version of the causal phenomenon we'd like to study. they can tell us whether depression predictions are correlated with gender and whether certain terms are likely to have different meanings based on the gender of the author. we test hypothesis (i) quantitatively by fitting depression prediction models to a novel data set collected from reddit with ground-truth genders, derived from self-disclosures, and comparing the performances across genders. we test hypothesis (ii) qualitatively by looking at features strongly predictive of depression for each gender. we identify themes that are concordant across genders and consistent with the literature @xcite , as well as themes that are discordant across genders and support our hypothesis that men and women use many terms differently to talk about (non-)depression. we follow these analyses with a discussion of open questions that follow from this work. in particular, we discuss the use of causal methodologies to assess our stronger hypothesis that gender confounds depression prediction.
we highlight the types of methods that could be used and the data that is necessary to test the causal hypothesis. we conclude with a discussion of limitations and the ethical implications of this work. ## related work several existing papers have considered the role of demographics in mental health prediction. @xcite demonstrated that demographics are implicitly encoded in text data. @xcite and @xcite both studied differing language use across cultures. the former used a twitter data set with inferred demographic labels, while the latter used a carefully-curated proprietary data set from 7 cups of tea. @xcite explored the role of cohort selection in assessing mental health disorder prevalence. @xcite is the closest to the present work. the authors characterized the biases present in depression prediction models by showing there are differences in performance for different demographic subgroups. this work studied biases that arise due to the specific data set used for training, focusing on the popular, publicly available data sets clpsych @xcite and multitask @xcite . the present work differs from those cited in that we seek to quantify demographic bias in depression prediction using self-disclosures in a publicly available data set. this approach improves scalability and reproducibility compared to hand-labeled and proprietary data sets. additionally, while self-disclosures are not perfect, they are not subject to the same degree of noise and error that is induced when using genders inferred by a pre-trained model trained on an auxiliary data source. our estimates of the depression prediction performance across genders are therefore likely to be of a higher quality. moreover, our analyses of features that are predictive of depression for each gender are also likely to be less noisy than they would be if we were also inferring genders from those same features. ## data collection to obtain a dataset with ground-truth gender, we mined all posts and comments from the r/askmen and r/askwomen subreddits between january 1, 2019 and december 31, 2019 using the pushshift api @xcite . in total, we collected 251,487 original submissions and 4,481,354 comments. for each post, we consider the flair -an optional tag users can apply to their posts to reveal information about themselves or the content of their post -to determine the ground-truth gender of the post author. we considered the author of a post to be true-male if they used one of 'male', 'dude', or ♂ for their flair, and true-female if they used one of 'female', ♀, or ♀♥. of the mined posts, 1,002,079 had some sort of flair, while 660,684 had one of the male or female indicator flairs. this process yielded a data set of 15,140 unique male and 11,241 unique female users, as well as 59 users whose gender-related flair use was inconsistent (i.e. at least one post each with a male- and female-indicating flair). while people who identify as non-binary are known to have higher rates of depression @xcite and thus could benefit from studies like this one, we did not have a reliable method for identifying non-binary users beyond the list of inconsistent users, and the sub-population in our cohort was too small to yield meaningful analysis. for the remainder of the paper we restrict attention to binary genders under the folk conception of gender @xcite .
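a small sketch of the flair-based labeling rule just described (illustrative; the actual matching in the study may normalize flairs differently):

```python
MALE_FLAIRS = {"male", "dude", "\u2642"}                # last entry is ♂
FEMALE_FLAIRS = {"female", "\u2640", "\u2640\u2665"}    # ♀ and ♀♥

def user_gender(flairs):
    """Map all flairs a user posted with to a ground-truth gender label."""
    kinds = set()
    for flair in flairs:
        f = (flair or "").strip().lower()
        if f in MALE_FLAIRS:
            kinds.add("male")
        elif f in FEMALE_FLAIRS:
            kinds.add("female")
    if kinds == {"male"}:
        return "true-male"
    if kinds == {"female"}:
        return "true-female"
    return "inconsistent" if len(kinds) == 2 else "unknown"

print(user_gender(["Male", "dude"]))   # -> true-male
```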
for each of the 26,381 gender-binary users, we collected the user's entire reddit posting and commenting history from january 1, 2019 to december 31, 2019, totaling 1,035,782 original submissions and 19,029,981 comments across 64,162 subreddits. following the literature on social media-driven mental health surveillance @xcite , we defined a user as true-depressed if they authored an original submission or comment in r/depression during the study period and true-control otherwise. the breakdown of gender and depression classes is 721 and 713 depressed males and females respectively, and 14,416 and 10,526 control males and females respectively. replication data for this study can be found at https://github.com/esherma/clpsych2021_gender_and_depression and is available under a data usage agreement. ## methods we fit user-level models to predict depression status from our harvested reddit data. to enable analysis of the impact of gender as a confounder, we fit separate models on two separate data sets: a random sample of the true-male users in our data set, and a random sample of the true-female users. to reduce noise induced by 'throwaway' or 'lurker' accounts, we excluded users who made fewer than 5 posts (submissions + comments) during the study period. this decision could reduce our results' generalizability, since throwaway accounts may be owned by users with separate primary accounts who post with the throwaway differently (e.g. posting more personal information). because depression is a rare outcome in our data, our initial train and test sets had very few depressed individuals (109 train, 26 test). this proved too few to draw meaningful conclusions about the role of gender in depression prediction. we therefore report the performance of our models trained on data sets constructed by performing balanced sampling from the full data. the resulting class breakdowns are: 721 and 613 depressed males and females respectively, and 820 and 712 control males and females respectively. we split each of these sampled data sets 80-20 into train and test sets, stratifying by user. we then constructed a bag-of-words (bow) vocabulary from the submissions and comments for each user in the training sets. we included 1-, 2-, and 3-grams, as well as liwc @xcite and tf-idf (jones, 1972) features. we required that features be used by a minimum of 25 users to be included in the vocabulary. we also removed posts from the r/depression subreddit from each user's bow vector and filtered out terms and subreddits commonly associated with self-disclosure of mental health disorders using the smhd dataset @xcite . to model depression, we used the scikit-learn implementation of regularized logistic regression @xcite . at the end of training, we discarded all but the top 100,000 features using the pairwise mutual information criterion as an additional regularization step. ## discussion in this paper we showed that depression predictions do indeed exhibit gender bias. this was evidenced by a substantially better performance when predicting depression among males than among females. we also identified terms that are used differently between men and women, providing insight into the manifestations of depression beyond modeling dynamics.
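a minimal sketch of the modeling pipeline described in the methods above, using scikit-learn; liwc features are omitted, `min_df` counts documents rather than distinct users, and feature selection here sits inside the pipeline rather than after training as in the paper:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(
    # 1- to 3-grams; min_df=25 approximates the 25-user minimum when each
    # "document" is the concatenation of one user's posts and comments.
    TfidfVectorizer(ngram_range=(1, 3), min_df=25),
    SelectKBest(mutual_info_classif, k=100_000),  # keep the top 100k features
    LogisticRegression(max_iter=1000),            # L2-regularized by default
)
# pipeline.fit(train_texts, train_labels)   # one text blob per user
# pipeline.score(test_texts, test_labels)
```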
| 7,964
|
11
| 2,022
|
The DialPort tools
|
The DialPort project ( http://dialport.org/ ), funded by the National Science Foundation (NSF), covers a group of tools and services that aim at fulfilling the needs of the dialog research community. Over the course of six years, several offerings have been created, including the DialPort Portal and DialCrowd. This paper describes these contributions, which will be demoed at SIGDIAL, including implementation, prior studies, corresponding discoveries, and the locations at which the tools will remain freely available to the community going forward.
|
https://aclanthology.org/2022.sigdial-1.11
|
## introduction the dialport project 1 has created tools and services that respond to needs voiced by many in the dialog research community during several workshops organized by the principal investigators (pis). its offerings are available at no cost to the community with the goal of helping researchers gather high-quality data, and easily assess and compare their dialog systems. this paper and its corresponding demos showcase the dialport portal 2 and dialcrowd 3 . there is an increasing need for large amounts of natural dialog data that can be obtained at reasonable cost and in an interactive manner. static datasets are ineffective for both evaluation and optimization. this has led to the creation of the dialport portal, which facilitates the collection of flexible and evolving data as well as interactive assessment with real users. notably, the portal was used to connect systems and collect data for the interactive evaluation of dialog track @xcite at dstc9 @xcite . another community need centers around how to gather high-quality data when using crowdsourcing platforms. dialcrowd has been constructed to facilitate crowdsourcing by guiding researchers to give clear, understandable explanations of the task to the workers who produce or annotate data. it also aids in calculating the correct level of worker payment. finally, it includes several methods of data quality assessment. the university of southern california (usc) is a partner in dialport. the team at usc works on a tools repository foot_0 and the real challenge. this paper gives background and describes in detail the parts of both the portal and dialcrowd. it also provides information on how to access and use them. as the dialport project draws to an end, the paper indicates the permanent sites where these tools will reside. ## dialport portal the dialport portal was initially conceived with the objective of listing many dialog systems from a variety of sites. this type of platform, with demonstrations, links, and references to various systems, is valuable to both researchers and real users. the concept of the portal evolved, and the different systems were linked such that a user could interact with all of the connected systems, transitioning seamlessly between systems, with the dialog state (consisting of slots such as city or date) shared across systems @xcite . as dialog systems continued to improve, especially with the advent of engaging response generation models @xcite , the portal recruited real users through facebook advertising with the objective of providing researchers with a platform to collect interactive dialogs with real users @xcite . ## dialcrowd to address the many issues that present themselves when using crowdsourcing to collect high-quality data, dialcrowd was created. dialcrowd (lee et al., 2018) is a dialog assessment toolkit which aids researchers with human intelligence task (hit) creation. requesters follow templates on the dialcrowd site, which generate a hit that can be linked for a worker on any crowdsourcing site.
the second version of this tool (huynh et al., 2022) focuses on collecting high-quality data with tools such as: • links to create better instructions • prompts to provide examples and counterexamples with explanations, seen in figure 2 • functionality for adding golden data and duplicate data in each hit • payment suggestions • a feedback area • overall statistics from the hit (time, patterns in the responses, inter-annotator agreement). this helps requesters create a well-structured hit, which in turn helps workers provide better-quality annotations. consequently, it makes it easier to filter responses from potential bots. additional tools include the capability to include a mandatory consent form at the start of the hit, and detailed style changes for the hit. further description of the system along with corresponding images can be found in @xcite . one dialcrowd template, intent classification, has been merged into the new home for dialcrowd, parlai foot_1 , and is now available for use. ## the dialport demo the demos at sigdial will include the dialport portal and dashboard as well as dialcrowd. ## conclusion and future directions the tools presented in this demo help dialog researchers in data gathering and assessment. as the community uses them, more types of applications will arise. the tools have been created in a way that enables additions as the field and the needs evolve. figure 3 : the home page for a system on the dialport dashboard. general information about the conversations collected from the system is displayed. sections such as "words and phrases" and "graphs" can be expanded or collapsed to view additional information about the system. figure 4 : using the dialport dashboard to find all conversations in a system with more than 3 utterances
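a sketch of the kind of golden-data quality check dialcrowd supports (the threshold and data layout are illustrative, not dialcrowd's actual implementation):

```python
def passes_quality_check(worker_answers, gold_answers, min_accuracy=0.8):
    """worker_answers / gold_answers: dicts keyed by the ids of the golden
    items embedded in a HIT; reject the HIT if too many gold items are missed."""
    scored = [worker_answers.get(k) == v for k, v in gold_answers.items()]
    return sum(scored) / max(len(scored), 1) >= min_accuracy

print(passes_quality_check({"q1": "A", "q2": "B"}, {"q1": "A", "q2": "C"}))
```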
| 18,591
|
98
| 2,023
|
Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?
|
Compositionality is a pivotal property of symbolic reasoning. However, how well recent neural models capture compositionality remains underexplored in the symbolic reasoning tasks. This study empirically addresses this question by systematically examining recently published pre-trained seq2seq models with a carefully controlled dataset of multi-hop arithmetic symbolic reasoning. We introduce a skill tree on compositionality in arithmetic symbolic reasoning that defines the hierarchical levels of complexity along with three compositionality dimensions: systematicity, productivity, and substitutivity. Our experiments revealed that among the three types of composition, the models struggled most with systematicity, performing poorly even with relatively simple compositions. That difficulty was not resolved even after training the models with intermediate reasoning steps.
|
https://aclanthology.org/2023.eacl-main.98
|
## introduction integrating symbolic reasoning capabilities into neural models has been a crucial goal of artificial intelligence @xcite . with this in mind, many researchers have investigated how well modern neural models achieve symbolic reasoning (lake and baroni, 2018). however, recent studies have reported conflicting results on this; some suggest that neural models can solve complex multi-hop reasoning @xcite , while others claim that models struggle even with performing simple symbolic operations @xcite . as a step toward further understanding neural models' symbolic reasoning ability, this study systematically analyzes recently published pretrained seq2seq models using a carefully controlled dataset of multi-hop arithmetic symbolic reasoning. ## skill tree in arithmetic reasoning we take arithmetic reasoning as the domain for our exploration because it allows us to synthesize questions systematically, as we show in this paper, which helps examine a model's composition ability in a controlled manner. furthermore, the arithmetic reasoning ability of neural models has gained much attention, as modern large language models still struggle with such problems @xcite . specifically, we use multi-hop arithmetic reasoning problems of the following form: question: @xmath8, @xmath9, @xmath10, @xmath11; answer: 3. here, the value assigned to the variable c is asked. ## experiments and results we adopted nine combinations of training and test domains, as shown in the first column of our results table. we identified that the systematicity generalization of reference and arithmetic operations (setting 2, 3 → 5) is difficult to solve (refer to appendix d for results on other tasks). to better understand why neural models struggle with this setting, we decomposed the complexity of this setting and analyzed the model performance. note that @xcite also suggested that neural models lack systematicity generalization ability in the context of semantic parsing; our results corroborate their findings from the context of arithmetic multi-hop reasoning. is this difficulty specific to arithmetic symbolic reasoning? we investigate this question in our analysis. ## analysis ## related work compositional generalization in neural models and arithmetic multi-hop reasoning have typically been studied separately; this study merges these two directions. as for compositional generalization analysis, several studies analyzed neural models using datasets such as scan (lake and baroni, 2018), cogs @xcite , and cfq @xcite . these mainly focused on compositionality in the context of semantic parsing; the composition ability toward symbol manipulations (e.g., multi-hop arithmetic reasoning) is typically out of focus. as for arithmetic reasoning, neural models' abilities have been analyzed typically using benchmarks such as drop @xcite . it has recently been reported that such datasets have superficial cues @xcite , which makes it unclear how much arithmetic reasoning neural models achieve; our study using a carefully controlled dataset pinpoints the exact weaknesses of neural models in this context. ## conclusion in this study, we have empirically investigated the arithmetic multi-hop reasoning ability of modern neural models through the lens of compositional generalization ability.
to systematically analyze neural models' ability, we have defined a skill tree that organizes the (hierarchical) complexity levels of the multi-hop symbolic reasoning dataset. our experiments have revealed that the major weakness lies in systematicity, even with relatively simple compositions. through the ablation studies, we have also found that the difficulty with systematicity is pronounced when accessing knowledge that is not written in the input but stored in the models. furthermore, even when trained with intermediate steps that explicate the composition, the models struggle to capture systematicity. we also found multi-hop reasoning itself to be difficult under compositional generalization. these findings highlight the exact weaknesses of neural models and encourage studies to overcome such limitations.
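a sketch of how multi-hop questions of the form shown earlier can be synthesized (illustrative; the paper's generator controls depth, operation types, and train/test splits far more carefully):

```python
import random
import string

def make_question(num_hops=3, max_int=9):
    names = list(string.ascii_lowercase[:num_hops + 1])
    value = random.randint(0, max_int)
    lines, env = [f"{names[0]} = {value}"], {names[0]: value}
    for prev, cur in zip(names, names[1:]):
        op, operand = random.choice(["+", "-"]), random.randint(0, max_int)
        lines.append(f"{cur} = {prev} {op} {operand}")
        env[cur] = env[prev] + operand if op == "+" else env[prev] - operand
    return ", ".join(lines) + f", {names[-1]}: ?", env[names[-1]]

# e.g. ('a = 3, b = a + 2, c = b - 4, d = c + 1, d: ?', 2)
print(make_question())
```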
| 21,481
|
15
| 2,024
|
Prompting Implicit Discourse Relation Annotation
|
Pre-trained large language models, such as ChatGPT, achieve outstanding performance in various reasoning tasks without supervised training and were found to have outperformed crowdsourcing workers. Nonetheless, ChatGPT's performance in the task of implicit discourse relation classification, prompted by a standard multiple-choice question, is still far from satisfactory and considerably inferior to state-of-the-art supervised approaches. This work investigates several proven prompting techniques to improve ChatGPT's recognition of discourse relations. In particular, we experimented with breaking down the classification task that involves numerous abstract labels into smaller subtasks. Nonetheless, experiment results show that the inference accuracy hardly changes even with sophisticated prompt engineering, suggesting that implicit discourse relation classification is not yet resolvable under zero-shot or few-shot settings.
|
https://aclanthology.org/2024.law-1.15
|
## introduction pre-trained language models have demonstrated superior performance in various nlp tasks for years, and recently prompt-tuning instead of fine-tuning has become the dominant framework for making efficient use of large language models (llms). llms such as chatgpt have demonstrated human-level performance in various reasoning tasks under zero-shot or few-shot settings using natural language prompts as inputs (see e.g., @xcite ). this has led to a wave of research in prompt engineering to elicit the prediction potential of llms (such as @xcite ). in order to create metadata for textual analysis or to train models for specific nlp tasks, researchers have been relying on annotation performed by trained annotators or crowdsourced workers. recently, chatgpt was shown to outperform crowdsourced workers in annotating political topics, affiliation, and policy frames @xcite . however, it is not yet clear whether a similar prompting approach can also be successful for classifying what discourse relation holds between two text spans. discourse relations (drs) are semantic-pragmatic links between clauses and sentences. they can be explicitly marked by discourse connectives (dcs), such as however and in addition, or they can be inferred from the text without relying on a specific marker -such cases are referred to as implicit relations. for example, there is a causal relation between the following sentences: "mary lost her keys. therefore, she could not enter her office.", and the same relation can still be inferred without the dc therefore. discourse relation analysis is useful for various downstream tasks, such as summarization @xcite and relation extraction @xcite , and discourse-annotated data serves as the basis of various linguistic research (e.g. @xcite ). however, classifying implicit drs involves cognitive processing that is difficult even for humans in different languages @xcite @xcite @xcite and poses a challenge for nlp (e.g., 64.58% accuracy and 49.03% f1 on pdtb 2.0 in @xcite ), even with powerful llms. ## methodology prompting llms to classify among specific labels typically requires listing all valid options. the input becomes even longer when an example per class is included for in-context learning. instead, we propose several methods to break down the 14-way dr classification task into smaller sub-tasks, which are described in detail below.
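a minimal sketch of the standard multiple-choice prompt referred to above (the label list here is abbreviated and illustrative; the experiments use the 14 level-2 senses):

```python
SENSES = ["Cause", "Concession", "Contrast", "Conjunction", "Condition"]

def build_mc_prompt(arg1: str, arg2: str) -> str:
    options = "\n".join(f"{chr(65 + i)}. {s}" for i, s in enumerate(SENSES))
    return ("What discourse relation holds between the two arguments?\n"
            f"Arg1: {arg1}\nArg2: {arg2}\n{options}\n"
            "Answer with a single letter.")

print(build_mc_prompt("Mary lost her keys.",
                      "She could not enter her office."))
```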
we found that chatgpt is able to produce pdtb 2.0 labels even when the options are not provided in the prompt, suggesting that its training data must have included at least some texts related to pdtb-style dr analysis (e.g., possibly an annotation manual or research article). strictly speaking, therefore, the inference made by chatgpt is not completely zero-shot, because it is informed about the dr labels. this may explain why the two-step dc insertion prompt, which does not involve any dr labels at all, failed completely at the task. the underperformance of the per-class binary prompt suggests that prompting a discriminative comparison among all possible options at once is more accurate than separate detection of each individual dr sense. too many relation senses were rejected when the model was presented with the binary choice of yes/no; some of these rejected senses were accepted when compared with an even more unlikely sense. the per-class approach nevertheless provides a framework for collecting multi-label annotations, which is important not only for dr annotation but also for other tasks like natural language inference and sentiment analysis. we also experimented with running the mc prompts multiple times with a higher temperature setting, or explicitly asking for multiple labels in the prompt. chatgpt only occasionally produced multiple labels in these cases, possibly due to the dominance of single-label annotated data in its training history. the better performance of the per-class verification approach compared with binary questions shows that the verification questions actually worked. this approach is related to chain-of-thought prompting @xcite; the identification of the arguments of the level-3 sense justifies the presence of the level-2 relation. we will experiment with using this approach to refine the mc prompt. another direction is to develop other approaches to disassemble the dr annotation task. breaking down the multi-way classification task into smaller tasks was successful in dialogue structure annotation @xcite, using a heavily engineered step-by-step scheme (e.g. > 6 steps, each asking for specific features of the input). such a tailored annotation scheme might also be necessary to prompt implicit dr annotations.
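to make the two prompt formats discussed above concrete, here is a minimal sketch of (1) a single multiple-choice question over all senses and (2) the per-class binary decomposition. the label subset, prompt wording, and the `ask_llm` wrapper are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: two prompt formats for implicit discourse relation (DR) classification.
# Assumptions: the sense list below is an illustrative subset of PDTB-style
# level-2 senses (the paper uses a 14-way set), and `ask_llm` stands in for
# any chat-LLM API call.

from typing import List

SENSES: List[str] = [
    "Cause", "Concession", "Contrast", "Conjunction",
    "Instantiation", "Level-of-detail", "Asynchronous", "Synchronous",
]

def ask_llm(prompt: str) -> str:
    """Placeholder for a chat-LLM call (e.g., ChatGPT); returns the reply text."""
    raise NotImplementedError

def multiple_choice_prompt(arg1: str, arg2: str) -> str:
    # One question listing all options at once (the "MC" format).
    options = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(SENSES))
    return (
        "What discourse relation holds between the two arguments?\n"
        f"Arg1: {arg1}\nArg2: {arg2}\n"
        f"Choose exactly one option:\n{options}\nAnswer with the option name."
    )

def per_class_binary_prompts(arg1: str, arg2: str) -> List[str]:
    # The task broken into independent yes/no sub-tasks, one per sense.
    return [
        (f"Arg1: {arg1}\nArg2: {arg2}\n"
         f"Does a {s} relation hold between Arg1 and Arg2? Answer yes or no.")
        for s in SENSES
    ]

def classify_mc(arg1: str, arg2: str) -> str:
    return ask_llm(multiple_choice_prompt(arg1, arg2)).strip()

def classify_per_class(arg1: str, arg2: str) -> List[str]:
    # Collects every sense the model accepts; naturally yields multi-label output.
    answers = [ask_llm(p) for p in per_class_binary_prompts(arg1, arg2)]
    return [s for s, a in zip(SENSES, answers)
            if a.strip().lower().startswith("yes")]
```

as the discussion above notes, the per-class format makes multi-label collection easy, but the single discriminative comparison over all options tends to be more accurate.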
| 33,772
|
278
| 2,021
|
SentNoB: A Dataset for Analysing Sentiment on Noisy Bangla Texts
|
In this paper, we propose an annotated sentiment analysis dataset made of informally written Bangla texts. This dataset comprises public comments on news and videos collected from social media, covering 13 different domains, including politics, education, and agriculture. These comments are labeled with one of three polarity labels: positive, negative, and neutral. One significant characteristic of the dataset is that each of the comments is noisy in terms of the mix of dialects and grammatical incorrectness. Our experiments to develop a benchmark classification system show that hand-crafted lexical features provide superior performance compared to neural network and pretrained language models. We have made the dataset and accompanying models presented in this paper publicly available at https://git.io/JuuNB .
|
https://aclanthology.org/2021.findings-emnlp.278
|
## introduction sentiment analysis is one of the classic problems in computational linguistics, and it has shown a massive impact on different real-life applications. the capability to quantify the sentiment polarity of english texts has enabled solutions for a diverse set of problems like understanding the possible movement of stock markets, public sentiment towards any event or product, and understanding client satisfaction for customer support. a major reason behind such success is the amount of collaborative effort invested in the research and development of public resources like sentiment140 @xcite, sentiwordnet @xcite, the imdb review corpus @xcite, the stanford sentiment treebank @xcite, ts-lex @xcite, and the semeval twitter sentiment analysis corpus @xcite. some example comments with their polarity labels: positive: [b] অ অ অ অ সাধারন । আিম েকান িদনই পারেবা না । িহংসা হেচ্ছ [e] great. i will never be able to do it. feeling jealous. neutral: [b] িপছেন দু জন মু িতর্ দারা করায় লাগেছ [e] two people placed idols behind them. negative: [b] ভাই আপনার ক ােমরা েমনেক িদেলন্না একাই সব সাবার করেলন, হা হা হা [e] bro, you didn't share with your cameraman and ate the whole thing, ha ha ha. bangla, with 268m speakers, is the native language of bangladesh and some regions of india, such as west bengal. while technology is dramatically improving the lives of people from these densely populated and economically burgeoning regions, there is a timely need to build technologies that can understand the language, enhancing the overall impact on social welfare and businesses. existing datasets for sentiment analysis for a low-resource language like bangla suffer from three major limitations: 1) none to slight inter-annotator agreement scores, questioning the annotation reliability (e.g., 0.11 in ashik et al., 2019 and 0.18 in islam et al., 2020), 2) lack of cross-domain generalization capability due to data drawn from a narrow domain @xcite @xcite, and 3) lack of public availability for further research @xcite @xcite @xcite. in this paper, we aim at creating a domain-representative sentiment polarity classification dataset by collecting public opinions on various topics. during the data collection and annotation process, we invest effort to improve the quality of the dataset using data curation techniques. on one hand, this includes steps for duplicate removal, while on the other hand we increase the vocabulary size by incorporating instances that help to increase the unique word percentage. our contributions can be summarized as follows: ## development of sentnob data collection we defined the following objectives before creating the dataset, as we believe these objectives will enhance the generalization capability of sentnob: 1) samples should represent many different domains to encourage domain-independent solutions. 2) samples should contribute to making the dataset less repetitive. we start by collecting public comments on articles on the 13 most popular topics from prothom alo foot_1, the most circulated newspaper in bangladesh foot_2. then we collect comments from a set of youtube videos on similar topics. out of ≈ 31k collected comments, we keep the comments that are written using only the bangla alphabet. to reduce repetitiveness and noise, we remove duplicates and exclude instances shorter than three or longer than 50 tokens. 
additionally, we aim at increasing the vocabulary size by incorporating instances that raise the proportion of unique words. ## methodology in this section, we describe the methods we investigate to develop a benchmark model for classifying sentiment polarity on sentnob. we start by training linear svm @xcite models with traditional hand-engineered linguistic features. then, we experiment with recurrent neural network models and pre-trained transformer-based language models due to their recent success on a wide variety of nlp tasks. ## experimental setup we implement our experimental framework using pytorch @xcite, transformers @xcite, and scikit-learn @xcite. we evaluate our methods using micro-averaged f1. as baseline systems, we compare our results with the majority, random, and weighted random baselines. to reduce noise, we replace numerical tokens with a cc token and normalize english and bangla sentence stoppers. due to the class imbalance, we perform a per-topic stratified split to create training (80%), development (10%), and test (10%) sets. while we evaluate all individual features using the same hyper-parameter setting, we tune the svm regularizer c foot_3 on the validation set performance for the best performing feature combination. for training the bilstm model with mini-batches, we left-pad the instances and perform hyper-parameter tuning on the learning rate, batch size, dropout rate, and number of lstm cells and layers. for fine-tuning mbert, we only tune the learning rate and batch size. ## conclusion in this paper, we present sentnob, a dataset for analysing sentiment in noisy bangla texts collected from the comments sections of bangla news and videos from 13 different domains. sentnob contains ≈ 15k instances labeled with a positive, negative, or neutral class label. we found that lexical feature combinations demonstrate stronger classification performance compared to neural models. as future work, we will focus on different preprocessing techniques and further investigation of pre-trained language models.
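the following is a minimal sketch of the kind of lexical-feature svm baseline described above. the concrete n-gram ranges and the use of tf-idf weighting are our assumptions, not the paper's exact feature set; the c-tuning loop and micro-averaged f1 mirror the setup described in the experimental section.

```python
# Sketch: linear SVM over word/char n-gram features for 3-way polarity,
# evaluated with micro-averaged F1, tuning the regularizer C on the dev split.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def build_model(C: float = 1.0) -> Pipeline:
    # Word and character n-grams concatenated; char n-grams are often robust
    # to the spelling noise and dialect mixing described above.
    features = FeatureUnion([
        ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
        ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))),
    ])
    return Pipeline([("feats", features), ("svm", LinearSVC(C=C))])

def tune_and_evaluate(train_x, train_y, dev_x, dev_y, test_x, test_y) -> float:
    best_c, best_f1 = 1.0, -1.0
    for c in (0.01, 0.1, 1.0, 10.0):          # candidate grid is illustrative
        model = build_model(C=c).fit(train_x, train_y)
        f1 = f1_score(dev_y, model.predict(dev_x), average="micro")
        if f1 > best_f1:
            best_c, best_f1 = c, f1
    final = build_model(C=best_c).fit(train_x, train_y)
    return f1_score(test_y, final.predict(test_x), average="micro")
```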
| 9,826
|
754
| 2,020
|
Balancing Training for Multilingual Neural Machine Translation
|
When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others. Standard practice is to up-sample less resourced languages to increase representation, and the degree of up-sampling has a large effect on the overall performance. In this paper, we propose a method that instead automatically learns how to weight training data through a data scorer that is optimized to maximize performance on all test languages. Experiments on two sets of languages under both one-to-many and many-to-one MT settings show our method not only consistently outperforms heuristic baselines in terms of average performance, but also offers flexible control over the performance of which languages are optimized.
|
https://aclanthology.org/2020.acl-main.754
|
## introduction multilingual models are trained to process different languages in a single model, and have been applied to a wide variety of nlp tasks such as text classification @xcite, syntactic analysis @xcite, named-entity recognition @xcite, and machine translation (mt) @xcite. these models have two particularly concrete advantages over their monolingual counterparts. first, deploying a single multilingual model is much more resource efficient than deploying one model for each language under consideration @xcite. second, multilingual training makes it possible to transfer knowledge from high-resource languages (hrls) to improve performance on low-resource languages (lrls) @xcite @xcite @xcite. a common problem with multilingual training is that the data from different languages are both heterogeneous (different languages may exhibit very different properties) and imbalanced (there may be wildly varying amounts of training data for each language). thus, while lrls will often benefit from transfer from other languages, for languages where sufficient monolingual data exists, performance will often decrease due to interference from the heterogeneous nature of the data. this is especially the case for modestly-sized models that are conducive to efficient deployment @xcite. to balance the performance on different languages, the standard practice is to heuristically adjust the distribution of data used in training, specifically by over-sampling the training data from lrls @xcite @xcite. for example, @xcite sample training data from different languages based on the dataset size scaled by a heuristically tuned temperature term. however, such heuristics are far from perfect. first, @xcite find that the exact value of this temperature term significantly affects results, and we further show in experiments that the ideal temperature varies significantly from one experimental setting to another. second, this heuristic ignores factors other than data size that affect the interaction between different languages, despite the fact that language similarity has been empirically proven important in examinations of cross-lingual transfer learning @xcite. in this paper, we ask the question: "is it possible to learn an optimal strategy to automatically balance the usage of data in multilingual model training?" to this effect, we propose a method that learns a language scorer that can be used throughout training to improve the model performance on all languages. our method is based on the recently proposed approach of differentiable data selection @xcite, a general machine learning method for optimizing the weighting of different training examples to improve a pre-determined objective. in this work, we take this objective to be the average loss from different languages, and directly optimize the weights of training data from each language to maximize this objective on a multilingual development set. this formulation has no heuristic temperatures, and enables the language scorer to consider the interaction between languages. based on this formulation, we propose an algorithm that improves the ability of dds to optimize multiple model objectives, which we name multidds. this is particularly useful in the case where we want to optimize performance on multiple languages simultaneously. 
specifically, multidds (1) has a more flexible scorer parameterization, (2) is memory efficient when training on multiple languages, and (3) stabilizes the reward signal so that it improves all objectives simultaneously instead of being overwhelmed by a single objective. while the proposed methods are model-agnostic and thus potentially applicable to a wide variety of tasks, we specifically test them on the problem of training multilingual nmt systems that can translate many languages in a single model. we perform experiments on two sets of languages (one with more similarity between the languages, one with less) and two translation directions (one-to-many and many-to-one, where the "one" is english). results show that multidds consistently outperforms various baselines in all settings. moreover, we demonstrate that multidds provides a flexible framework that allows the user to define a variety of optimization objectives for multilingual models. ## multilingual training preliminaries monolingual training objective a standard nmt model is trained to translate from a single source language $s$ to a target language $t$. the parameters of the model are generally trained by preparing a training dataset $D_{\text{train}}$, and defining the empirical distribution of sentence pairs $(x, y)$ sampled from $D_{\text{train}}$ as $P$. we then minimize the empirical risk $J(\theta, P)$, which is the expected value of the loss function $\ell(x, y; \theta)$ over this distribution: $J(\theta, P) = \mathbb{E}_{(x,y) \sim P}[\ell(x, y; \theta)]$ (eq. 1). multilingual training formulation a multilingual nmt model can translate $n$ pairs of languages $\{s_1\text{-}t_1, s_2\text{-}t_2, \ldots, s_n\text{-}t_n\}$, from any source language $s_i$ to its corresponding target $t_i$. to train such a multilingual model, we have access to $n$ sets of training data $D^1_{\text{train}}, D^2_{\text{train}}, \ldots, D^n_{\text{train}}$, where $D^i_{\text{train}}$ is the training data for language pair $s_i$-$t_i$. from these datasets, we can define $P_i$, the distribution of sentences from $s_i$-$t_i$, and consequently also define a risk $J(\theta, P_i)$ for each language following the monolingual objective in eq. 1. however, the question now becomes: "how do we define an overall training objective given these multiple separate datasets?" several different methods to do so have been proposed in the past. to discuss all of these different methods in a unified framework, we further define a distribution $P_D(i)$ over the $n$ sets of training data, and define our overall multilingual training objective as $J_{\text{mult}}(\theta, P_D) = \mathbb{E}_{i \sim P_D(i)}[J(\theta, P_i)]$. in practice, this overall objective can be approximated by selecting a language according to $\tilde{i} \sim P_D(i)$, then calculating gradients with respect to $\theta$ on a batch of data from $D^{\tilde{i}}_{\text{train}}$. evaluation methods another important question is how to evaluate the performance of such multilingual models. during training, it is common to use a separate development set for each language, $D^1_{\text{dev}}, D^2_{\text{dev}}, \ldots, D^n_{\text{dev}}$, to select the best model. given that the objective of multilingual training is generally to optimize the performance on all languages simultaneously @xcite, we can formalize this objective as minimizing the average of dev risks: $J_{\text{dev}}(\theta, D_{\text{dev}}) = \frac{1}{n} \sum_{i=1}^{n} J(\theta, D^i_{\text{dev}})$. relation to heuristic strategies this formulation generalizes a variety of existing techniques that define $P_D(i)$ using a heuristic strategy and keep it fixed throughout training. uniform: the simplest strategy sets $P_D(i)$ to a uniform distribution, sampling minibatches from each language with equal frequency @xcite. proportional: it is also common to sample data in portions equivalent to the size of the corresponding corpora in each language @xcite. 
temperature-based: finally, because both of the strategies above are extreme (proportional sampling under-weights lrls, and uniform sampling causes overfitting by re-sampling sentences from limited-size lrl datasets), it is common to sample according to data size exponentiated by a temperature term $\tau$ @xcite: $P_D(i) \propto |D^i_{\text{train}}|^{1/\tau}$. when $\tau = 1$ or $\tau = \infty$ this is equivalent to proportional or uniform sampling respectively, and when a number in the middle is chosen it becomes possible to balance between the two strategies. as noted in the introduction, these heuristic strategies have several drawbacks regarding sensitivity to the $\tau$ hyperparameter and lack of consideration of similarity between the languages. in the following sections we propose methods to resolve these issues. ## differentiable data selection now we turn to the question: is there a better way to optimize $P_D(i)$ so that we can achieve our final objective of performing well on a representative development set over all languages, i.e. minimizing $J_{\text{dev}}(\theta, D_{\text{dev}})$? in order to do so, we turn to the recently proposed method of differentiable data selection @xcite, a general-purpose machine learning method that allows for weighting of training data to improve performance on a separate set of held-out data. specifically, dds uses a technique called bilevel optimization @xcite that learns a second set of parameters $\psi$, which modify the training objective that we use to learn $\theta$, so as to maximize the final objective $J_{\text{dev}}(\theta, D_{\text{dev}})$. specifically, it proposes to learn a data scorer $p(x, y; \psi)$, parameterized by $\psi$, such that training using data sampled from the scorer optimizes the model performance on the dev set. taking the example of learning an nmt system to translate a single language pair using dds, the general objective in eq. 1 could be rewritten as $J(\theta, \psi) = \mathbb{E}_{(x,y) \sim p(x,y;\psi)}[\ell(x, y; \theta)]$. dds optimizes $\theta$ and $\psi$ iteratively throughout the training process. given a fixed $\psi$, the update rule for $\theta$ is simply a gradient step on data sampled from the scorer $p(x, y; \psi)$. to update the data scorer, dds uses reinforcement learning with a reward function that approximates the effect of the training data on the model's dev performance. ## dds for multilingual training in this section, we use the previously described dds method to derive a new framework that, instead of relying on fixed heuristics, adaptively optimizes the usage of multilingual data for the best model performance on multiple languages. we illustrate the overall workflow in fig. 1. first, we note two desiderata for our multilingual training method: 1) generality: the method should be flexible enough so that it can be utilized universally for different multilingual tasks and settings (such as different translation directions for nmt). 2) scalability: the method should be stable and efficient if one wishes to scale up the number of languages that a multilingual model supports. based on these two properties, we introduce multidds, an extension of the dds method tailored for multilingual training. method multidds directly parameterizes the standard dataset sampling distribution for multilingual training with $\psi$: $P_D(i; \psi) = \exp(\psi_i) / \sum_{k=1}^{n} \exp(\psi_k)$, and optimizes $\psi$ to minimize the dev loss. notably, unlike standard dds we make the design decision to weight training datasets rather than score each training example $(x, y)$ directly, as it is more efficient and also likely easier to learn. we can thus rewrite the objective in eq.
2 to incorporate both $\psi$ and $\theta$ as $J_{\text{mult}}(\theta, \psi) = \mathbb{E}_{i \sim P_D(i; \psi)}[J(\theta, P_i)]$. in other words, while the general dds framework evaluates the model performance on a single dev set and optimizes the weighting of each training example, our multilingual training objective evaluates the performance over an aggregation of $n$ dev sets and optimizes the weighting of the $n$ training sets. the reward signal for updating $\psi_t$ is the alignment between the training gradient of language $i$ and the gradient of the aggregated dev risk: $R(i; \theta_t) \approx \cos\big(\nabla_\theta J(\theta_t, D_{\text{dev}}), \nabla_\theta J(\theta_t, P_i)\big)$ (eq. 10). intuitively, eq. 10 implies that we should favor the training language $i$ if its gradient aligns with the gradient of the aggregated dev risk of all languages. implementing the scorer update the pseudocode for the training algorithm using multidds can be found in algorithm 1. notably, we do not update the data scorer $\psi$ on every training step, because doing so is too computationally expensive for nmt training @xcite. instead, after training the multilingual model $\theta$ for a certain number of steps, we update the scorer for all languages. this implementation is not only efficient, but also allows us to re-estimate more frequently the effect of languages that have a low probability of being sampled. in order to do so, it is necessary to calculate the effect of each training language on the current model, namely $R(i; \theta_t)$. we estimate this value by sampling a batch of data from each $D^i_{\text{train}}$ to get the training gradient for $\theta_t$, and use this to calculate the reward for this language. this process is detailed in line 11 of algorithm 1. unlike the original dds algorithm, which requires storing $n$ model gradients, foot_2 this approximation does not require extra memory even if $n$ is large, which is important given recent efforts to scale multilingual training to 100+ @xcite or even 1000+ languages @xcite. ## stabilized multi-objective training in our initial attempts to scale dds to highly multilingual training, we found that one challenge was that the reward for updating the scorer became unstable. this is because the gradient of a multilingual dev set is less consistent and of higher variance than that of a monolingual dev set, which influences the fidelity of the data scorer reward. [algorithm 1: training with multidds. input: $D_{\text{train}}$; $m$: amount of data used to train the multilingual model before updating $\psi$. output: the converged multilingual model $\theta^*$. initialization: set $P_D(i; \psi)$ proportional to dataset size.] thus, instead of using the gradient alignment between the training data and the aggregated loss of the $n$ dev sets as the reward, we propose a second approach: first calculate the gradient alignment reward between the data and each of the $n$ dev sets, then take the average of these as the final reward. this can be expressed mathematically as $R_{\text{stable}}(i; \theta_t) = \frac{1}{n} \sum_{j=1}^{n} \cos\big(\nabla_\theta J(\theta_t, D^j_{\text{dev}}), \nabla_\theta J(\theta_t, P_i)\big)$. to implement this, we simply replace the standard reward calculation at line 11 of algorithm 1 with the stable reward. we name this setting multidds-s. in § 6.6 we show that this method has less variance than the reward in eq. 10. ## related work our work is related to multilingual training methods in general. multilingual training has a rich history @xcite @xcite, but has become particularly prominent in recent years due to the ability of neural networks to easily perform multi-task learning @xcite @xcite. as stated previously, recent results have demonstrated the importance of balancing hrls and lrls during multilingual training @xcite, which is largely done with heuristic sampling using a temperature term; multidds provides a more effective and less heuristic method. 
@xcite; @xcite choose languages from multilingual data to improve performance on a particular language, while our work instead aims to train a single model that handles translation between many languages. @xcite @xcite propose improvements to the model architecture to improve multilingual performance, while multidds is model-agnostic and optimizes multilingual data usage. our work is also related to machine learning methods that balance multitask learning @xcite. for example, @xcite propose to weight the training loss from a multitask model based on the uncertainty of each task. our method focuses on optimizing multilingual data usage, and is both somewhat orthogonal to and less heuristic than such loss-weighting methods. finally, our work is related to meta-learning, which is used in hyperparameter optimization @xcite, model initialization for fast adaptation @xcite, and data weighting @xcite. notably, @xcite apply meta-learning to learn an nmt model initialization for a set of languages, so that it can be quickly fine-tuned for any language. this differs in motivation from our method because it requires an adapted model for each of the languages, while our method aims to optimize a single model to support all languages. to our knowledge, our work is the first to apply meta-learning to optimize data usage for multilingual objectives. ## conclusion in this paper, we propose multidds, an algorithm that learns a language scorer to optimize multilingual data usage to achieve good performance on many different languages. we extend and improve over previous work on dds @xcite, with a more efficient algorithmic instantiation tailored for the multilingual training problem and a stable reward to optimize multiple objectives. multidds not only outperforms prior methods in terms of overall performance on all languages, but also provides a flexible framework to prioritize different multilingual objectives. notably, multidds is not limited to nmt, and future work may consider applications to other multilingual tasks. in addition, there are other conceivable multilingual optimization objectives than those we explored in § 6.4.
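to illustrate the two ingredients above, here is a schematic sketch of (1) temperature-based dataset sampling and (2) the gradient-alignment reward in its plain (eq. 10) and stabilized (multidds-s) forms. this is our re-implementation under the stated equations, not the authors' code; gradients are represented as flat vectors for clarity.

```python
# Sketch: temperature sampling and MultiDDS-style scorer rewards.

import numpy as np

def temperature_sampling(sizes, tau=1.0):
    """P_D(i) proportional to |D_i|^(1/tau); tau=1 is proportional sampling,
    tau -> infinity approaches uniform sampling."""
    probs = np.asarray(sizes, dtype=float) ** (1.0 / tau)
    return probs / probs.sum()

def cosine(u, v, eps=1e-8):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def reward_plain(train_grad, dev_grads):
    """Eq. 10: align a language's training gradient with the aggregated
    gradient over all n dev sets."""
    return cosine(train_grad, np.sum(dev_grads, axis=0))

def reward_stable(train_grad, dev_grads):
    """MultiDDS-S: average the per-dev-set alignments (lower variance)."""
    return float(np.mean([cosine(train_grad, g) for g in dev_grads]))

# Toy example with 3 languages and 8-dimensional "gradients".
rng = np.random.default_rng(0)
dev_grads = rng.normal(size=(3, 8))     # one dev-set gradient per language
train_grads = rng.normal(size=(3, 8))   # one training gradient per language
rewards = [reward_stable(g, dev_grads) for g in train_grads]
# Languages with higher reward are up-weighted in P_D(i; psi),
# e.g. via a REINFORCE-style update of psi.
```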
| 2,499
|
7
| 2,024
|
BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via Self-Distillation
|
The upscaling of Large Language Models (LLMs) has yielded impressive advances in natural language processing, yet it also poses significant deployment challenges. Weight quantization has emerged as a widely embraced solution to reduce memory and computational demands. This paper introduces BitDistiller, a framework that synergizes Quantization-Aware Training (QAT) with Knowledge Distillation (KD) to boost the performance of LLMs at ultra-low precisions (sub-4-bit). Specifically, BitDistiller first incorporates a tailored asymmetric quantization and clipping technique to maximally preserve the fidelity of quantized weights, and then proposes a novel Confidence-Aware Kullback-Leibler Divergence (CAKLD) objective, which is employed in a self-distillation manner to enable faster convergence and superior model performance. Empirical evaluations demonstrate that BitDistiller significantly surpasses existing methods in both 3-bit and 2-bit configurations on general language understanding and complex reasoning benchmarks. Notably, BitDistiller is shown to be more cost-effective, demanding fewer data and training resources. The code is available at https://github.com/DD-DuDa/BitDistiller.
|
https://aclanthology.org/2024.acl-long.7
|
## introduction scaling up model sizes has been pivotal to the success of large language models (llms), yielding unprecedented performance across diverse natural language processing tasks @xcite @xcite. however, such escalating model size poses significant challenges in deployment, particularly on resource-constrained devices, due to the substantial memory footprint and computational requirements. weight quantization has emerged as a popular strategy to enhance the efficiency and accessibility of llms by reducing model size with minimal performance loss @xcite. in this work, we present bitdistiller, a novel framework that synergizes qat with knowledge distillation (kd) to significantly boost the performance of sub-4-bit quantized llms. to minimize quantization error, bitdistiller employs a tailored asymmetric quantization and clipping strategy to maintain the capabilities of the full-precision model as much as possible, particularly at ultra-low-bit levels. for efficient and effective low-bit representation learning, bitdistiller leverages a simple yet effective self-distillation approach, wherein the full-precision model acts as its own teacher to refine the low-bit student model. notably, bitdistiller innovates with a confidence-aware kullback-leibler divergence (cakld) objective that optimizes knowledge-transfer efficacy, enabling faster convergence and enhanced model performance. our empirical evaluations, conducted on a diverse suite of general language understanding and complex reasoning tasks including mathematics and coding, demonstrate that bitdistiller significantly outperforms existing ptq and qat methods in the realm of sub-4-bit quantization. as illustrated in figure 1, bitdistiller achieves the most favorable scaling law in both 3-bit and 2-bit configurations on the code reasoning benchmark. moreover, bitdistiller is demonstrated to be more cost-effective, requiring less training data and fewer training resources, thereby marking a significant advancement toward deploying robust large language models on resource-constrained devices. ## experiments we evaluate bitdistiller on the llama-2 @xcite families and domain-specific llms with sub-4-bit quantization. we set up comparative experiments to demonstrate the proficiency of our method against existing ptq and qat methods. our findings illustrate that bitdistiller substantially enhances both general language performance and accuracy on reasoning tasks. ## conclusion bitdistiller leverages qat with self-distillation to boost sub-4-bit llm performance. the asymmetric quantization and clipping strategies, coupled with the innovative cakld objective, facilitate faster learning and superior performance. bitdistiller outperforms existing ptq and qat methods, achieving notable improvements in 3/2-bit settings across diverse language and reasoning tasks. moreover, bitdistiller is more cost-efficient, with fewer data and training resources required.
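the following is a minimal sketch of the two components named above, under explicit assumptions: a plain asymmetric min/max weight quantizer (the paper's tailored clipping search is omitted), and a confidence-aware blend of reverse and forward kl. we assume here that the mixing coefficient gamma is estimated as the teacher's average probability on the ground-truth tokens; treat this as a plausible reading of the cakld objective, not the reference implementation.

```python
# Sketch: asymmetric quantization + a confidence-aware KL blend (CAKLD-like).

import torch
import torch.nn.functional as F

def asym_quantize(w: torch.Tensor, bits: int = 3) -> torch.Tensor:
    """Asymmetric uniform quantization with a separate scale and zero point."""
    qmax = 2 ** bits - 1
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min).clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round((w - w_min) / scale), 0, qmax)
    return q * scale + w_min  # dequantized weights for the QAT forward pass

def cakld(student_logits, teacher_logits, gamma: float) -> torch.Tensor:
    """gamma * reverse-KL + (1 - gamma) * forward-KL over the vocabulary."""
    p = F.log_softmax(teacher_logits, dim=-1)  # teacher log-probs
    q = F.log_softmax(student_logits, dim=-1)  # student log-probs
    # F.kl_div(input, target, log_target=True) computes KL(target || input).
    fwd = F.kl_div(q, p, log_target=True, reduction="batchmean")  # KL(teacher||student)
    rev = F.kl_div(p, q, log_target=True, reduction="batchmean")  # KL(student||teacher)
    return gamma * rev + (1.0 - gamma) * fwd

@torch.no_grad()
def teacher_confidence(teacher_logits, target_ids) -> float:
    """Assumed gamma estimate: teacher's mean probability on gold tokens."""
    probs = F.softmax(teacher_logits, dim=-1)
    gold = probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    return gold.mean().item()
```

the intuition behind such a blend: when the teacher is confident, the mode-seeking reverse kl keeps the student from spreading probability mass; when the teacher is uncertain, the mean-seeking forward kl transfers the full distribution.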
| 27,176
|
35
| 2,025
|
AMPS: ASR with Multimodal Paraphrase Supervision
|
Spontaneous or conversational multilingual speech presents many challenges for state-of-the-art automatic speech recognition (ASR) systems. In this work, we present a new technique AMPS, that augments a multilingual multimodal ASR system with paraphrase-based supervision for improved conversational ASR in multiple languages, including Hindi, Marathi, Malayalam, Kannada, and Nyanja. We use paraphrases of the reference transcriptions as additional supervision while training the multimodal ASR model and selectively invoke this paraphrase objective for utterances with poor ASR performance. Using AMPS with a state-of-the-art multimodal model SeamlessM4T, we obtain significant relative reductions in word error rates (WERs) of up to 5%. We present detailed analyses of our system using both objective and human evaluation metrics.
|
https://aclanthology.org/2025.naacl-short.35
|
## introduction automatic speech recognition (asr) systems have shown considerable progress in recent years but still falter when subjected to spontaneous conversational speech containing disfluencies, loosely articulated sounds, and other noise factors @xcite. this degradation in asr performance can be largely attributed to the unavailability of labeled spontaneous speech in most languages. how can we effectively utilize the limited quantities of existing labeled spontaneous speech? towards this, we propose amps (asr with multimodal paraphrase supervision), which augments an existing multilingual multimodal asr system with paraphrase-based supervision to improve asr performance on spontaneous speech in multiple languages. unlike standalone asr models that are exclusively trained to perform asr, multimodal models (such as speecht5 @xcite, maestro @xcite, etc.) are trained on multiple tasks. amps foot_0 leverages the multimodal nature of seamlessm4t by introducing a paraphrasing objective jointly with asr. along with using spontaneous speech and its corresponding transcription to train the speech-to-text pathway in seamlessm4t, amps also uses paraphrases of the reference transcriptions as additional supervision to train the text-to-text pathway. we selectively employ paraphrase-based augmentation during training when the asr loss is high (as determined by a predetermined threshold); a high asr loss is typically triggered by noise or poorly enunciated words in spontaneous speech. this selective intervention offers the model an alternate path of opting for semantically close words and phrases when the audio is not very clear. it is important that the paraphrases do not significantly differ in word order from the original transcripts, thus enabling the model to easily align representations of speech, text, and its paraphrase. with amps, we derive significant improvements in asr for spontaneous speech in hindi, marathi, malayalam, kannada, and nyanja compared to strong asr-only finetuned baselines. we report improvements not only in terms of word error rate (wer) reductions but also using semantic evaluation metrics. we also conduct a detailed human evaluation comparing the outputs of amps with the outputs from finetuning only with the asr objective and show consistent improvements in human scores. we also present many ablations, including different paraphrasing techniques, the influence of varying thresholds on the performance of amps, and using varying amounts of training data. we envision that techniques like amps could be used to improve asr of atypical speech for people with speech impairments, where comprehensibility of the transcripts is critical (more than faithfulness of the transcripts to the underlying speech, as highlighted in very recent work by @xcite). ## related work in recent years, multimodal models for speech recognition have gained significant recognition @xcite @xcite. these models are capable of processing both speech and text inputs and can be adapted for tasks such as translation and speech generation. a notable example is meta ai's seamlessm4t @xcite, which supports nearly 100 languages. one of the key advantages of such models is their ability to exploit text-only training to fine-tune shared parameters in the asr pipeline. among recent approaches to text-based adaptation for asr models, one way of leveraging text-only data for asr finetuning is to train the text decoder with a paraphrasing objective. 
emerging research @xcite has shown that text paraphrasing can be used to augment llm performance, but we are the first to show how paraphrases can be used to improve asr. @xcite is a recent study focusing on meaning preservation in disordered speech transcription, but it does not offer any technique to help improve meaning preservation in asr outputs. ## methodology amps scaffolds on a multimodal base model comprising a speech encoder, a text encoder, and a shared decoder that takes inputs from both encoders. seamlessm4t is an example of such a model, capable of performing multiple tasks including text-to-text translation (t2t) and speech-to-text transcription/translation (s2t). we introduce a new auxiliary task of text-to-text paraphrasing. this allows the model to predict words that are semantically similar and fit within the context of the sentence, without significantly altering its word order. the shared decoder architecture of seamlessm4t allows us to exploit common parameters of both the s2t and t2t pipelines and enhance the asr performance of the model. consider a speech utterance $x$, its reference transcription $y$, and a paraphrase $y'$ of the transcription. given a labeled instance $\{x, y, y'\}$, the asr and paraphrase losses are the cross-entropy losses of the s2t and t2t pathways respectively: $\mathcal{L}_{\text{ASR}} = -\log P_{\text{S2T}}(y \mid x)$ and $\mathcal{L}_{\text{PAR}} = -\log P_{\text{T2T}}(y' \mid y)$. for each batch, we pass the audio through the s2t pathway and compute the asr loss between the predicted and ground-truth transcriptions. we also pass the ground-truth transcriptions as input through the t2t pathway with paraphrase-based supervision to compute $\mathcal{L}_{\text{PAR}}$. figure 1 illustrates a schematic of our proposed architecture. amps τ: loss function thresholding. we aim at improving the model's performance in noisy regions where the asr loss is high by selectively triggering the paraphrase objective only when the asr loss exceeds a predefined threshold τ. thus, the loss for the system is given by $\mathcal{L} = \mathcal{L}_{\text{ASR}} + \mathbb{1}[\mathcal{L}_{\text{ASR}} > \tau] \cdot \mathcal{L}_{\text{PAR}}$, where τ is a hyperparameter chosen based on asr validation losses. henceforth, amps with the best threshold will be referred to as amps τ. τ values for various experiments are in appendix a. ## experimental setup for all our experiments, we use the seamlessm4t multilingual multimodal model @xcite. the text encoder and decoder modules are initialized using meta's no language left behind (nllb) model @xcite. the speech encoder in seamlessm4t uses wav2vec-bert 2.0 @xcite, which is trained on over a million hours of unlabeled speech data. further model details are in appendix b.1. datasets. the indicvoices dataset @xcite is a large collection of natural speech (74% extempore, 17% conversational, and 9% read) in 22 indic languages. among the languages we chose, marathi, kannada, and malayalam are classified as low-resource by seamlessm4t (communication et al., 2023), while hindi is medium-resource. indicvoices is the only multilingual open-source indian speech corpus containing spontaneous speech and amongst the very few sources published after seamlessm4t's release; this dataset was also chosen to ensure that there was no data leakage between the seamlessm4t training data and the evaluation sets. we also performed experiments on nyanja (a low-resource language from zambia) from the zambezi-voice dataset @xcite. we use roughly 50 hours of (predominantly conversational, henceforth referred to as mixed) training data for each of the four indian languages. for hindi, we also simulate a very low-resource setting with random 5-hour samples of mixed and read training speech. for nyanja, we used 5 hours of training data. 
(for indic languages, our test sets are the validation sets that are part of indicvoices. for nyanja, we use the existing test set.) given the limited amount of training data, we use parameter-efficient finetuning of adapter layers @xcite in the speech encoder and text decoder layers of the seamlessm4t model; more implementation details are in appendix b.2. paraphrasing. we translated the reference transcriptions into english using indictrans-2 @xcite for the indic languages and nllb @xcite for nyanja, before translating them back to their original languages. for the hindi mixed 5-hr setting, we experimented with top-k (k = @xmath1) and nucleus (top-p, p = 0.95) sampling during round-trip translation to produce more diverse paraphrases. we also explored generating paraphrases using the multilingual llm aya-23 @xcite. the exact prompt and other details are in appendices c and d.2. we used round-trip translation-based paraphrases for all the 50-hour experiments due to the poor quality of llm paraphrases for low-resource languages like malayalam. evaluation metrics. the evaluation metrics used were word error rate (wer), meteor, and the f1 score provided by bertscore. more details are provided in appendix e. ## experiments and results improvements from asr to amps τ for the hardest 100 predictions are labeled ∆hard. we see that ∆hard consistently exceeds ∆all, indicating that the most improvement is observed in cases where pure asr performs poorly. this supports the thresholding approach that triggers the paraphrase loss only when pure asr predictions fall below a threshold. from our manual inspection of hindi samples in the hardest-100 subset, we observe examples where pure asr tends to produce acoustically similar but incorrect words, while amps τ correctly identifies the words. for example, pure asr misrecognized "hua" (meaning 'is') as "ugwa" (meaning 'grows') in a hindi example; amps τ gets this example right. ## acknowledgements the authors thank the anonymous reviewers for their constructive feedback that improved the quality of the draft. the last author gratefully acknowledges support from the amazon iitb ai ml initiative.
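the thresholded loss described in the methodology is simple to express in code. the sketch below is ours, not the authors' implementation: the two loss terms stand in for the per-utterance cross-entropy losses of the s2t and t2t pathways of a multimodal model such as seamlessm4t.

```python
# Sketch: AMPS-tau loss, where paraphrase supervision fires only on hard
# utterances (per-example ASR loss above a threshold tau).

import torch

def amps_loss(asr_loss: torch.Tensor,
              par_loss: torch.Tensor,
              tau: float) -> torch.Tensor:
    """L = L_asr + 1[L_asr > tau] * L_par, averaged over the batch.

    asr_loss, par_loss: shape (batch,), per-utterance losses from the
    speech-to-text and text-to-paraphrase pathways respectively.
    """
    gate = (asr_loss.detach() > tau).float()  # trigger only on hard examples
    return (asr_loss + gate * par_loss).mean()

# Toy example with tau picked from ASR validation losses:
asr = torch.tensor([0.4, 2.7, 1.1])
par = torch.tensor([0.9, 1.3, 0.8])
loss = amps_loss(asr, par, tau=1.0)  # paraphrase term fires for examples 2 and 3
```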
| 39,757
|
19
| 2,023
|
The Larger they are, the Harder they Fail: Language Models do not Recognize Identifier Swaps in Python
|
Large Language Models (LLMs) have successfully been applied to code generation tasks, raising the question of how well these models understand programming. Typical programming languages have invariances and equivariances in their semantics that human programmers intuitively understand and exploit, such as the (near) invariance to the renaming of identifiers. We show that LLMs not only fail to properly generate correct Python code when default function names are swapped, but some of them even become more confident in their incorrect predictions as the model size increases, an instance of the recently discovered phenomenon of Inverse Scaling, which runs contrary to the commonly observed trend of increasing prediction quality with increasing model size. Our findings indicate that, despite their astonishing typical-case performance, LLMs still lack a deep, abstract understanding of the content they manipulate, making them unsuitable for tasks that statistically deviate from their training data, and that mere scaling is not enough to achieve such capability.
|
https://aclanthology.org/2023.findings-acl.19
|
## introduction pretrained large language models (llms) are rapidly becoming one of the dominant paradigms for a large variety of language tasks @xcite, including programming code generation and completion @xcite. llms have demonstrated increasing performance with increasing model size on many practical tasks @xcite, including programming tasks @xcite. recently, however, researchers have identified tasks that exhibit the opposite trend, known as inverse scaling. [figure: identifier-swap example, a prompt beginning with a swap statement over builtins such as len and print, followed by `def print_len(x):`] tasks with inverse scaling generally either involve social biases @xcite, where the larger models (arguably correctly) learn undesirable biases from biased training sets, or involve examples of natural language that are highly atypical but still easily understandable by a human @xcite. these tasks may involve unusual discourse pragmatics or they may require reasoning about counterfactual knowledge; however, since they tend to be highly artificial, it could perhaps be argued that they are edge cases which may not represent serious failure modes for practical applications. in this paper we present a novel type of inverse scaling task involving python code generation under a redefinition of default identifiers. this has both practical implications (redefinition of default identifiers is a metaprogramming technique used in popular libraries) and broader scientific implications, as it shows that llms fail to reason about the deep, abstract semantic structure of programming languages, and these flaws are not ameliorated, but may in fact even be worsened, by increasing model size. [figure: task construction pipeline: crawl github, extract functions, swap builtins; heads (prompts), original bodies (bad classes), corrected bodies (good classes)] programming languages have a precisely defined syntax and semantics, which makes them especially suited to automatic analysis and procedural generation. they are scientifically interesting because they can be used for the automatic generation of examples of coding problems and their evaluation against an objective ground truth, whereas most nlp tasks have enough ambiguity to require human annotation in order to produce high-quality examples. furthermore, this research is also of practical importance for software engineering tools that use llms, such as github copilot, foot_0 which are starting to be widely adopted by developers. ## methodology we describe the motivation behind our task (§2.1) and the task itself (§2.2), followed by the way we collected the data for the task (§2.3). we release our dataset as well as the code used to generate it and replicate our experiments foot_1. ## experiments we next describe our experiments with a likelihood calculation of correct and incorrect completions (§3.1) and chat llms (§3.2), and then present a qualitative analysis (§3.3). ## related work recent work sought to characterize the quality of llms on a variety of tasks: big-bench (srivastava et al., 2022) is a large collaboration which resulted in a suite of hard, disparate tasks which were used to evaluate various llms. the study found that scaling can be slower and less smooth than expected by naive scaling laws, and social biases sometimes show inverse scaling, also observed by @xcite. @xcite investigated the effect of example selection in few-shot learning for llms, finding that previous studies generally overestimated model quality due to methodological issues. @xcite attempted to measure the truthfulness of the answers provided by llms on tasks involving real-world knowledge, finding that while larger models tend to provide more informative answers, they also tend to be less truthful. 
however, this effect might be confounded by the dataset having been designed to be specifically adversarial for the largest model evaluated @xcite. @xcite showed that, similar to our case, mathematical article processing is sensitive to semi-invariant symbol replacements. @xcite provide a broad survey of hallucination (the generation of fluent yet incorrect information) by natural language generation models. ## conclusions we explored the ability of large language models to predict the correct continuations of fragments of python programs in scenarios where the correct continuations are statistically uncommon due to the redefinition of identifiers caused by a statement that we included in the prompt. not only do all the tested models fail at this task, but some model families even display inverse scaling: they become worse, rather than better, with increasing model size. these results suggest that llms rely on "shortcut learning", i.e., weak, unstable, mostly lexical correlations in the data, rather than a deep understanding of the semantics of the data (in this case, python code). we believe that our results are important both for a better scientific understanding of the capabilities of llms and for their practical relevance as a core technology for automated code generation tools. future work could investigate scaling effects at larger model sizes, as well as on other programming languages.
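the task construction is easy to illustrate with a hand-written example (the paper itself crawls real functions from github). the swapped pair below and the `log_likelihood` stub are illustrative; the key point is that after the swap, the statistically typical body becomes incorrect and the unusual-looking one becomes correct.

```python
# Sketch: an identifier-swap test case. After the header, the name `print`
# refers to the builtin len, and `len` refers to the builtin print.

SWAP_HEADER = "len, print = print, len\n\n"
PROMPT = SWAP_HEADER + "def print_len(x):\n"

# `print(len(x))` is now wrong: the names are exchanged.
ORIGINAL_BODY = '    """Print the length of x."""\n    print(len(x))\n'   # incorrect
# `len(print(x))` now actually prints the length of x.
CORRECTED_BODY = '    """Print the length of x."""\n    len(print(x))\n'  # correct

def log_likelihood(model, text: str) -> float:
    """Stand-in for scoring `text` under an LLM (sum of token log-probs)."""
    raise NotImplementedError

def prefers_correct(model) -> bool:
    # A model that tracks the swap should assign the corrected body a
    # higher likelihood than the statistically typical original body.
    return (log_likelihood(model, PROMPT + CORRECTED_BODY)
            > log_likelihood(model, PROMPT + ORIGINAL_BODY))
```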
| 23,208
|
722
| 2,023
|
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
|
In this position paper we argue that the classical evaluation on Natural Language Processing (NLP) tasks using annotated benchmarks is in trouble. The worst kind of data contamination happens when a Large Language Model (LLM) is trained on the test split of a benchmark, and then evaluated in the same benchmark. The extent of the problem is unknown, as it is not straightforward to measure. Contamination causes an overestimation of the performance of a contaminated model in a target benchmark and associated task with respect to their non-contaminated counterparts. The consequences can be very harmful, with wrong scientific conclusions being published while other correct ones are discarded. This position paper defines different levels of data contamination and argues for a community effort, including the development of automatic and semi-automatic measures to detect when data from a benchmark was exposed to a model, and suggestions for flagging papers with conclusions that are compromised by data contamination.
|
https://aclanthology.org/2023.findings-emnlp.722
|
## introduction at the core of nlp as a discipline there is rigorous evaluation on different tasks. the experimental protocols involve strict control over the data, especially test data, which needs to be totally unseen during development, but also over training and development data. this is essential to assess the performance of a model in zero-shot, few-shot, or fully supervised settings. since fine-tuning and prompting of large language models (llms) became commonplace @xcite, it has been increasingly difficult to enforce those strict protocols. pretraining llms is expensive, and therefore, most of the time, researchers use llms trained by third-party entities @xcite, which are agnostic to the target tasks where those llms are going to be used. with the growing scale of llms @xcite, the need for data has been met by crawling the internet, reaching trillions of tokens @xcite, making it very hard to know whether a specific benchmark was used to train the llm. this applies to all models, even those that document the source of the data at a high level, but especially to closed models with no or insufficient documentation. data contamination has two consequences. the first is that the performance of an llm evaluated on a benchmark it already processed during pre-training will be overestimated, causing it to be preferred with respect to other llms. this affects the comparative assessment of the quality of llms. the second is that papers proposing scientific hypotheses on certain nlp tasks could be using contaminated llms, and thus make wrong claims about their hypotheses and invalidate alternative hypotheses that could be true. this second consequence has an enormous negative impact on our field and is our main focus. there are several measures that the community could take. a possible solution would be to avoid all research involving datasets whose test data has been published, and focus on datasets where the test data labels are not public. this solution would severely limit the number of nlp tasks for which benchmarks exist, at least until new benchmarks that avoid data leakage are produced. @xcite presents preventative strategies to avoid contamination in the future. in this position paper, we propose a complementary line of action which seeks to measure and document data contamination cases, specifying the llm, the benchmark, and the evidence supporting contamination. this solution involves a registry of contamination cases, collaborative manual work, and research on automatic approaches. in addition, conferences should devise mechanisms to ensure that papers do not include conclusions involving contamination, and to flag past work where contamination has been discovered after publication. the paper starts by introducing background, followed by a definition of data contamination, contamination at different steps, methods to measure data contamination, and a call for action. ## background detection of contamination cases has traditionally been done by directly analyzing the training data @xcite, but the current scale of pre-training data makes this difficult @xcite. without proper documentation and search tools like roots @xcite, it is very difficult for any researcher to actually know whether their datasets are compromised in a given model. more recently, this task became even harder, as the best-performing llms are deployed as products and, therefore, their training corpora are kept secret. 
in this case, it has been shown that the high memorization abilities of llms can be used to generate portions of the training texts @xcite. using this memorization property, @xcite show that chatgpt generates portions of popular nlp benchmarks. furthermore, llm memorization has been studied in data-leakage scenarios @xcite. regarding data contamination cases, @xcite exposed that the c4 corpus @xcite, a corpus used to pre-train several llms such as t5 @xcite, contained the test splits of several benchmarks that were crawled from github. moreover, brown et al. (2020) acknowledged a bug in their filtering script that caused the contamination of several benchmarks during gpt-3 training. furthermore, openai (2023) stated that parts of the big-bench @xcite benchmark were inadvertently mixed into the training set, enough to stop them from evaluating the model on it. they also mention that they included parts of the training sets of math @xcite and gsm-8k @xcite as training data to improve mathematical reasoning @xcite. therefore, the performance results reported for gsm-8k cannot be taken as zero-shot results when compared to other models. recently, @xcite reported that several benchmarks have already been compromised in chatgpt, including the popular conll2003 @xcite. there are several preprints that evaluate chatgpt on conll03 @xcite @xcite and at least one conference paper published at acl 2023 that evaluates @xcite and @xcite on the same benchmark @xcite. appendix a shows evidence of data contamination for those llms, and casts doubts on the conclusions of those papers. ## defining data contamination in general, data contamination refers to any breach in the strict control of datasets required by the experimental protocol. in this paper, we focus on the specific case where an llm has processed the evaluation benchmark during its pre-training. however, different types of contamination exist and each of them has different implications. in this section, we present three types of contamination: guideline, text, and annotation. guideline contamination happens when the annotation guidelines for a specific dataset are seen by the model. usually, for specialized annotations, highly detailed guidelines are required. the guidelines can usually be found publicly on the internet, even for datasets that are not public or require buying a license for their use (ace05 @xcite, for example). the more detailed the guidelines, the more information and examples they provide. a model aware of the guidelines for a specific task or dataset has an advantage over a model without such information. we should take guideline contamination into account, especially in zero- and few-shot evaluations. raw text contamination happens when the original text (prior to annotation) is seen by the model. examples of this type of contamination are datasets based on wikipedia texts. wikipedia is commonly used as a source of pre-training data, but it is also a frequent source of text for creating new datasets. multiconer 2 @xcite, a named entity recognition dataset based on wikipedia links and wikidata information, is an example of this phenomenon. models that have already seen wikipedia in its original form (including the markup annotations) have more information to better identify part of the annotations (the entity boundaries) of the dataset. as pointed out by @xcite, other datasets built from the web, such as imdb @xcite and cnn/dailymail @xcite, can also be compromised. 
this kind of contamination should be taken into account when developing automatically annotated datasets. annotation contamination happens when the annotations (labels) of the target benchmark are exposed to the model during training. depending on the splits of the benchmark that have been exposed, we have the following cases: (1) when the evaluation split is involved, the experiment is completely invalidated. this is the most harmful level of contamination. (2) when the train or development splits are involved, this does not affect comparisons with other models developed using those same splits, but it does invalidate conclusions claiming zero-shot or few-shot performance. ## contamination on different steps currently, the standard procedure to train and deploy language models has three main steps: pre-training a language model; fine-tuning the model to follow instructions and/or align with human feedback; and an iterative improvement step after deployment. data contamination does not only occur in the pre-training step of llms, but can also occur later in the training pipeline. ## measuring data contamination for the reasons already mentioned, it is necessary to measure existing data contamination cases and to document relevant contamination evidence. in order to achieve this goal, we differentiate two cases. in the first case, we have open models with public access to all the training data, including text used in pre-training but also, if the llm was trained on them, instruction-tuning datasets and deployment datasets. in the second case, we have closed models for which there is no access to training data. ## call for action we want to encourage the nlp community to: (1) develop automatic or semi-automatic measures to detect when data from a benchmark was exposed to a model, and (2) build a registry of data contamination cases, including the evidence for the contamination. as the problem affects our entire field, we also want to encourage the community to participate in workshops related to this topic, such as the 1st workshop on data contamination. we think that the ideas that will arise from this community will play an important role in future nlp evaluations. ## limitations in this paper, we address the problem of data contamination that occurs when evaluating llms on standard academic benchmarks. we are aware that other issues may exist in current evaluations, but they are out of the scope of this position paper. regarding our proposed solutions, we are aware that these are early-stage solutions and that the proposed effort is very challenging; we therefore call for further discussion and research on topics related to this issue.
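for the open-model case described above, one simple automatic measure is n-gram overlap between a benchmark's test instances and a searchable pre-training corpus. the sketch below is a minimal illustration of that idea; the n-gram order and overlap threshold are illustrative choices, not values proposed in the paper.

```python
# Sketch: flag benchmark test instances whose n-grams largely appear
# verbatim in a pre-training corpus (only feasible for open models).

from typing import Iterable, Set

def ngrams(text: str, n: int = 8) -> Set[str]:
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def contamination_rate(test_instances: Iterable[str],
                       corpus_docs: Iterable[str],
                       n: int = 8,
                       overlap_threshold: float = 0.5) -> float:
    """Fraction of test instances whose n-grams mostly occur in the corpus."""
    corpus_grams: Set[str] = set()
    for doc in corpus_docs:
        corpus_grams |= ngrams(doc, n)
    flagged = total = 0
    for inst in test_instances:
        grams = ngrams(inst, n)
        total += 1
        if grams and len(grams & corpus_grams) / len(grams) >= overlap_threshold:
            flagged += 1
    return flagged / max(total, 1)
```

note that this only detects verbatim text overlap; annotation or guideline contamination, as defined above, requires different evidence.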
| 24810
|
28
| 2024
|
Findings of the AmericasNLP 2024 Shared Task on Machine Translation into Indigenous Languages
|
This paper presents the findings of the third iteration of the AmericasNLP Shared Task on Machine Translation. This year’s competition features eleven Indigenous languages found across North, Central, and South America. Six teams participate with a total of 157 submissions across all languages and models. Two baselines – the Sheffield and Helsinki systems from 2023 – are provided and represent hard-to-beat starting points for the competition. In addition to the baselines, teams are given access to a new repository of training data which consists of data collected by teams in prior shared tasks. Using ChrF++ as the main competition metric, we see improvements over the baseline for 4 languages: Chatino, Guarani, Quechua, and Rarámuri, with performance increases of up to 4.2 ChrF++ over the best baseline. In this work, we present a summary of the submitted systems, results, and a human evaluation of system outputs for Bribri, which consists of both (1) a rating of meaning and fluency and (2) a qualitative error analysis of outputs from the best submitted system.
|
https://aclanthology.org/2024.americasnlp-1.28
|
## introduction

though the field of natural language processing (nlp) has seen a steep increase in interest and impressive performance improvements over the past decade, a large performance gap still remains between a handful of so-called "high-resource," mostly colonial, languages and the remaining majority of the world's languages @xcite . the indigenous languages of the americas exemplify this reality, representing nearly 15% of the world's linguistic diversity @xcite and yet, until recently, receiving little attention in nlp research. the americasnlp shared task on machine translation (mt), now in its third iteration @xcite , is focused on pushing the performance of mt on this group of languages through two main avenues: by applying modeling and architectural advancements, and through the creation of new linguistic resources which support the training and evaluation of these systems. this year's shared task continues to focus on the eleven indigenous languages from the last competition. while this year's competition does not feature new data for evaluation, competitors are given access to a new repository of training data which extends the original set of parallel examples with additional data collected by teams in prior years. this repository represents the first step in creating a new living source of data which can grow through contributions from teams participating in future iterations of the shared task. this year's competition also features two baselines: the university of sheffield @xcite and university of helsinki @xcite systems, which each achieved the best performance for a subset of languages in 2023 @xcite . these baselines are strong and hard-to-beat; across 157 submissions from 6 different teams, we see improvements for only 4 of the 11 languages: chatino, guarani, quechua, and rarámuri. as two of these four languages are the relatively highest-resourced, this finding may indicate that we are approaching a plateau in performance gains achievable purely through modeling and architectural approaches; therefore, a focus on collecting additional training data may yield the most future improvements. the paper is structured as follows. in section 2, we provide a brief overview of the data and languages provided by the organizers at the beginning of the competition. section 3 contains summary descriptions of the approaches used by each team. section 4 discusses the results of the competition. in section 5, we conduct a human evaluation of system outputs for bribri. in the first part of this evaluation, we follow the prior shared tasks in quantitatively rating a sample of outputs on two axes: meaning and fluency. for the second part, we conduct a qualitative error analysis, comparing baseline systems to the best submitted system. in section 6, we conclude with a brief discussion of future directions in improving mt quality for indigenous languages of the americas.

## data and languages

the shared task features 11 indigenous languages of the americas. the language direction we are interested in is from spanish into the low-resource language. we use the americasnlp 2021 data for development and evaluation. it consists of a multi-way parallel dataset created by translating the spanish xnli test set into 10 languages of the americas (asháninka, aymara, bribri, guarani, nahuatl, otomí, quechua, rarámuri, shipibo-konibo, and wixarika). the task also includes chatino, for which the data comes from mexican court proceedings. chatino was introduced as a surprise language in last year's edition @xcite .
for an in-depth review of development and evaluation data, please refer to @xcite and @xcite . for training data, besides the data used in previous editions, this year we include the data collected by de @xcite as part of their helsinki-nlp submission. this consists of extra data, made up of different sources listed in their system description paper, as well as syn, which refers to synthetic data obtained through backtranslation. we publicly release the training and development data in our github repository.

## metrics

for evaluation, we use the automatic metric chrf++ @xcite as implemented in sacrebleu @xcite . it is an overlap-based metric at the character level, which is adequate for our task since most languages are morphologically rich. while teams are not required to submit a system for all languages, the final score for each submission (the chrf++ column in the overall results) covers all language pairs.

## baselines and submitted systems

in this section, we describe the 2024 baseline systems and each team's approach. we present a summary of all approaches below.

## results

the overall ranking for the shared task can be found in the overall results. first place in the shared task, across all eleven language pairs, is awarded to the nordicalps team (submission 1). their overall score significantly surpasses those of the second and third place teams, dc_dmv and uedin, respectively. notably, only three of the six teams submit entries for all eleven languages. nordicalps secures the top performance on five language pairs (spanish to asháninka, chatino, wixarika, nahuatl, and otomí), although they only exceed the baseline for chatino. similarly, the second-ranked team, dc_dmv, leads for four language pairs (spanish to aymara, bribri, shipibo-konibo, and rarámuri) but surpasses the baselines solely for rarámuri. these results highlight the importance of meticulous pipeline design for data preprocessing and segmentation, as implemented by nordicalps, and the use of large multilingual models (nllb) for finetuning, as employed by dc_dmv, for achieving robust results across most language pairs. finally, the bsc team, which participates in only two language pairs, spanish to guarani and quechua, achieves the highest performance on both, surpassing the established baselines. their strategic focus on finetuning a large multilingual model (nllb) and gathering new data for these languages is key to their success.

## human evaluation

following prior americasnlp shared tasks @xcite , we also conduct a human evaluation of system outputs, focusing on bribri.

## future directions

in this section, we briefly discuss several possible future directions for the americasnlp shared task, given the results from the current as well as prior competitions. evaluation data: one bottleneck in the advancement of language technologies for low-resource, and particularly indigenous, languages is the availability of evaluation data. high quality, gold standard data in target low-resource languages supports many important roles in the nlp research pipeline. first, and most importantly, it is the single resource which is necessary for experimentation; without held out data for evaluation, there cannot be any idea of how well a system performs for a given language. second, the domain and source of data are important, as, over time, models are created to perform best on the data they are evaluated on.
particularly for low-resource languages, where there may not be great diversity in available data, it becomes vital to consider what data is used for evaluation. the table below (displaced here from the human evaluation of bribri) lists, for each mean score (ms) level, a reference sentence and a system output:

| ms | bribri | english translation |
|---|---|---|
| 4 | t ö, be' én a iàna tö ie' dör bua'ë. | yes, you know she was great. |
| 4 | t ö, be' wa i ujchen tö ie' bák bua'. | yes, you know[sic] that she was good. |
| 3 | hum, uk öki sa' mìneyal ù páli a. | hm, afterwards we moved to a new house. |
| 3 | um, e' uk öki sa' e' mìne ù pâali a. | hm, after that we went us to a new[sic] house. |
| 2 | ramona ta ye' ujté skàne | i spoke to ramona again. |
| 2 | ye' ujté ramona tamalé. | i spoke, ramona, [cuajiniquil] fruit [sic]. |
| 1 | ye' tso' kanè maúk èwewa semana i ët wa. | i am finishing with my project for next week. |
| 1 | ye' tso' kanèbalök ènuk móköl i ëk. | i am working[sic]. finish. other[sic] weapon. |

additional training data: this iteration of the shared task marks the first where performance did not increase for the majority of languages in the shared task. of the four languages which did see improvements, two are relatively high-resource and have recently been included in large pretrained models @xcite . as such, additional data for training likely plays a large role in improving the performance for these languages. while teams continue to find new digital data for training, other non-digital sources may need to be considered for future systems. language identification: one of the main bottlenecks for gathering additional data is that every process of collecting resources from online sources starts with a good language identifier. investing efforts into developing a language identification system for the shared task languages could boost the collection of additional training data. new language pairs: the performance of low-resource language pairs in multilingual mt models can benefit from incorporating additional data from other language pairs. furthermore, our goal is to expand the scope of our shared task in future editions to include more underserved languages of the americas. to achieve this, we plan to engage more researchers who have developed and published resources for the indigenous languages of the americas, both at our workshop and in other venues.

## conclusion

in this work, we present the results of the americasnlp 2024 shared task on machine translation. overall, 6 teams participated in the shared task, and submitted a combined 157 submissions across all eleven supported languages. prior to the start of the competition, the organizers provided two strong baselines and a training data set which includes data collected from prior submissions. while there were improvements for four languages in this year's shared task, the majority of languages did not see any performance gains over the baselines, which were the strongest systems from 2023.
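since chrf++ as implemented in sacrebleu is the competition metric, scoring a set of system outputs reduces to a few lines. the snippet below is a minimal sketch with toy placeholder strings; `word_order=2` is what turns chrf into chrf++ (character n-grams plus word uni- and bigrams).

```python
import sacrebleu

# toy system outputs and their aligned references (placeholder strings)
hypotheses = ["translation of sentence one", "translation of sentence two"]
references = [["reference for sentence one", "reference for sentence two"]]

# chrF++: character n-grams (default order 6) plus word 1- and 2-grams
score = sacrebleu.corpus_chrf(hypotheses, references, word_order=2)
print(f"chrF++ = {score.score:.2f}")
```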
| 28265
|
32
| 2015
|
Using Recurrent Neural Networks for the Cross-lingual Projection of Morphosyntactic Tags from a Parallel Corpus
|
The construction of linguistic analysis tools for under-resourced languages is limited, among other things, by the lack of annotated corpora. In this article, we propose a method to automatically build analysis tools via cross-lingual projection of linguistic annotations using parallel corpora. Our approach uses no other sources of information, which makes it applicable to a wide range of under-resourced languages. We propose to use recurrent neural networks to project annotations from one language to another (without using any word-alignment information). As a first step, we explore the task of morphosyntactic (part-of-speech) tagging. Our method, combined with a basic annotation projection method (using word-to-word alignment), yields results comparable to the state of the art on a similar task.
|
https://aclanthology.org/2015.jeptalnrecital-court.32
|
## introduction

linguistic annotation of resources consists in adding interpretive information to the original raw data @xcite . this information can be terminological, lexical, morphological, syntactic or semantic in nature, and the linguistic resources can be lexicons, dictionaries, dialogue transcripts or text corpora @xcite . these linguistic resources are annotated by linguistic analysis tools and used in many applications: cross-lingual information retrieval, text mining, information extraction, machine translation, etc. the literature has shown that the best-performing linguistic analysis tools are those built for the few (richly resourced) languages that have the manually annotated linguistic resources required by supervised learning algorithms. however, the vast majority of (under-resourced) languages do not have such annotated resources. building these resources manually is slow and costly, making the use of supervised approaches difficult or even impossible. in this article, we are interested in inducing adequate linguistic resources at low cost for under-resourced languages, as well as in automatically building linguistic analysis tools for these languages. to this end, we propose to use approaches based on cross-lingual annotation projection. these revolve around the exploitation of multilingual parallel corpora between a richly resourced source language (with linguistic analysis tools) and an under-resourced target language. starting from a parallel corpus whose source-language texts are already annotated, the target-language texts are annotated by projecting the annotations using automatic word-level alignment techniques. although promising, these unsupervised approaches perform well below supervised methods. for example, for a supervised part-of-speech tagging task, @xcite obtains an average accuracy of 95.2% for 22 richly resourced languages, whereas the unsupervised part-of-speech taggers built by @xcite yield an average accuracy of 83.4% for 8 european languages. in this article, we explore the possibility of employing recurrent neural networks (rnn) to induce multilingual linguistic analysis tools. as a first step, we address the possibility of using them as part-of-speech taggers. to do so, we use a parallel corpus between a well-resourced language and a less-resourced one, in order to assign a common representation to the words of the parallel corpus (belonging to the vocabularies of the source and target languages), obtained from a sentence-level alignment. this common representation makes it possible to learn -from a single labelled language among n -a single multilingual tagger able to process n languages. after a brief state of the art presented in section 2, our model is described in section 3 and its evaluation is presented in section 4; section 5 concludes our study and presents our future work. this method was subsequently used successfully in several other works. thus, @xcite showed that it was possible to learn good-quality part-of-speech taggers in this way.
in this vein, @xcite achieved even better performance by combining the information obtained through projection with information extracted from a dictionary that associates each (target-language) word with the set of allowed part-of-speech tags, and then using weakly supervised learning methods. cross-lingual projection has also been successfully adapted to transfer other types of annotations: for example, the projection of word-sense annotations carried out by @xcite , and semantic role annotation for german by cross-lingual projection from the english-german language pair @xcite , whose genericity was more specifically evaluated in @xcite . moreover, this method enables the multilingual portability of applications that use linguistic annotations; @xcite used it for the portability of a spoken language understanding system across different languages or domains. in these approaches, the source-side annotations are projected onto the target side through the automatic word-level alignments of the parallel corpus. this partial and noisy annotation of the target texts is then used by robust learning methods. however, the performance of word-level alignment algorithms is not always satisfactory (in terms of the quality of the predicted alignments), and the word-level alignment step (an alignment is not always 1-1; it can be 1-n, n-n, etc.) is currently a limiting factor for linguistic annotation projection @xcite . for this reason, our approach uses a parallel corpus aligned at the sentence level only and applies no pre-processing such as automatic word alignment, which is a source of errors and noise.

## proposed method

to address the limitations related to the word-to-word alignment step over the sentences of the parallel corpus, we propose not to take into account the noisy information produced by this alignment, but to represent this information intrinsically in the architecture of the neural network. in this initial work, we implement a multilingual part-of-speech tagger based on recurrent neural networks, and we show that its performance is close to the state of the art of other unsupervised part-of-speech taggers. before describing our multilingual part-of-speech tagger based on recurrent neural networks (rnn), we first describe the simple projection approach against which we will compare ourselves (and which will also be combined -in the experiments that follow -with the method we propose).

## conclusion

in this article, we presented an approach using recurrent neural networks as multilingual part-of-speech annotators (unsupervised for the target languages). this approach only requires a parallel corpus and a pre-existing part-of-speech annotator for the source language. although our initial results are positive, they need to be improved. in our future work, we therefore plan to use a better representation for oovs. we also plan to use a similar technique for more complex nlp tasks (e.g., word-sense, named-entity and semantic-role annotation).
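for concreteness, the simple projection baseline described above can be sketched in a few lines; this is a minimal illustration under our own naming, not the authors' code. each target word receives the tag of the source word it is aligned to, and unaligned words keep a placeholder tag.

```python
def project_tags(source_tags, alignment, target_len, default="X"):
    """Copy the POS tag of each aligned source word onto its target word.
    alignment: iterable of (src_idx, tgt_idx) pairs from an automatic word aligner.
    Unaligned target words keep a placeholder tag."""
    target_tags = [default] * target_len
    for src_idx, tgt_idx in alignment:
        target_tags[tgt_idx] = source_tags[src_idx]
    return target_tags

# toy example: a tagged English source projects onto a 4-word target sentence
src_tags = ["PRON", "VERB", "DET", "NOUN"]       # e.g. "I eat an apple"
alignment = [(0, 0), (1, 1), (2, 2), (3, 3)]     # e.g. from GIZA++ or fast_align
print(project_tags(src_tags, alignment, target_len=4))
```

the rnn-based method replaces exactly this step: instead of copying tags through noisy word-level links, the shared sentence-level representation lets a single tagger serve all languages.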
| 939
|
3
| 2023
|
Keeping an Eye on Context: Attention Allocation over Input Partitions in Referring Expression Generation
|
In Referring Expression Generation, model inputs are often composed of different representations, including the visual properties of the intended referent, its relative position and size, and the visual context. Yet, the extent to which this information influences the generation process of black-box neural models is largely unclear. We investigate the relative weighting of target, location, and context information in the attention components of a Transformer-based generation model. Our results show a general target bias, which, however, depends on the content of the generated expressions, pointing to interesting directions for future research.
|
https://aclanthology.org/2023.mmnlg-1.3
|
## introduction

context is crucial in multimodal language generation tasks such as referring expression generation (reg), as descriptions for visible entities depend not only on their own appearance but also on their surroundings (e.g. @xcite ). for reg, this is especially evident, as the same expression can unambiguously describe an object in one context but be misleading in others @xcite . to this end, it has become common practice to provide neural generation models for multimodal reg not only with visual representations for the target itself but also with information about its location and size and the visual context it appears in (see figure 1 ). however, due to their black-box nature, it is not entirely clear to what extent state-of-the-art neural reg models take all of these representations into consideration. while ablation studies show how context information contributes to model performance @xcite , they provide limited insight into how it is processed and to what extent it is relevant for e.g. lexical decisions. similar questions arise in other vision & language (v&l) tasks such as image captioning, where recent work has looked into analyzing the internal attention mechanisms of generation models (e.g. @xcite ). however, as the respective models usually take global images as input, analyses are mostly concerned with attention distribution within those representations rather than across different parts of the input. for reg, some authors use attention heatmaps as a method of model introspection @xcite @xcite , but without analyzing the patterns in detail. as a first step towards deeper investigations of how contextual information is processed in reg models, we quantify how attention is allocated over the partitioned inputs in a simple reg model. in more detail, we examine the model's relative focus on different parts of the input during inference, i.e. representations of the visual appearance of the referential target, its location in the image and the visual context it appears in. we analyze the attention weights on these input partitions both globally and for a subset of generated tokens to see if the weighting is affected by the expression content. to the best of our knowledge, no dedicated studies have yet been conducted on how attention is allocated across input partitions in reg models (but see @xcite , who discuss this for a single example). our results indicate that contextual information is utilized by the model in linguistically meaningful ways, highlighting promising directions for further research. in recent years, advances in neural modeling and vision and language corpora such as refcoco @xcite have enabled reg set-ups based on real-world images. neural reg models generally resemble architectures from e.g. image captioning, but are adapted in different ways to increase the discriminativeness of generated expressions @xcite . this includes simulations of listener behaviour embedded in training objectives @xcite , comprehension modules @xcite , reinforcement agents @xcite or decoding strategies @xcite , but also supplementing model inputs with additional information. for this, some works propose visual comparisons to encode differences in appearance between targets and context objects @xcite @xcite @xcite , whereas others directly use representations of the global image as context @xcite @xcite @xcite . in addition to visual context, many approaches provide their models with the relative position and size of the target in the image @xcite @xcite @xcite @xcite @xcite .
to be used as model inputs, different representations are usually concatenated, i.e. the inputs are composed of partitions of visual target and context features as well as location information. attention analysis in v&l: in recent years, attention mechanisms @xcite have become a cornerstone in generative v&l tasks like image captioning (@xcite @xcite @xcite @xcite , among many others; cf. zohourianshahzadi and kalita, 2021). despite some cautious remarks @xcite , attention is used as a method for model introspection (e.g. clark et al. 2019; voita et al. 2019; vig 2019 for text, and cao et al. 2020, 2022; ilinykh and dobnik @xcite for v&l settings). while recent reg approaches build on transformer @xcite architectures with attention as the key component @xcite , the inner workings of the attention modules have only been studied in qualitative terms @xcite @xcite . here, we perform a quantitative analysis of attention allocation in a simple transformer-based reg model.

## conclusion and future directions

in this paper, we investigated attention allocation across input partitions for a simple reg model. our results show that the model attends to all sources of information, albeit with a general bias towards the target. in addition, our models show systematic differences between encoder and decoder attention across datasets, as well as sensitivity to the meaning of the generated tokens. importantly, this study only represents a small step toward a more thorough understanding of the significance of different types of information in reg. one limitation of this work is that our models are not explicitly optimized for the general objective of the reg task, i.e., unambiguously identifying the referential target (see limitations section). consequently, as accounting for possible distractors is crucial for this, we see great potential in investigating how different approaches to increase the pragmatic informativeness of generated expressions (cf. section 2) affect the relative weighting of input partitions. along with this, given the multifaceted role of situational context for reg @xcite , future work should take a closer look at attention allocation over semantic units in the visual context, e.g. to see whether objects with certain classes or relations to the target are weighted more or less during generation.
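to make the analyzed quantity concrete, the attention mass each input partition receives can be computed as in the sketch below; the partition boundaries and tensor shapes are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def partition_attention(attn, partitions):
    """attn: (num_heads, tgt_len, src_len) cross-attention weights;
    partitions: {name: (start, end)} index ranges over the concatenated input.
    Returns the attention mass falling on each partition."""
    head_avg = attn.mean(axis=0)          # average over heads  -> (tgt_len, src_len)
    step_avg = head_avg.mean(axis=0)      # average over tokens -> (src_len,)
    return {name: float(step_avg[s:e].sum()) for name, (s, e) in partitions.items()}

# assumed input layout: [target visual feats | location feats | context feats]
partitions = {"target": (0, 49), "location": (49, 54), "context": (54, 103)}
attn = np.random.rand(8, 12, 103)                 # stand-in for real model weights
attn /= attn.sum(axis=-1, keepdims=True)          # normalize like a softmax output
print(partition_attention(attn, partitions))
```

restricting the average to specific generated tokens (e.g. color vs. location words) yields the content-dependent comparison reported in the paper.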
| 25,776
|
101
| 2,024
|
TagDebias: Entity and Concept Tagging for Social Bias Mitigation in Pretrained Language Models
|
Pre-trained language models (PLMs) play a crucial role in various applications, including sensitive domains such as the hiring process. However, extensive research has unveiled that these models tend to replicate social biases present in their pre-training data, raising ethical concerns. In this study, we propose the TagDebias method, which debiases a dataset using type tags and then fine-tunes PLMs on this debiased dataset. Experiments show that our proposed TagDebias model, when applied to a ranking task, exhibits significant improvements in bias scores.
|
https://aclanthology.org/2024.findings-naacl.101
|
## introduction

pre-trained language models (plms) are extensively utilized in various natural language processing tasks, acquiring a significant amount of knowledge during their pre-training phase. research has highlighted that these models often inherit substantial social biases present in their pre-training corpora, which may subsequently emerge in the outcomes of downstream tasks @xcite . it is therefore crucial to identify and mitigate social bias in these models. there are different ways to mitigate social bias both in datasets and pre-trained models. state-of-the-art approaches show effective debiasing methods in plms such as increasing dropout regularization @xcite , projection-based debiasing @xcite , and self-debias @xcite as a post-hoc method to discourage models from generating toxic sentences. other data-based bias mitigation methods such as counterfactual data augmentation (cda) @xcite or biased terms removal (scrubbing) @xcite have been proposed but exhibit some limitations. producing counterfactual data and fine-tuning pre-trained language models (plms) on an augmented dataset is resource-consuming and, in some cases, impossible. for example, generating a counterfactual example for the sentence "women gave birth" is impossible. scrubbing biased words removes contextual associations within the plms and can decrease model performance in downstream tasks. in this paper, we propose a framework for mitigating bias in datasets and pre-trained language models by tagging the biasinbios dataset @xcite , which is designed to examine gender-profession social biases. more specifically, we propose an approach named "tagdebias" to debias datasets by tagging gender indicator terms. the idea is to replace gender terms with semantic types that represent neutral terms for binary genders (female and male). we then fine-tune pre-trained language models on the debiased dataset, teaching them that each gender term corresponds to the same neutral tag. the proposed method, "tagdebias," has the advantage of not requiring counterfactual data while maintaining model performance compared to the scrubbing method. furthermore, it outperforms data-based bias mitigation methods, specifically scrubbing and counterfactual data augmentation. to assess the fairness of the debiased models, we test our tagdebias model on a ranking task: ranking biographies given a target job title. in this study, we will answer the following research questions: q1: does tagging stereotypical gender terms mitigate social bias in plms? q2: does tagging stereotypical gender terms worsen plms' performance? q3: does our proposed tagdebias model have a fairer ranking compared to base and scrubbed plms? in response to q1, we assess models with various tagging subsets using fairness classification metrics. our findings reveal that the "gender-specific-term" model surpasses both the initial and scrubbed models. evaluating model performance on the biasinbios dataset (q2), we observe that the tagging approach does not adversely affect model performance. finally, the tagdebias model demonstrates a substantial enhancement in fairness rankings, exhibiting an improvement over both the initial and scrubbed models (q3).
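to make the core idea tangible, the sketch below replaces gender indicator terms with a neutral type tag before fine-tuning. the term list and the tag inventory here are illustrative assumptions; the paper's exact tag set is not reproduced.

```python
import re

# assumed (illustrative) lexicon of gender indicator terms -> neutral type tag
GENDER_TERMS = {
    "she": "<PERSON>", "he": "<PERSON>",
    "her": "<PERSON>", "his": "<PERSON>", "him": "<PERSON>",
    "woman": "<PERSON>", "man": "<PERSON>",
    "mrs": "<TITLE>", "mr": "<TITLE>", "ms": "<TITLE>",
}
PATTERN = re.compile(r"\b(" + "|".join(GENDER_TERMS) + r")\b", re.IGNORECASE)

def tag_debias(text: str) -> str:
    """Replace gender indicator terms with neutral type tags; the resulting
    debiased text is what the PLM is fine-tuned on."""
    return PATTERN.sub(lambda m: GENDER_TERMS[m.group(0).lower()], text)

print(tag_debias("She is a nurse and he is an engineer."))
# -> "<PERSON> is a nurse and <PERSON> is an engineer."
```

because both "she" and "he" map to the same tag, the fine-tuned model can no longer associate the profession with a binary gender term, which is the intended debiasing effect.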
| 31145
|
36
| 2024
|
That’s Optional: A Contemporary Exploration of “that” Omission in English Subordinate Clauses
|
The Uniform Information Density (UID) hypothesis posits that speakers optimize the communicative properties of their utterances by avoiding spikes in information, thereby maintaining a relatively uniform information profile over time. This paper investigates the impact of UID principles on syntactic reduction, specifically focusing on the optional omission of the connector “that” in English subordinate clauses. Building upon previous research, we extend our investigation to a larger corpus of written English, utilize contemporary large language models (LLMs), and extend the information-uniformity principles with the notion of entropy to estimate UID manifestations in the use case of syntactic reduction choices.
|
https://aclanthology.org/2024.acl-short.36
|
## introduction

exploiting the expressive richness of languages, speakers often convey the same messages in multiple ways. a body of research on uniform information density (uid) puts forward the hypothesis that speakers tend to optimize the communicative effectiveness of their utterances when faced with multiple options for structuring a message. the uid hypothesis @xcite @xcite suggests that speakers tend to spread information evenly throughout an utterance, avoiding large fluctuations in the per-unit information content of an utterance, thereby decreasing the processing load on the listener. the uid hypothesis has been used as an explanatory principle for phonetic duration @xcite , the choice between short and long forms of words that can be used interchangeably, such as "info" and "information" @xcite , and word order patterns @xcite @xcite . our work studies how uid principles affect the phenomenon of syntactic reduction -the situation where a speaker has the choice of whether to mark a subordinate clause in a sentence with the optional subordinate conjunction (sconj) "that" or leave it unmarked, as in "my daughter mentioned [that] he looked good". the only study that tested the uid hypothesis computationally in the context of syntactic reduction is @xcite , followed by @xcite , who studied the effect of multiple factors on the speaker's choice of explicit or implicit "that" conjunction. investigating sentences with a main clause (mc, e.g., "my daughter mentioned") and a subordinate clause (sc, e.g., "[that] he looked good"), connected by the optional sconj, the authors found that uid optimization was the most prominent factor affecting a speaker's choice of "that" omission. specifically, @xcite investigated 6700 sentences extracted from the switchboard spoken english dataset, and operationalized the uid principle by computing the surprisal (non-predictability) of the sc opening word (sc onset) using a statistical bigram language model computed from the corpus itself. our work studies the role of the uid principle in syntactic reduction in multiple differing ways. first, we extend the investigation to a much larger corpus of informal written english collected from social media. second, we use contemporary large language models (llms) to estimate the operationalizations of information uniformity in syntactic reduction, supporting the robustness of our findings. finally, inspired by the information-theoretic nature of uid and prior art @xcite , we extend the sc onset surprisal uid manifestation with the notion of sc onset entropy -the information entropy of the llm distribution over the sc opening word, conditioned on the main clause -a factor that turns out to have a complementary and significant effect. the contribution of this work is, therefore, twofold: first, we collect and release a large and diverse corpus of nearly 100k sentences, where main and subordinate clauses are connected by the optional sconj "that". second, we go above and beyond prior work by using transformer-based llms @xcite , thereby providing sound empirical evidence for uid principles associated with the syntactic reduction decision, shedding new and interesting light on the manifestation of uid in spontaneous written language.

## methodology

we define a set of factors that were previously found to affect syntactic reduction choices @xcite , and further study the magnitude of their predictive power by casting the use case as a classification scenario.
we harness the power of contemporary llms for reliable computation of sc onset surprisal, as well as for computation of its complementary predictor: sc onset entropy. we define the following predictors: main clause (mc) length: previous work suggested that the conjunction is likely to be spelled out explicitly in longer sentences, in particular after a longer main clause. this predictor is computed as the number of tokens preceding the (explicit or implicit) sconj. as an example, in the sentence "do you realize [that] i've never actually seen him at the office?", mc length will be assigned 3. subordinate clause (sc) length: similar intuition suggests that the length of the subordinate clause (and, more generally, of the rest of the sentence) can be used as another predictor. in the example sentence above, sc length will be assigned 9. main verb frequency: jaeger (2010) found a negative correlation between the main clause verb frequency and the tendency to spell out the "that" sconj. we compute the frequency of main verbs in all sentences as their relative count in the entire corpus of over 480k sentences (see section 2). sc subject distance: this predictor is defined as the number of words at the sc onset up to and including the sc subject. multiple studies found a positive correlation of this factor with the tendency to spell out the sconj @xcite @xcite . we extract the sc subject using the nsubj annotation assigned by spacy's dependency parser to the subordinate clause subject. sc onset information density (id): @xcite and @xcite computed this factor using the simplest possible estimation, where the information of the sc onset is conditioned only on the main verb, and is operationalized by the notion of surprisal: -log p(sc onset | main verb). all counts (and probabilities) were calculated from the dataset at hand. harnessing the power of modern pretrained llms, we define this predictor as the probability of the sc onset, conditioned on the entire main clause, namely -log p(sc onset | mc). notably, @xcite trained the bigram model in a controlled setting where all "that" conjunctions had been omitted. without this control, results may be circular: e.g., in cases where "that" is explicitly spelled out, the computation -log p(sc onset | mc) could be self-evident because "that" is normally inserted between the mc and the sc onset (recall that sc onset denotes the opening word of the subordinate clause, "that" excluded). since training a language model from scratch on corpora with omitted scs is often impractical, we marginalize out the presence of "that", re-defining the sc onset surprisal to be: -log [ p(sc onset | mc) + p("that" | mc) × p(sc onset | mc, "that") ]. this refined definition of sc onset surprisal eliminates the need to re-train a language model on a corpus where the sc "that" had been omitted.

## experimental results and discussion

experimental setup: we use the opt-125m autoregressive pretrained transformer model @xcite , roughly matching the performance and sizes of the gpt-3 class of models, for computation of sc onset surprisal and entropy. given a sentence prefix, we first extract the next-token logits and convert them to a probability distribution over the lexicon by applying the softmax function. sc onset surprisal was computed by taking the negative natural log of the sc onset token probability given the relevant sentence prefix. sc onset entropy was computed by applying the entropy equation (see section 3) to the resulting probability distribution.
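as a minimal sketch of this setup (the model name follows the paper; the prompt strings and the single-token assumption for the onset are ours), the two predictors can be computed as follows, including the marginalization over an explicit "that" discussed above.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/opt-125m"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

def next_token_distribution(prefix):
    """Probability distribution over the lexicon for the token following prefix."""
    ids = tok(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    return torch.softmax(logits, dim=-1)

mc = "My daughter mentioned"          # main clause
onset = " he"                         # SC opening word, "that" excluded

probs = next_token_distribution(mc)
onset_id = tok(onset, add_special_tokens=False).input_ids[0]
that_id = tok(" that", add_special_tokens=False).input_ids[0]

# marginalize out an explicit "that" (the refined surprisal definition)
p_that = probs[that_id].item()
probs_after_that = next_token_distribution(mc + " that")
p_onset = probs[onset_id].item() + p_that * probs_after_that[onset_id].item()
surprisal = -math.log(p_onset)

# SC onset entropy: H = -sum_w p(w | mc) * log p(w | mc)
entropy = -(probs * torch.log(probs + 1e-12)).sum().item()
print(f"surprisal = {surprisal:.2f} nats, entropy = {entropy:.2f} nats")
```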
estimating the contextual surprisal (or entropy) per word with decoder llms operating at the subword level is hard; we therefore approximate these metrics by computing the surprisal (or entropy) over the subwords. @xcite show that this is practically equivalent to computing a lower bound on the true contextual measurements. finally, logistic regression is used as a predictive model due to its effectiveness and interpretability.

## conclusions

we study the uid hypothesis manifestation in syntactic reduction using a large, diverse and carefully compiled corpus of english sentences with an explicit or implicit "that" subordinate conjunction. harnessing the power of contemporary pretrained llms, we show that sc onset surprisal and entropy are the main factors affecting a speaker's choice to spell out the optional conjunction "that". last but not least, a large body of linguistic literature has studied the conditions under which complementizers (like the "that" subordinate conjunction) can or cannot be omitted (inter alia erteschik-shir (1997); @xcite ). we believe that future work in this field should better engage with this literature, incorporating its insights for a more linguistically informed approach to the task of syntactic reduction analysis.
| 28066
|
246
| 2023
|
HEVS-TUW at SemEval-2023 Task 8: Ensemble of Language Models and Rule-based Classifiers for Claims Identification and PICO Extraction
|
This paper describes the HEVS-TUW team submission to the SemEval-2023 Task 8: Causal Claims. We participated in two subtasks: (1) causal claims detection and (2) PIO identification. For subtask 1, we experimented with an ensemble of weakly supervised question detection and fine-tuned Transformer-based models. For subtask 2 of PIO frame extraction, we used a combination of deep representation learning and a rule-based approach. Our best model for subtask 1 ranks fourth with an F1-score of 65.77%. It shows moderate benefit from ensembling models pre-trained on independent categories. The results for subtask 2 warrant further investigation for improvement.
|
https://aclanthology.org/2023.semeval-1.246
|
## introduction

identification and verification of causal claims from unstructured text data is essential for various decision-making processes, particularly in healthcare. the semeval-2023 task 8 @xcite aims to advance the state-of-the-art in this area by focusing on two subtasks: identification of causal claims and extraction of population, intervention, and outcome (pio) entities. the first subtask involves identifying the span of text that contains one of four entities: a causal claim, a personal experience, a personal experience based on a claim, or a question. this can be done at the sentence level, but only a part of a sentence may be annotated with one of these categories. the second subtask involves extracting the pio frame related to the identified causal claim in a text snippet. the model utilizes both word-level features, including contextual information, and character-level features, capturing different aspects of the data.

## background

causal claims identification in the open domain is widely researched, but the healthcare domain has only garnered attention recently @xcite @xcite . in the healthcare domain, large amounts of medical notes, social media posts, research articles, and patient forums are generated daily. manually extracting causal claims and pio frames from such data is time-consuming and error-prone. for a decade, pio extraction was limited to sentence-level information extraction due to the unavailability of frame-annotated datasets @xcite . after the release of the ebm-pico corpus, the extraction efforts moved to span and frame extraction @xcite . nonetheless, previous studies on pio frame extraction primarily concentrated on extracting frames from well-written, peer-reviewed literature @xcite @xcite . the semeval-2023 task 8 takes on the challenge of extracting these frames from noisy social media data. the task organizers provide 597 english-language pio-labelled reddit posts. we approach pio frame extraction as binary sequence labelling and use a combination of deep learning and a rule-based approach that captures multiple feature representations from the data, as the dataset is relatively small and noisy. the semeval-2023 task 8 provides an opportunity for researchers to develop novel methods for causal claim identification and pio frame extraction from noisy social media data and to benchmark their performance against state-of-the-art methods. we hope that the shared task will lead to the development of more effective and accurate methods for identifying and extracting causal claims and pio frames from unstructured text data.

## system overview

we participated in both subtasks of semeval-2023 task 8. in this section, we describe our approach.

## experimental setup

this section describes our experimental setup for subtasks (1) and (2), both of which were evaluated using a macro f1-score measure.

## conclusion

we participated in both subtasks of semeval-2023 task 8. our submissions are mainly based on fine-tuning transformer-based models and creating an ensemble of these models. results show a positive impact of using independent binary classification models for each entity type in subtask 1.
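since the paper only sketches the ensembling at a high level, the snippet below shows one plausible way to combine independent binary classifiers, one per entity type, into a span-level decision; the entity names, probabilities and threshold are illustrative assumptions, not the authors' configuration.

```python
def combine_binary_models(binary_probs, threshold=0.5):
    """binary_probs: {entity_type: P(span belongs to that type)}, one score
    per independently fine-tuned binary classifier.
    Returns the best-scoring type above threshold, or 'none'."""
    best = max(binary_probs, key=binary_probs.get)
    return best if binary_probs[best] >= threshold else "none"

# toy scores for one candidate span
scores = {"claim": 0.81, "personal_experience": 0.12,
          "claim_based_experience": 0.05, "question": 0.30}
print(combine_binary_models(scores))   # -> 'claim'
```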
| 26524
|
20
| 2025
|
KIT’s Low-resource Speech Translation Systems for IWSLT 2025: System Enhancement with Synthetic Data and Model Regularization
|
This paper presents KIT’s submissions to the IWSLT 2025 low-resource track. We develop both cascaded systems, consisting of Automatic Speech Recognition (ASR) and Machine Translation (MT) models, and end-to-end (E2E) Speech Translation (ST) systems for three language pairs: Bemba, North Levantine Arabic, and Tunisian Arabic into English. Building upon pre-trained models, we fine-tune our systems with different strategies to utilize resources efficiently. This study further explores system enhancement with synthetic data and model regularization. Specifically, we investigate MT-augmented ST by generating translations from ASR data using MT models. For North Levantine, which lacks parallel ST training data, a system trained solely on synthetic data slightly surpasses the cascaded system trained on real data. We also explore augmentation using text-to-speech models by generating synthetic speech from MT data, demonstrating the benefits of synthetic data in improving both ASR and ST performance for Bemba. Additionally, we apply intra-distillation to enhance model performance. Our experiments show that this approach consistently improves results across ASR, MT, and ST tasks, as well as across different pre-trained models. Finally, we apply Minimum Bayes Risk decoding to combine the cascaded and end-to-end systems, achieving an improvement of approximately 1.5 BLEU points.
|
https://aclanthology.org/2025.iwslt-1.20
|
## introduction

in this paper, we present our submissions to the iwslt 2025 low-resource track. we participate in three language pairs, translating from bemba (iso: bem), north levantine arabic (iso: apc), and tunisian arabic (iso: aeb) into english. our approach follows the unconstrained track, reflecting practical scenarios by leveraging all available resources, including multilingual pre-trained models and external datasets. building upon the submissions of last year @xcite , which investigated efficient utilization of available resources using multilingual pre-trained models, this work explores two approaches to further enhance model performance without involving extra resources: synthetic data augmentation and model regularization. one of the main challenges in building speech translation (st) systems is the scarcity of end-to-end (e2e) st data. given that automatic speech recognition (asr) and machine translation (mt) resources are more accessible, we leverage them to create synthetic st data. first, we investigate the mt-augmented approach, using a trained mt model to generate target-language translations from asr datasets. additionally, inspired by prior work @xcite @xcite @xcite , we explore synthetic speech generation. specifically, we train text-to-speech (tts) models using asr data and use them to generate synthesized speech from the mt datasets. we also explore model regularization to enhance model performance. previous research shows that st systems for low-resource languages benefit from model regularization during training because of imbalanced parameter usage @xcite . however, these works are limited to mt models in the cascaded system. since model regularization is a generic approach, this work investigates its effectiveness across asr, mt, and st tasks. with experimental results across different language pairs, we summarize our findings in the conclusion. the iwslt 2025 low-resource track defines two system categories: constrained, where models are trained exclusively on datasets provided by the organizers, and unconstrained, where participants are free to use any external resources. in this work, we focus on the unconstrained condition, aiming to better reflect practical, real-world scenarios, where leveraging diverse data sources is often essential for building effective translation systems.

## conclusion

we participate in the iwslt 2025 low-resource track, focusing on three language pairs with bemba, north levantine, and tunisian as source languages, and english as the target language. our focus is on improving model performance through synthetic data augmentation and model regularization. the results demonstrate that high-quality synthetic data can significantly enhance performance. in addition, model regularization proves to be a robust and broadly effective approach across all asr, mt, and st tasks in low-resource settings. finally, our findings highlight the importance of language-specific strategies for building effective speech translation systems, as reflected in the varying outcomes observed across the three language pairs. the table below reports tts quality in terms of mel cepstral distortion (mcd) and word error rate (wer):

| language | model | speaker setting | mcd | wer |
|---|---|---|---|---|
| bemba | vits | same speaker | 5.4 | 51.0 |
| bemba | e2tts | same speaker | 5.6 | 40.9 |
| bemba | e2tts | cross speaker | 7.7 | 41.9 |
| north levantine | e2tts | same speaker | 4.2 | 113.3 |
| north levantine | e2tts | cross speaker | 9.0 | 108.3 |

additionally, since mcd is not a speaker-independent metric like wer, to reduce the influence of speaker attributes, we conducted assessments in both same-speaker (reconstruction) and cross-speaker settings.
the results are shown in the table above. for wer evaluation, we use two asr models trained without the augmented tts data; specifically, we use model a5. as for the low-resource language north levantine, the wer scores are considerably high, suggesting that the e2tts model remains underdeveloped. this likely contributes to the poor performance of st models trained with tts-augmented data, as indicated in the st results.
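the paper combines the cascaded and end-to-end systems with minimum bayes risk (mbr) decoding; the sketch below shows the generic selection rule over a pooled candidate set, with sentence-level bleu standing in as the utility function (the actual utility metric used is an assumption here).

```python
import sacrebleu

def mbr_select(candidates):
    """Pick the candidate with the highest average utility against all other
    candidates; candidates are pooled from the cascaded and E2E systems."""
    best, best_score = None, float("-inf")
    for hyp in candidates:
        others = [c for c in candidates if c is not hyp]
        score = sum(sacrebleu.sentence_bleu(hyp, [o]).score for o in others) / len(others)
        if score > best_score:
            best, best_score = hyp, score
    return best

pool = ["the children go to school", "the children goes to school",
        "children go to the school"]
print(mbr_select(pool))
```

because the selected hypothesis is the one most similar to the rest of the pool, mbr effectively rewards translations on which both system families agree, which matches the reported gain of roughly 1.5 bleu points from combining them.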
| 38516
|
41
| 2025
|
Can summarization approximate simplification? A gold standard comparison
|
This study explores the overlap between text summarization and simplification outputs. While summarization evaluation methods are streamlined, simplification evaluation lacks cohesion, prompting the question: how closely can abstractive summarization resemble gold-standard simplification? We address this by applying two BART-based BRIO summarization methods to the Newsela corpus, comparing outputs with manually annotated simplifications and achieving a top ROUGE-L score of 0.654. This provides insight into where summarization and simplification outputs converge and differ.
|
https://aclanthology.org/2025.nodalida-1.41
|
## introduction

text simplification can operate at various linguistic levels (semantic, syntactic, or lexical), using diverse strategies to achieve specific goals @xcite @xcite . in practice, automatic text simplification (ats) transforms complex text into simpler versions by splitting sentences, shortening length, and simplifying vocabulary and grammar. the best english-language ats models rely on parallel corpora like wikismall @xcite , aligning complex and simple sentences from standard and simple english wikipedias (originally 108,000 instances from 65,133 articles, currently 89,042). the most valuable resource for text simplification is the newsela corpus @xcite , which includes 9,565 news articles professionally rewritten at multiple reading levels, with 1,913 original articles and four levels of simplification. however, it lacks the volume needed to train advanced deep-learning models effectively. simplification lacks standardized procedures and a common algorithm, partly due to the absence of a "native speaker of simplified language" @xcite . the subjective nature of simplification also makes consistent methodology difficult @xcite . the evaluation metrics for simplification are similarly inconsistent. some, like bleu or @xcite , focus on intrinsic grammatical features and struggle with semantic changes, while others, such as cosine distance, emphasize semantic similarity. by contrast, summarization metrics are well-established, even when imperfectly applied @xcite . furthermore, while the two tasks present some divergences in their focus (e.g. the relevance of information ordering, the choice of domain-agnostic lexicon, and the preference for short active forms instead of long passive forms), they remain convergent in producing shorter, more pointed text. given the state of things, we believe that comparing simplification with summarization could provide insights into their convergence. this study investigates whether a state-of-the-art (sota) summarization system can approximate manual simplification by comparing annotated simplifications with automated summarization. starting with newsela's english documents, we process original articles with brio @xcite , a sota abstractive summarizer, applying document-wide and paragraph-by-paragraph summarization methods. we then evaluate each output set against the four simplification levels using rouge-l scores to measure similarity. results indicate an average performance difference of 0.444, with paragraph-by-paragraph summarization achieving the highest score (0.654) at level 1, gradually decreasing through levels 2 to 4. while paragraph-by-paragraph summarization does not equate to manual simplification, it may serve as an effective preparatory step for manual annotators. background and related research are discussed in section 2, with the experimental setup and findings detailed in sections 3 and 4. a summary of the presented work, followed by the limits of the scope and suggestions for future research, is provided in section 5.

## related work

the multifaceted nature of implementing text simplification has led to multiple works that share the goal of rewriting complex documents with simpler, more straightforward language. this is ultimately achieved by modifying the original text both lexically and syntactically, as defined in @xcite , either in an automated or manual way.
multiple works in the field have tackled different applications, from aiding people with disabilities @xcite , low-literacy adults @xcite and non-native learners @xcite to auxiliary systems that improve the effectiveness of other nlp tasks @xcite @xcite . due to the wide range of applications, a major subjectivity issue emerges when evaluating the different methods for simplification @xcite . different scoring methods that have been utilized for simplification include: bleu @xcite ; terp, translation edit rate plus, which computes the number of the three edit operations plus the inverse @xcite ; oov, out of vocabulary, which measures the rate of oov words from a chosen simple vocabulary (e.g. the basic english list) @xcite ; changed, measuring the percentage of the test examples where the system suggested some change @xcite ; potential, computing the proportion of instances in which at least one of the candidates generated is in the gold standard (paetzold and specia, 2016); and sari, the most recent, which performs a similar comparison to bleu but is considered more reliable @xcite . the general approach to text summarization is more streamlined, aiming to produce a text shorter than the input while keeping all relevant information; the output is called an abstract or summary @xcite . the most common approaches are naïve bayes @xcite , swarm algorithms @xcite , and sequence-to-sequence models @xcite . a further distinction can be made between abstractive and extractive summarization methods @xcite . where extractive methods produce text by concatenating selected parts of the original document, abstractive methods apply language generation techniques to produce a shorter document @xcite . standard scoring methods for text summarisation are precision/recall measures and various instances of rouge @xcite , some examples being rouge-n, rouge-l, and the most recent rouge-sem @xcite . the newsela corpus is a collection of 1,130 articles rewritten and simplified by professional editors, aimed at children of different grade levels @xcite . from each individual article, four different versions have been derived through a manual simplification process and labelled with a number from 1 to 4, representative of the level of simplification. label 4 represents the most simplified output, suitable for a 3rd grader; label 3 represents an output suitable for a 4th grader; labels 2 and 1 identify outputs suitable for 6th and 7th graders. the original articles are suitable for 12th graders.

## experimental setup

for the purpose of this work, the architecture chosen to perform the summarization procedure was brio, a system presented in @xcite and based both on the bart architecture @xcite and the pegasus architecture @xcite . the choice was motivated by its state-of-the-art performance in summarization tasks, its ease of availability and implementation, and the double-model-based system that it employs. the dual nature of brio is the result of fine-tuning two different architectures on two different datasets with a specific training paradigm. since the two datasets were characterized by longer texts @xcite and shorter texts @xcite , the two backbones of the architecture keep these properties. therefore, the bart-based brio was chosen as the summarizer for its performance with longer texts, as suggested by the original authors.
the original articles from the newsela corpus were then processed through the summarization model. for each article, two procedures were followed to produce different output documents: document-wide summarization and paragraph-by-paragraph summarization, as explained below. a graphic representation of the general procedure is provided in figure 1 . document-wide: the more intuitive application of text summarization, this method involved the generation of a single string containing the whole text by joining the various paragraphs and subsequently processing it with the summarizer model. once the architecture produced an output string, it was written to a separate *.txt file. paragraph-by-paragraph: this summarization approach stems from the visual structure of academic texts, which usually separate topics and changes in content by dividing the document into paragraphs. thus, the intuition was to make the architecture follow a similar pattern to preserve the content and produce a more effective summarization. this method involved splitting the original text into paragraphs and processing each paragraph separately with the summarizer model. the resulting outputs were subsequently rejoined and written as a single document to a separate *.txt file. both procedures were applied to each of the original 1,913 english articles in the newsela corpus, and the resulting two sets of summarized documents were compared to the simplified versions produced by the editors. this was done by iterating through the different levels of simplification (1, 2, 3, and 4) and calculating the precision, recall and rouge f1 score between each simplified version of the document and the summarized version of it. the resulting evaluation was stored, and the average was calculated level-wise for each metric with the scores from the whole set. then, the scoring procedure was repeated for the remaining summarized set. the chosen evaluation score was rouge-l, as it was both a part of the original brio publication @xcite and a statistic based on the longest common subsequence (lcs) @xcite , which made it well suited to measure grammatical integrity, keyword conservation and coherence in the summarized texts.

## results

the average scores for the three evaluation metrics used in comparing the human-produced simplification and the automated summarization are available in the results table. to make the gap in scores and the variability in summarization performance across the different processes more apparent, two graphic representations of the average scores are provided in figure 2 . the data corresponds to the document-wide summarization method on the left side and the paragraph-by-paragraph method on the right. when comparing the results from the two processes, the overall difference in balance between precision and recall for the document-wide summarization method is immediately noticeable. even considering the progressive improvement of the precision rate and the lowering of the recall score, the minimum gap between the two is 0.653. the first hypothesis was that this was due to the summarizer generating lengthy and repetitive summaries; however, a quick analysis of the outputs confirmed the variety in length and the production of documents shorter than their input.
therefore, the more plausible hypothesis is that while the longest common subsequences between manual simplification and automated summarization are recalled in the text (most likely the keywords), the structural lexicon and syntactical choices of the simplified version do not appear through document-wide summarization. consequently, this can lead to the poor similarity between the two document types and the convolution of information through summarization, a hypothesis corroborated by the low rouge-l score.

on the other side of figure 2, the scores provide a clearer picture of the paragraph-by-paragraph performance. with a rouge-l score of 0.566 averaged over all levels of simplification, the similarity between the simplified and summarized versions is noticeable. although they perform better when compared to lower levels of simplification than to more simplified documents, the summarized outputs obtained through paragraph-by-paragraph processing perform well enough to justify further investigation and analysis. our hypothesis for the better performance of the paragraph-by-paragraph method, when compared to the document-wide processing, lies in the nature of the process: a block-by-block iteration might be more similar to the manually performed annotation than a text-wide transformation is.

worth noting for the production of these results was the difference in time requirements between the first summarization method and the second when operating on an average machine (16 gb ram, 8 cores, 2.90 ghz cpu). the time elapsed for the paragraph-by-paragraph processing method was greatly increased, ranging between 10x and 50x per iteration and thus requiring several minutes instead of seconds. while the reason behind this issue requires more investigation, with the current implementation, performing such a method on a large-scale dataset without some optimization or access to a powerful machine is not recommended.

## conclusions, limitations and future work
in this work, the similarities between simplified and summarized text have been analysed through the automated summarization of articles from the newsela corpus, performed with two different methods and compared to four levels of professional manual simplification representative of diverse school grade levels. by examining the results obtained by a rouge-l scoring comparison between our output and the manual standard, it is shown that the proposed paragraph-by-paragraph method is superior to a document-wide approach, with the highest score being 0.654. hence, it is possible to claim that while automated summarization does not produce text similar enough to simplified documents to justify substituting one for the other, it still produces text similar enough to be used as a baseline on which to perform simplification, instead of starting from the original text. however, there are important limitations to the currently chosen metric. as rouge-l cannot measure semantic similarity between instances, all sequences that are semantically correct but lexically different would not count as "similar". since abstractive summarization could generate text that is lexically different from the simplification gold standard but still effectively simplified, further analysis with semantically relevant metrics should be conducted.
in addition, future work in this direction should include further thorough analyses with more refined metrics, such as rouge-sem or sari, along with a comparison between manual simplification, automated summarization and automated simplification algorithms. in particular, the latter could shed some light on the intrinsic similarities between simplification and summarization and help further investigate the potential interdisciplinary approaches to the text simplification field of research. further investigation into optimization procedures to make the best-performing methods available for lower-end machines should also be conducted, to allow for wider access to the tools and improved effectiveness of summarizers as a simplification helping tool.
| 40,212
|
33
| 2020
|
Mitigating Silence in Compliance Terminology during Parsing of Utterances
|
This paper reports on an approach to increase multi-token-term recall in a parsing task. We use a compliance-domain parser to extract, during the process of parsing raw text, terms that are unlisted in the terminology. The parser uses a similarity measure (Generalized Dice Coefficient) between listed terms and unlisted term candidates to (i) determine term status, (ii) serve putative terms to the parser, (iii) decrease parsing complexity by glomming multi-tokens as lexical singletons, and (iv) automatically augment the terminology after parsing of an utterance completes. We illustrate a small experiment with examples from the tax-and-regulations domain. Bootstrapping the parsing process to detect out-of-vocabulary terms at runtime increases parsing accuracy in addition to producing other benefits to a natural-language-processing pipeline, which translates arithmetic calculations written in English into computer-executable operations.
|
https://aclanthology.org/2020.fnp-1.33
|
## introduction the task of extracting multi-token terms 1 , i.e. terminological units which denote concepts and entities in a domain, is a core task of natural language processing (nlp). within the tax-and-regulations domain, some terms are compositional (nunberg et al., 1994; baldwin, 2006; krcmar et al., 2013; boguraev et al., 2015) @xcite in meaning and/or in form, such as unmarried college student or estimated tax payment; others are mixed instances of compositionality such as taxable sick leave pay or cannabis duty payable. in addition, terms can correspond either to the canonical form of the concept or to variant forms of concepts' names as in spouse or common-law partner credit versus spouse's or common-law partner's credit, or spouse amount versus spousal amount (park et al., 2002). from the perspective of parsing raw text, having multi-token terms not only simplifies the input by grouping multi-tokens as singletons but also removes syntactic complexity as the internal structure of these expressions remains opaque to parsing (korkontzelos et al., 2010; wehrli, 2014; boguraev et al., 2015; nerima et al., 2017). in addition, having multi-token terms allows parsers to output structurallysimilar parses for sentences that are constituent-wise similar even if the intra-phrasal complexity of multi-token terms vary. one major issue in term extraction, known as silence, is the failure to extract terms that appear infrequently in a domain-corpus but that domain-specialists would include in a term lexicon. an example of silence is when terms such as inventory valuation or combination money purchase do not make it into the terminology because they occur infrequently in our domain corpus @xcite . in the tax-and-regulations domain, the problem with terminology silence is particularly acute, as there are many instances of rules and arithmetic calculations concerning a specific entity which is mentioned once in the entire corpus. while the entity is not mentioned enough to be highly scored by collocation-based measures during the term extraction process, for the purposes of automatically interpreting and representing tax-and-regulations content, it is essential the entity be treated the same as other items in its class by the parser. this paper describes a simple method for extracting automatically multi-token terms that are not listed in the compliance terminology at the start of the parsing process of unlabelled utterances. the compliance terminology for taxes and regulations is the result of prior work on identifying, given the domain corpus, concepts and entities by means of co-occurrence/collocation-based surface statistical measures. in addition, linguistic-rule-based filters exclude a set of ill-formed terms from the final list. because out-of-vocabulary (oov) terms are tax-and-regulations-specific, we cannot rely on external lexical resources as these expressions are unlikely to be listed in general-purpose, financial or business lexicons. in our approach, the detection of oov terms takes place during parsing. we use generalized-dice-coefficient-based (gdc) metrics @xcite to estimate the degree of similarity between established terms and oov term candidates. when the gdc-based detection of multi-token terms is enabled during parsing of utterances, experiments show improvements of 93% in parsing accuracy for utterances with oov terms at the start of the parsing process. 
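the similarity step can be illustrated with a dice-style sketch; the paper's exact generalized dice coefficient is not reproduced here, so the token/bigram set representation below is an assumption made for illustration.

```python
# illustrative dice-style similarity between a listed term and an oov
# candidate; the exact generalized dice coefficient used by the parser
# is not reproduced here, so token unigrams/bigrams are an assumption.
def token_ngrams(term: str, n: int = 2):
    toks = term.lower().split()
    return {tuple(toks[i:i+n]) for i in range(len(toks) - n + 1)} | {(t,) for t in toks}

def dice(term_a: str, term_b: str) -> float:
    a, b = token_ngrams(term_a), token_ngrams(term_b)
    return 2 * len(a & b) / (len(a) + len(b)) if a and b else 0.0

# an oov candidate scored against listed terminology entries
terminology = ["eligible dependant", "employment income", "estimated tax payment"]
candidate = "single parent eligible dependant"
best = max(terminology, key=lambda t: dice(candidate, t))
print(best, dice(candidate, best))
```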
## motivation
the goal of the compliance-domain parser is to output, in a simplified logical form (slf) (wang et al., 2015; constant et al., 2016), a semantic representation of utterances taken from tax forms written in english foot_1 . of particular interest are utterances that express entire or partial arithmetic calculations. downstream components of our nlp pipeline interpret the slfs and automatically transform them into executable operations. in our compliance domain, the language of arithmetic calculations written in english is distributed along a continuum of syntactic complexity. the slf of utterance 1 is an addition between the amount denoted by the multi-token term exclusion of income and the amount on a specific line of a specific form. utterance 2 is about a choice: the smallest amount of two amounts (min operator). the notion of smallest is conveyed by a discontinuous dependency as a relative clause at the end of the utterance, namely, whichever is less. the first amount for the choice is a dollar constant amount; the second is an addition of the amounts denoted by the term employment income, expressed by total ... on lines 101 and 104. in utterance 3, inputting the constant amount of $75,300 to a calculation is conditional on satisfying one of the disjuncts (or operator) for the filing status of the taxpayer, expressed by the multi-token terms married filing jointly and qualifying widow(er).

when multi-token terms are missing from the terminology, the task of the parser is to determine the dependencies between the individual tokens that make up the utterances. with more available tokens to parse, the chance of an inaccurate parse increases. note that, with a large terminology (over 20,000 lexical entries), the prior acquisition of multi-token expressions that are in fact not multi-token domain concepts is a possibility foot_3 . if multi-token terms contain pieces of calculations that should be discrete, the parser will not accurately break down the calculation into its constituents, since multi-token terms are monolithic literal strings to the parser, regardless of how many spaces there are between the tokens of a term. utterances 1 and 2 differ by the tokens eligible versus qualified, where eligible is part of the term eligible dependant in utterance 1. while utterance 3 contains the single four-token term single parent qualified dependant, utterance 4 counts two separate two-token terms, namely, single parent and eligible dependant. in the case of utterances 2 and 4 with oov terms, the parser parses qualified and single parent as left modifiers of the head of the noun phrase. in slfs, the modifiers are enclosed in parentheses as arguments to the predicates, respectively dependant in utterance 2 and eligible dependant in utterance 4. intuitively, the slfs of utterances 2 and 4 should be structurally parallel to those of utterances 1 and 3. even with an increase in the size of the domain corpus, there is no guarantee that the oov terms qualified dependant or single parent eligible dependant would ever be acquired into the terminology.

## experiment
in order to measure the impact of detecting gdc-based terms at runtime on the success of the nlp pipeline that extracts and interprets arithmetic calculations in the tax-and-regulations domain, we ran the following experiment.

## additional downstream benefits
an additional advantage for this method occurs further downstream in the nlp pipeline, where elements of parsed phrases are matched to an internal data model.
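to make the slf continuum concrete, here are hypothetical slf-style forms for the three calculation utterances described above; the concrete slf syntax is not given in the text, so the predicate names (add, min, ifte, line, const_usd) are illustrative assumptions rather than the parser's actual vocabulary.

```python
# hypothetical slf-style forms for the three calculation utterances above;
# predicate names (add, min, ifte, line, const_usd) are illustrative
# assumptions, not the parser's actual slf vocabulary.
slf_utterance_1 = "add(amount(exclusion_of_income), line(form_x, line_n))"
slf_utterance_2 = "min(const_usd(c), add(line(101), line(104)))"
slf_utterance_3 = "ifte(or(married_filing_jointly, qualifying_widow_er), const_usd(75300))"
for slf in (slf_utterance_1, slf_utterance_2, slf_utterance_3):
    print(slf)
```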
when executing the calculation for ifte(qualified dependant, 85.00), we use a custom-built entity-recognition system to determine the value of qualified dependant. one of the features of this entity-recognition system is consecutive-token matching.

## conclusion
in this paper, we have described a method which increases term recall and improves parsing accuracy of utterances with oov terms at the start of the parsing process. in addition, multi-token terms detected at parsing runtime are automatically added to the existing terminology. we use a parser fitted with a term-generation preprocessor to identify similarity between oov multi-token term candidates and multi-token terms listed in the terminology. we observe improvements not only in the interpretation and representation of the utterances by our parser but also in the transformation of the slfs into executable operations in downstream components of our nlp pipeline. this method is best suited for domains where precision in a term lexicon, which has been automatically extracted, is important and where the problem with term silence can be severe @xcite . in the future, we would like to experiment with beginning the parsing process by using a terminology manually curated by domain experts. because the method is domain-agnostic, we do not believe the discovery of oov terms and augmentation of a domain-specific terminology (as long as there is a preexisting terminology at the outset of a parsing task) is constrained to our example domain. this work has shown that it is possible to overcome term silence by adding functionality to a parser with a preprocessor to discover oov terms on the fly.
| 5,048
|
251
| 2020
|
ENGINE : Energy-Based Inference Networks for Non-Autoregressive Machine Translation
|
We propose to train a non-autoregressive machine translation model to minimize the energy defined by a pretrained autoregressive model. In particular, we view our non-autoregressive translation system as an inference network (Tu and Gimpel, 2018) trained to minimize the autoregressive teacher energy. This contrasts with the popular approach of training a non-autoregressive model on a distilled corpus consisting of the beam-searched outputs of such a teacher model. Our approach, which we call ENGINE (ENerGy-based Inference NEtworks), achieves state-of-the-art non-autoregressive results on the IWSLT 2014 DE-EN and WMT 2016 RO-EN datasets, approaching the performance of autoregressive models.
|
https://aclanthology.org/2020.acl-main.251
|
## introduction
the performance of non-autoregressive neural machine translation (nat) systems, which predict tokens in the target language independently of each other conditioned on the source sentence, has been improving steadily in recent years @xcite @xcite . one common ingredient in getting non-autoregressive systems to perform well is to train them on a corpus of distilled translations @xcite . this distilled corpus consists of source sentences paired with the translations produced by a pretrained autoregressive "teacher" system. as an alternative to training non-autoregressive translation systems on distilled corpora, we instead propose to train them to minimize the energy defined by a pretrained autoregressive teacher model. that is, we view non-autoregressive machine translation systems as inference networks @xcite @xcite trained to minimize the teacher's energy. this provides the non-autoregressive model with additional information related to the energy of the teacher, rather than just the approximate minimizers of the teacher's energy appearing in a distilled corpus. in order to train inference networks to minimize an energy function, the energy must be differentiable with respect to the inference network output. we describe several approaches for relaxing the autoregressive teacher's energy to make it amenable to minimization with an inference network, and compare them empirically. we experiment with two non-autoregressive inference network architectures, one based on bidirectional rnns and the other based on the transformer model of @xcite . in experiments on the iwslt 2014 de-en and wmt 2016 ro-en datasets, we show that training to minimize the teacher's energy significantly outperforms training with distilled outputs. our approach, which we call engine (energy-based inference networks), achieves state-of-the-art results for non-autoregressive translation on these datasets, approaching the results of the autoregressive teachers. our hope is that engine will enable energy-based models to be applied more broadly for non-autoregressive generation in the future.

## related work
non-autoregressive neural machine translation began with the work of @xcite , who found benefit from using knowledge distillation @xcite , and in particular sequence-level distilled outputs @xcite . subsequent work has narrowed the gap between non-autoregressive and autoregressive translation, including multi-iteration refinements @xcite @xcite and rescoring with autoregressive models @xcite @xcite . @xcite and @xcite proposed aligned cross entropy or latent alignment models and achieved the best results of all non-autoregressive models without refinement or rescoring. we propose training inference networks with autoregressive energies and outperform the best purely non-autoregressive methods. another related approach trains an "actor" network to manipulate the hidden state of an autoregressive neural mt system @xcite @xcite in order to bias it toward outputs with better bleu scores. this work modifies the original pretrained network rather than using it to define an energy for training an inference network. energy-based models have had limited application in text generation due to the computational challenges involved in learning and inference in extremely large search spaces @xcite .
the use of inference networks to output approximate minimizers of a loss function is popular in variational inference @xcite , and, more recently, in structured prediction @xcite @xcite , including previously for neural mt @xcite .

## energy-based inference networks for non-autoregressive nmt
most neural machine translation (nmt) systems model the conditional distribution $p_\theta(y \mid x)$ of a target sequence $y = \langle y_1, y_2, ..., y_T \rangle$ given a source sequence $x = \langle x_1, x_2, ..., x_{T_s} \rangle$, where each $y_t$ comes from a vocabulary $V$, $y_T$ is $\langle \mathrm{eos} \rangle$, and $y_0$ is $\langle \mathrm{bos} \rangle$. it is common in nmt to define this conditional distribution using an "autoregressive" factorization @xcite @xcite : $p_\theta(y \mid x) = \prod_{t=1}^{T} p_\theta(y_t \mid y_{0:t-1}, x)$. this model can be viewed as an energy-based model @xcite by defining the energy function $E_\theta(x, y) = -\log p_\theta(y \mid x) = -\sum_{t=1}^{T} \log p_\theta(y_t \mid y_{0:t-1}, x)$. given trained parameters θ, test time inference seeks to find the translation for a given source sentence x with the lowest energy: $\hat{y} = \arg\min_y E_\theta(x, y)$. finding the translation that minimizes the energy involves combinatorial search. in this paper, we train inference networks to perform this search approximately. the idea of this approach is to replace the test time combinatorial search typically employed in structured prediction with the output of a network trained to produce approximately optimal predictions @xcite . more formally, we define an inference network $a_\psi$ which maps an input x to a translation y and is trained with the goal that $a_\psi(x) \approx \arg\min_y E_\theta(x, y)$. specifically, we train the inference network parameters ψ as follows (assuming θ is pretrained and fixed): $\hat{\psi} = \arg\min_\psi \sum_{x \in \mathcal{D}} E_\theta(x, a_\psi(x))$, where $\mathcal{D}$ is the set of training source sentences.

## results
effect of choices for o 1 and o 2 . compared with cmlm across refinement iterations, engine is comparable on de-en and outperforms it on ro-en. comparison to other nat models.

## conclusion
we proposed a new method to train non-autoregressive neural machine translation systems via minimizing pretrained energy functions with inference networks. in the future, we seek to expand upon energy-based translation using our method.
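a pytorch-flavored sketch of the training signal described above: the inference network emits a relaxed per-position distribution, and the frozen teacher's energy is evaluated on it so gradients can flow; the module interfaces and the softmax relaxation are illustrative assumptions, not the exact engine implementation.

```python
import torch

# sketch: train an inference network a_psi to minimize a frozen
# autoregressive teacher's energy; shapes and modules are illustrative.
def teacher_energy(teacher, x, y_soft):
    # y_soft: (batch, T, vocab) relaxed one-hot outputs of the inference net.
    # score the relaxed output under the teacher's autoregressive
    # distribution (expected negative log-likelihood = energy).
    logits = teacher(x, y_soft)                      # (batch, T, vocab)
    logp = torch.log_softmax(logits, dim=-1)
    return -(y_soft * logp).sum(dim=(1, 2)).mean()

def train_step(inference_net, teacher, optimizer, x):
    y_logits = inference_net(x)                      # non-autoregressive: all positions at once
    y_soft = torch.softmax(y_logits, dim=-1)         # differentiable relaxation of argmax
    loss = teacher_energy(teacher, x, y_soft)        # teacher parameters stay frozen
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```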
| 1,994
|
153
| 2023
|
Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters
|
Chain-of-Thought (CoT) prompting can dramatically improve the multi-step reasoning abilities of large language models (LLMs). CoT explicitly encourages the LLM to generate intermediate rationales for solving a problem, by providing a series of reasoning steps in the demonstrations. Despite its success, there is still little understanding of what makes CoT prompting effective and which aspects of the demonstrated reasoning steps contribute to its performance. In this paper, we show that CoT reasoning is possible even with invalid demonstrations - prompting with invalid reasoning steps can achieve over 80-90% of the performance obtained using CoT under various metrics, while still generating coherent lines of reasoning during inference. Further experiments show that other aspects of the rationales, such as being relevant to the query and correctly ordering the reasoning steps, are much more important for effective CoT reasoning. Overall, these findings both deepen our understanding of CoT prompting, and open up new questions regarding LLMs’ capability to learn to reason in context.
|
https://aclanthology.org/2023.acl-long.153
|
## introduction
large language models (llms) can perform new tasks during inference when prompted with a few demonstrations @xcite . chain-of-thought (cot) prompting @xcite (figure 1) can improve the ability of sufficiently large llms to do complex and multi-step reasoning. in addition to (query, answer) example-pair demonstrations, cot prompting includes a rationale (colored part in figure 1) for each example, i.e., a series of reasoning steps towards the answer, which encourages the llm to explicitly generate its intermediate reasoning process before predicting the final answer. despite its successes, there is little understanding of what makes cot prompting effective. our code and model input/output are available here.

## background & study formulation
chain-of-thought (cot) prompting. different from the standard way of prompting language models, where a set of (query, answer) pairs are given as demonstrations @xcite , cot prompting @xcite additionally includes a rationale (figure 1, colored) for each example, encouraging the model to verbalize the intermediate reasoning steps for solving the task. such a technique has been shown to improve the performance of llms with sufficient scale on complex reasoning, sometimes to a large degree, especially on arithmetic reasoning, multi-hop question answering, and symbolic reasoning.

components of a cot rationale. we identify two distinct components of a cot rationale: the bridging objects (the key entities and numbers the reasoning traverses) and the language templates (the textual scaffolding around them). two figure examples illustrate them: an arithmetic rationale ("originally, leah had 32 chocolates and her sister had 42. so in total they had 32 + 42 = 74. after eating 35, they had 74 - 35 = 39 pieces left in total. the answer is 39.") and a factual rationale for the query "who is the grandchild of dambar shah?" ("dambar shah (? - 1645) was the father of krishna shah. rudra shah was the child of krishna shah (? - 1661). so the final answer (the name of the grandchild) is: rudra shah.").

## how much does valid reasoning matter?
intuitively, one of the most important aspects of a chain-of-thought rationale would be its logically valid and sound reasoning. if we instead provide rationales with invalid reasoning steps in the demonstrated examples, we should expect the llm to fail to reason properly and gain little or even negative improvement compared with standard prompting (where no rationale is given), since we are teaching the llm to reason in the wrong way, which could be even worse than not doing so at all. to test this intuition, we design an ablation study where we construct invalid reasoning steps for the demonstrated rationales, and measure their influence on model behavior.

## discussion
the results from §4 and §5 open up new questions regarding learning to reason in context for llms, which we discuss next.

do llms learn to reason from cot demonstrations? given the surprisingly high performance obtained by ablating the validity of reasoning for the in-context rationales, it can be concluded that what the llm learns from the demonstrations about how to reason properly is limited; rather, the llm has already gained a lot of such complex reasoning ability from pretraining (at least for the tasks we experiment on), and the provided reasoning steps serve more in the role of an output format/space that regularizes the llm to generate rationales that look step-by-step while being coherent and relevant to the query. moreover, results obtained from recent stronger models, including text-davinci-003 and flan-palm (see appendix a.3), suggest that llms suffer even less from the ablations when they have more prior knowledge about the task.
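a small sketch of how such an ablated prompt might be assembled; the demonstration question follows the well-known chocolates example, while the invalid rationale is a hand-written illustration rather than the paper's actual ablated text.

```python
# sketch: build a cot prompt whose demonstration uses an invalid rationale;
# the invalid rationale below is illustrative, not the paper's exact text.
valid_rationale = (
    "originally, leah had 32 chocolates and her sister had 42. "
    "so in total they had 32 + 42 = 74. after eating 35, they had "
    "74 - 35 = 39 pieces left in total. the answer is 39."
)
invalid_rationale = (  # steps do not logically support the (unchanged) answer
    "originally, leah had 32 chocolates and her sister had 42. "
    "so her sister had 42 - 32 = 10 more. after eating 35, they had "
    "10 + 35 = 45 pieces left in total. the answer is 39."
)

def build_prompt(demo_query, rationale, test_query):
    # demonstrations keep the (query, rationale-with-answer) format of cot
    return f"q: {demo_query}\na: {rationale}\n\nq: {test_query}\na:"

demo_q = ("leah had 32 chocolates and her sister had 42. if they ate 35, "
          "how many pieces do they have left in total?")
test_q = "a bus had 15 passengers; 6 got off and 4 got on. how many are there now?"
print(build_prompt(demo_q, invalid_rationale, test_q))
```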
in particular, for flan-palm, which is directly trained on both arithmetic reasoning and factual qa in cot fashion and hence has immense knowledge of these tasks @xcite , it can be seen that none of the ablations has a significant impact on its performance. on the positive side, this indicates that llms can effectively utilize their prior knowledge to solve new problems. however, from another perspective, if we view the invalid reasoning setting as a task where the goal is to generate invalid reasoning steps for the query, then the llm has basically failed to capture the task, as it still tries to predict valid reasoning steps. this leads to the concern that llms may over-rely on their prior knowledge and ignore important information in the context that is presumably rare in the pretraining distribution, including information that is crucial for specifying the task semantics @xcite .

can llms learn to reason in-context? we note that what we find does not in any way diminish the potential of learning to reason in context for llms; recent work has also shown evidence that learning in context is possible and could be powerful @xcite . rather, our findings show that the existing successes of cot are not sufficient for establishing that llms are good few-shot learners of reasoning; instead, the pretraining corpora have already forged them to be good reasoners on the tasks being evaluated, and the main role that the demonstrations play is to elicit such reasoning skills.

reflections on benchmarking few-shot reasoning. an important topic for benchmarking in the era of large pre-trained language models is to quantify the level of prior knowledge the llm has gained about the end task being evaluated, which is crucial for assessing how well the model can truly extrapolate from pretraining and acquire new skills (chollet, 2019). one direct way is to look into the pretraining corpora when they are accessible; e.g., @xcite investigates the correlation between model performance and the frequency of terms from the test instances in the pretraining data. however, the pretraining corpora are not always accessible, and low-level statistics are usually not adequate when the topics of interest are abstract and high-level skills such as reasoning. along this direction, our work could be regarded as a way to approximately quantify the prior knowledge that the llm possesses on multi-step reasoning. our findings indicate that evaluations on alternative benchmarks where llms have less prior knowledge are needed to more faithfully assess llms' abilities to learn to reason from few-shot demonstrations.

## related work
there have been several subsequent works on chain-of-thought prompting since its introduction. @xcite proposes to sample a diverse set of reasoning paths instead of performing greedy decoding, and to marginalize over the sampled paths to select the most consistent answer. zhang et al. (2023) proposes a method for automatically constructing the in-context exemplars for cot. chen et al. (2022) explores program-based cot, which can better disentangle computation from reasoning. in this paper, we are primarily focused on understanding the effectiveness of the original cot formulation. a few recent works focus on understanding/analyzing cot prompting. @xcite investigates the importance of different components of the demonstrated cot rationales by changing them to be counterfactual.
they only experiment with limited ways of changing the rationales to be wrong, including using incorrect calculations (e.g., "5 + 4 = 7") or incorrect entities. for most of their settings, even though the rationales are made counterfactual, they are still correct, since the query is changed accordingly.

in general in-context learning (icl), @xcite shows that for a wide range of tasks in natural language understanding with a categorical label space (classification and multi-choice), ground truth input-label mappings matter very little for end-task performance, and other aspects, such as the label space, the overall format and the distribution of text, are the key. building on this work, yoo et al. (2022) finds that the correct input-label correspondence can have varying impacts based on the task and experimental configurations, and @xcite finds that models with larger scale can override semantic priors and learn input-label mappings in context. @xcite finds that for instruction models, the performance on natural language inference tasks shows small degradations under irrelevant or misleading instructions. xie et al. (2022) provides a theoretical analysis of icl by formulating it as bayesian inference. our work could be viewed as an attempt to empirically understand icl in sequence generation tasks requiring multi-step reasoning.

## conclusion
in this paper, we aim to better understand chain-of-thought prompting through a series of ablation experiments that unveil the impact of different aspects of a cot rationale. we find that 1) the validity of the reasoning in the prompting examples matters only a small amount to the performance; 2) relevance to the input query and following the correct order of the reasoning steps are the keys to the effectiveness of cot prompting. overall, our findings deepen the understanding of cot prompting, and open up new questions/reflections regarding llms' capability to learn to reason in context.
| 19,470
|
8
| 2021
|
E xcavator C ovid: Extracting Events and Relations from Text Corpora for Temporal and Causal Analysis for COVID -19
|
Timely responses from policy makers to mitigate the impact of the COVID-19 pandemic rely on a comprehensive grasp of events, their causes, and their impacts. These events are reported at such a speed and scale as to be overwhelming. In this paper, we present ExcavatorCovid, a machine reading system that ingests open-source text documents (e.g., news and scientific publications), extracts COVID-19 related events and relations between them, and builds a Temporal and Causal Analysis Graph (TCAG). Excavator will help government agencies alleviate the information overload, understand likely downstream effects of political and economic decisions and events related to the pandemic, and respond in a timely manner to mitigate the impact of COVID-19. We expect the utility of Excavator to outlive the COVID-19 pandemic: analysts and decision makers will be empowered by Excavator to better understand and solve complex problems in the future. A demonstration video is available at https://vimeo.com/528619007 .
|
https://aclanthology.org/2021.emnlp-demo.8
|
## introduction
timely responses from policy makers to mitigate the impact of the covid-19 pandemic rely on a comprehensive grasp of events, their causes, and their impacts. since the beginning of the covid-19 pandemic, an enormous number of articles are being published every day that report many events foot_0 and studies related to covid. it is very difficult, if not impossible, to keep track of these developing events or to get a comprehensive overview of the temporal and causal dynamics underlying them. to aid policy makers in overcoming the information overload, we developed excavatorcovid (or excavator for short), a system that ingests open-source text sources (e.g., news articles and scientific publications), extracts covid-19 related events and relations between them, and builds a temporal and causal analysis graph (tcag). excavator combines several nlp techniques, described in the following sections, and produces a tcag that is in a machine-readable json format and is also human-understandable (visualized via a web-based interactive user interface), to support varied analytical and decision-making needs. we hope that excavator will aid government agencies in efforts to understand likely downstream effects of political and economic decisions and events related to the pandemic, and respond in a timely manner to mitigate the impact of covid-19. the benefit of excavator is realized through a comprehensive visualization of events and how they affect each other. we expect the utility of excavator to outlive the covid-19 pandemic: analysts and decision makers will be empowered by excavator to better understand and solve complex problems in the future. we first present our covid-19 event taxonomy, and then we present details about event extraction, causal and temporal relation extraction, measuring event popularity using news text as "quantitative data", and the approach for constructing a tcag. we then describe the system demonstration, present a quantitative analysis of the extractions, and conclude with recommended use cases.

the event taxonomy includes 76 event types. each type comes with a name and a short description. figure 1 illustrates several branches of the event taxonomy (the complete taxonomy is available at https://github.com/bbn-e/learnit/blob/master/inputs/domains/cord_19/ontology/covid_event_ontology.yaml). the events come from a wide range of domains. we also manually added hyponymy relations via is_a links (e.g., covid-19 is_a virus) between pairs of event types.

## extracting events
we developed a neural network model for extracting events defined in the covid-19 event taxonomy (the event classification stage) and extracting the location and time arguments (the event argument extraction stage), if they are mentioned in text, for each event mention. the structured representation (events with location and/or time) enables analyses of events targeting a specific time or location. both stages use a bert-based sequence tagging model. figure 2(a) shows the model architecture. given a sequence of tokens as input, the model extracts a sequence of tags, one per token. we use the commonly used begin-inside-outside (bio) tags for both event types and event argument role types, for the event classification and argument attachment tasks respectively.

event classification: a sequence tagging model is trained to predict bio tags of event types such that it identifies the event trigger span as well as the event type.
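as a concrete illustration of the bio scheme, the sketch below decodes a tag sequence into typed trigger spans; the example sentence and the event label are invented for illustration.

```python
# decode bio tags into (start, end, type, text) event-trigger spans;
# the example sentence and labels are illustrative only.
def decode_bio(tokens, tags):
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):          # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and tag[2:] != etype):
            if start is not None:
                spans.append((start, i, etype, " ".join(tokens[start:i])))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return spans

tokens = ["the", "city", "announced", "a", "strict", "lockdown", "yesterday"]
tags   = ["O",   "O",    "O",         "O", "O",      "B-Lockdown", "O"]
print(decode_bio(tokens, tags))   # [(5, 6, 'Lockdown', 'lockdown')]
```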
event argument extraction: a second sequence tagging model is trained to predict bio tags of argument role types, such that it identifies token spans of event arguments as well as their argument role types, with respect to a trigger that has already been identified in the event classification stage and marked in the input sentence with special opening and closing tokens around the trigger span. figure 2(c) shows an example. we run these two models in a pipeline: the event classification model is applied first to find event triggers and classify their types, then the event argument extraction model is applied to find location and time arguments for each event mention.

training data curation. we use learnit rapid customization for event extraction @xcite to curate a dataset for training the event classification model. our developer spent about 13 minutes per event type to find, expand, and filter potential event triggers in a held-out 10% of the aylien coronavirus news corpus. statistics of the curated data set are reported in the corresponding table. to train the argument extraction model, we use the related event-argument annotation from the ace 2005 dataset @xcite . we focus on location and time arguments foot_2 and ignore other roles. at decoding time, after extracting the argument mentions for events, we apply the awake @xcite entity linking system to resolve each location argument to a canonical geolocation, and use serif @xcite to resolve each time argument to a canonical time and then convert it to the month level. this allows us to perform analyses of events targeting a specific geolocation or month of interest.

## extracting temporal and causal relations
we develop two approaches for extracting temporal and causal relations: a pattern-based approach and a neural network model. we take the union of the relations extracted by the two approaches. the relation types are:

| type | subtype | definition |
|---|---|---|
| causes | cause | y happens because of x. |
| causes | catalyst | if x, intensity of y increases. |
| causes | precondition | x must have occurred for y to happen. |
| mitigates | mitigation | if x, intensity of y decreases. |
| mitigates | preventative | if x happens, y can't happen. |
| before | before/after | x happens before/after y. |

## measuring event popularity through time
the tcag only provides a qualitative analysis of the temporal and causal relations between the covid-related events. it will be more informative if we can measure the popularity of events through time to enable trend analysis (e.g., does lockdown go up or down between january and may, 2020?) and correlation analysis (e.g., will a stricter lockdown improve or deteriorate the economy?). in order to support these analyses, we produce a timeseries of a popularity score for each event type over time (a.k.a., an event timeline). extending our prior work @xcite , we define the popularity score for event type $e$ at time $t$ as:

$\mathrm{Popularity}(e, t) = \frac{1}{\Delta t + 1} \sum_{t' \in [t - \frac{\Delta t}{2},\; t + \frac{\Delta t}{2}]} \frac{n_{e,t'}}{c\, m_{t'}}$

in which $n_{e,t}$ is the frequency of event $e$ at month $t$. we calculate the moving average centered at each $t$ with a sliding window of $\Delta t$. $m_t$ is the total number of articles published in month $t$, and $c$ is a scaling constant. the raw event frequency counts can be inflated due to the increasing level of media activity; therefore, we divide the raw counts by $c m_t$ to normalize them so that they are comparable across different months.

the first corpus is the aylien coronavirus news corpus introduced above. the second corpus is the covid-19 open research dataset @xcite . it contains coronavirus-related research from pubmed's pmc corpus, a corpus maintained by the who, and biorxiv and medrxiv pre-prints. as of 11/08/2020, it contains over 300,000 scholarly articles.
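the popularity score can be sketched in a few lines; the window handling at the boundaries of the timeline and the value of the scaling constant c are assumptions.

```python
# sketch of the popularity timeseries: normalize monthly event counts by
# monthly article counts, then take a centered moving average.
def popularity(event_counts, article_counts, window=3, c=1.0):
    # event_counts, article_counts: lists indexed by month
    norm = [n / (c * m) for n, m in zip(event_counts, article_counts)]
    half = window // 2
    scores = []
    for t in range(len(norm)):
        lo, hi = max(0, t - half), min(len(norm), t + half + 1)
        scores.append(sum(norm[lo:hi]) / (hi - lo))   # centered moving average
    return scores

# e.g., monthly "lockdown" mentions vs. total articles, jan-may 2020
print(popularity([120, 900, 4200, 5100, 4800], [10000, 14000, 30000, 32000, 31000]))
```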
we combine these two corpora because news and research articles are complementary: news articles are rich in real-world events and are up to date, while analytical articles contain more causal relationships. therefore, combining them is likely to lead to a more comprehensive analysis and new insights.

overall statistics of extractions. excavator extracted 6.2 million event mentions of 59 types.

## recommended use cases
we describe 3 recommended use cases below. more details are in our demonstration video.

use case 1: causal and temporal analysis. we can get a panoramic view of the underlying causal and temporal dynamics between events related to covid from the overall tcag. we can start by analyzing the causal or temporal relations centered at an event of interest. for example, figure 4 shows a diverse range of effects and consequences of lockdown, such as economiccrisis (economic), shortage (supply-chain), fearorpanic (mental), etc. interestingly, the graph also reveals surprises such as {lockdown, causes, death}: the ui shows supporting evidence such as "lockdown exacerbates deaths and chronic health problems associated with poverty, ...". furthermore, the tcag shows that lockdown mitigates diseasespread but also has a negative impact on the economy, which informs decision makers that they will need to understand the economic trade-offs when implementing a lockdown policy. we can also analyze longer-distance causal pathways consisting of two or more causal/temporal edges. for example, our demo video shows that covid-19 causes or precedes (before) lockdown, and that lockdown causes or precedes economic-crisis. this helps us understand details about how covid causes economiccrisis.

use case 2: trend and correlation analysis. we can inspect the event timeline for a node or an edge to perform a trend analysis and a correlation analysis, respectively. figure 5 shows screenshots of the event popularity timeseries between january and may 2020 for lockdown, economiccrisis and covid-19. first, the user can click on a single event to perform a trend analysis: the popularity of lockdown goes up continuously, indicating an upward trend in implementing lockdown policies in more geographic regions. the user can also click on an edge to perform a correlation analysis for a pair of events: when the user clicks on the edge {lockdown, causes, economiccrisis}, the ui shows a strong correlation between the two upward curves. for another edge, "lockdown mitigates covid-19", the ui shows a negative correlation near the end: as lockdown rises, covid-19 slightly falls.

use case 3: analyses targeted at geolocations. the event timeline visualization also allows the user to see the timeline for geolocations such as each u.s. state individually, instead of the aggregate for the entire u.s. figure 6 is a screenshot showing the 10 timelines for lockdown for the top-10 most frequently mentioned u.s. states. the screenshot shows that the curves for california and new york go much higher than those of other states. this roughly matches the stricter lockdown policies implemented in the two states during this time period, compared with other states. such targeted analysis is made possible because our events have location and time arguments. we can also make the tcag show only events and relations for a specific state, if a user selects a state of interest in the ui.

## related work
extracting events.
event extraction has been studied using feature-based approaches (huang and riloff, 2012; ji and grishman, 2008) or neural networks (chen et al., 2015; nguyen et al., 2016a; wadden et al., 2019; liu et al., 2020).

## conclusion and future work
we present excavator, a machine reading system that automatically constructs a temporal and causal analysis graph for covid-19 by reading open-source text documents such as news and scientific publications. our next steps are to integrate modal dependency parsing @xcite for event factuality assessment, and cross-lingual transfer learning @xcite to make excavator applicable to more languages.
| 9,473
|
12
| 2023
|
PROMT Systems for WMT 23 Shared General Translation Task
|
This paper describes the PROMT submissions for the WMT23 Shared General Translation Task. This year we participated in two directions of the Shared Translation Task: English to Russian and Russian to English. Our models are trained with the MarianNMT toolkit using the transformer-big configuration. We use BPE for text encoding, both models are unconstrained. We achieve competitive results according to automatic metrics in both directions.
|
https://aclanthology.org/2023.wmt-1.12
|
## introduction
the wmt shared general translation task is an annual event where different companies and researchers build and test their systems on the test sets provided by the organizers. this year we decided to participate in two directions: english to russian and russian to english. we use the standard transformer-big configuration for our models. the english-russian model is basically the same as last year, whereas the russian-english model is a new one built for wmt23. the rest of the paper is organized as follows: in section 2 we describe in detail the systems we submitted to the shared task; in section 3 we present and discuss the results; we conclude the paper in section 4 with a discussion of possible future work.

## systems overview
all of our wmt23 submissions are @xcite transformer-big @xcite systems. we use the opennmt toolkit @xcite version of byte pair encoding (bpe) @xcite for subword segmentation. our bpe models are case-insensitive; we use special tokens on the source and target sides to process case (see @xcite for details). all of the systems are unconstrained, i.e. we use all data provided by the wmt organizers, all publicly available data and some private data crawled from different web sources. we also augment our training data with two types of synthetic data: 1) back-translations @xcite and 2) synthetic data with placeholders as described in @xcite . the back-translations are obtained using the previous versions of our nmt models, which are baseline transformers trained with less data (and without some up-to-date data like the news 2021 corpora from statmt.org). we also tag all our synthetic data with special tokens at the beginning of the source sentences as described in @xcite . all models are trained with guided alignment, which is used at translation time to handle named entities and document formatting. we obtain alignments using the fast-align @xcite tool. the data statistics for the russian-english language pair are presented in the corresponding table; details regarding the different directions can be found in the next section.

## results and discussion
the results are presented in the results table. as we can see, we outperform our baselines (i.e. previous versions of the models). the gains we observe, however, are not that large. other test sets, such as the tico-19 evaluation set @xcite , show more substantial improvements: the bleu score on that test set has grown from 33.8 to 35 points. the poor performance on the generaltest2023 set can be due to the problems our submitted models have with the translation of colloquial content. this can be explained by our data preparation scheme: as we have already mentioned above, we want our models to translate formal text better and thus 'sacrifice' colloquial data. examples of such mistranslations are presented in the examples table. we made a thorough investigation into the generaltest2023 sets and found out that there are four major topics for the russian-english test set, including movie reviews and news.

## conclusions and future work
in this paper we presented our submissions for the wmt23 shared general translation task. we show good results in both directions in which we participate, clearly outperforming our baselines. a detailed analysis of the translations shows us that we lose quality in the translation of colloquial speech. we have already started to work in this direction.
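the tagging of synthetic data can be illustrated as follows; the token names "<bt>" and "<ph>" are placeholders of our choosing, not necessarily the tokens used in the submitted systems.

```python
# sketch: mark synthetic source sentences with a leading special token so
# the model can distinguish them from genuine parallel data; token names
# ("<bt>", "<ph>") are illustrative placeholders.
def tag_synthetic(pairs, kind):
    tag = {"backtranslation": "<bt>", "placeholder": "<ph>"}[kind]
    return [(f"{tag} {src}", tgt) for src, tgt in pairs]

bt_pairs = [("это синтетический источник .", "this is a synthetic source .")]
print(tag_synthetic(bt_pairs, "backtranslation"))
# [('<bt> это синтетический источник .', 'this is a synthetic source .')]
```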
we have synthesized data where, e.g., 'please' is substituted with 'plz', and so on. we plan to train our model on this synthetic data so that it can deal with such colloquial examples.
| 27,030
|
415
| 2022
|
CONFIT : Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning
|
Factual inconsistencies in generated summaries severely limit the practical applications of abstractive dialogue summarization. Although significant progress has been achieved by using pre-trained neural language models, substantial amounts of hallucinated content are found during the human evaluation. In this work, we first devised a typology of factual errors to better understand the types of hallucinations generated by current models and conducted human evaluation on popular dialog summarization dataset. We further propose a training strategy that improves the factual consistency and overall quality of summaries via a novel contrastive fine-tuning, called CONFIT. To tackle top factual errors from our annotation, we introduce additional contrastive loss with carefully designed hard negative samples and self-supervised dialogue-specific loss to capture the key information between speakers. We show that our model significantly reduces all kinds of factual errors on both SAMSum dialogue summarization and AMI meeting summarization. On both datasets, we achieve significant improvements over state-of-the-art baselines using both automatic metrics, ROUGE and BARTScore, and human evaluation.
|
https://aclanthology.org/2022.naacl-main.415
|
## introduction
text summarization is used to generate a concise and accurate summary of a long text while focusing on the sections that convey the most useful information @xcite . in recent years, the resurgence of dialogue summarization has attracted significant research attention @xcite @xcite @xcite @xcite @xcite @xcite . the goal of dialogue summarization is to condense the conversational input into a brief version that covers the salient information @xcite . significant progress has been made recently on abstractive dialogue summarization with various pre-trained models. however, such pre-trained models are susceptible to generating hallucinated content that is not supported by the source documents @xcite @xcite . to tackle the issue of factual inconsistency in dialogue summarization, recent works correctly encode the names of speakers @xcite , explicitly incorporate coreference information @xcite , and order the personal named entities @xcite . but it is still challenging to improve the quality of summaries generated by different models and decrease the hallucination at the same time. to better understand the types of hallucinations generated by the pre-trained models, we devised a linguistically motivated taxonomy of factual errors for dialogue summarization, instead of simply classifying a summary as faithful or not. based on our typology, we defined an annotation protocol for factuality evaluation of dialogue summarization. we then conducted a human evaluation of several pre-trained abstractive summarizers, including bart @xcite , pegasus @xcite , and t5 @xcite , aiming at identifying the proportion of different types of factual errors and studying the weaknesses of the pre-trained models. our typology and annotation help us gain deeper insights into the causes of factual inconsistency. unlike news summarization @xcite , we found that the challenges posed by dialogue summarization are more related to dialogue flow modeling, informal interactions between speakers, and complex coreference resolution. figure 1 shows a dialogue-summary pair with three specific errors: (a) a coreference error, (b) a modality and tense error, and (c) missing information. in the figure's dialogue, hannah asks amanda for betty's number; amanda can't find it and suggests asking larry, who called betty the last time they were at the park together; hannah doesn't know larry well and reluctantly texts him. the reference summary reads "hannah needs betty's number but amanda doesn't have it. amanda needs to contact larry.", while the erroneous model summary reads "amanda can't find betty's number. larry called her last time they were at the park. amanda will text larry."

in order to tackle the top factual errors produced by existing models, we propose to replace the most commonly used fine-tuning with a linguistically-informed contrastive fine-tuning approach.

## confit model
standard fine-tuning parameterizes the probability p_α of the generator on a task-specific labeled dataset by minimizing a cross-entropy loss. however, the cross-entropy loss has several shortcomings that can lead to factual inconsistency in dialogue summarization due to its sub-optimal generalization and instability. we propose a more efficient fine-tuning method, confit, for factual consistency, driven by the intuition that good generalization requires capturing the similarity within one class and contrasting it against other classes. in confit, we introduce two additional losses: a contrastive loss and a self-supervised loss. we use two weights, which are coefficients, to adjust the ratio of $\mathcal{L}_{con}$ and $\mathcal{L}_{self}$ in the total loss of confit. the final training objective $J(\theta)$ of the proposed framework is as follows: $J(\theta) = \mathcal{L}_{ce} + \alpha \mathcal{L}_{con} + \beta \mathcal{L}_{self}$, where $\alpha$ and $\beta$ are the weighting coefficients. our linguistically-informed typology and annotation help us gain deeper insights into the causes of different factual errors. to help our models generate more faithful summaries, the proposed confit learns to concentrate on the essential elements of dialogue and capture the dynamic role information, as illustrated in figure 3.

## results
we observe that for all three pretrained models, confit significantly beats the baselines on rouge-1, rouge-l, and human faithfulness score for both datasets. for bartscore, we note that, while performance increases on samsum for all models, it decreases on ami. however, given the fact that human evaluators rated the outputs of all three confit models as more faithful than those of their corresponding baselines on both datasets, the decrease in bartscore on ami can likely be attributed to the imperfection of automated metrics at capturing faithfulness in text.

## conclusion
we presented confit, a novel method to improve the faithfulness of abstractive dialogue summarization models via contrastive and self-supervised fine-tuning. by adapting the objective function during fine-tuning to incorporate a contrastive loss that learns to distinguish positives from examples with factual errors, and a self-supervised dialogue-specific loss that captures the important dialogue information flow between multiple interlocutors, confit can significantly improve the faithfulness of the abstractive summaries generated by transformer-based sequence-to-sequence language models, and reduce multiple categories of factuality errors in the abstractive summaries by large margins. in our experiments on samsum and ami, we demonstrated that confit achieves better empirical performance compared to the baseline models fine-tuned with the traditional cross-entropy loss, based on both automatic evaluation metrics and human evaluation. our work provides new insights into improving the faithfulness of abstractive summarization systems using carefully designed novel objective functions for fine-tuning that capture important structures and features of the text to summarize.
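a compact sketch of the combined objective; the infonce-style contrastive term and the way summaries are embedded are simplifying assumptions rather than confit's exact formulation.

```python
import torch
import torch.nn.functional as F

# sketch: cross-entropy + contrastive + self-supervised terms combined
# with two coefficients; the contrastive form below is an assumption.
def confit_loss(ce_loss, anchor, positive, negatives, self_loss,
                alpha=1.0, beta=1.0, tau=0.1):
    # anchor: (d,) summary representation from the decoder
    # positive: (d,) reference-summary embedding
    # negatives: (k, d) embeddings of perturbed summaries with factual errors
    pos_sim = F.cosine_similarity(anchor, positive, dim=-1) / tau
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=-1) / tau
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])          # positive is class 0
    contrastive = F.cross_entropy(logits.unsqueeze(0),
                                  torch.zeros(1, dtype=torch.long))
    return ce_loss + alpha * contrastive + beta * self_loss
```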
| 18,003
|
18
| 2007
|
Disambiguating automatic semantic annotation based on a thesaurus structure
|
The use/use for relationship in a thesaurus is usually more complex than the (para-)synonymy recommended in the ISO-2788 standard describing the content of these controlled vocabularies. The fact that a non preferred term can refer to multiple preferred terms (only the latter are relevant in controlled indexing) makes this relationship difficult to use in automatic annotation applications: it generates ambiguity cases. In this paper, we present the CARROT algorithm, meant to rank the output of our Information Extraction pipeline, and how this algorithm can be used to select the relevant preferred term out of different possibilities. This selection is meant to provide suggestions of keywords to human annotators, in order to ease and speed up their daily process, and is based on the structure of their thesaurus. We achieve a 95% success rate, and discuss these results along with perspectives for this experiment.
|
https://aclanthology.org/2007.jeptalnrecital-long.18
|
## introduction thesauri are controlled vocabularies, often used for indexing and retrieving documents from collections. the standard thesauri contain two types of elements, preferred and non preferred terms, related with a link called use/use for. this link is considered as (para-)synonymy in the iso-2788 standard @xcite and can thus be useful for (semi-) automatic indexing applications : it enables a program to index a document with a preferred term (which is the type of thesaurus based controlled annotation we are interested in) either if the document contains an occurrence of the preferred term or if it contains occurrences of the corresponding non preferred term. in reality, this use/use for relationship is often more complex, and can generate ambiguity problems when used "as is" in an automatic application. we present in this paper the solution that we have developed in our project for selecting the relevant preferred term, given an occurrence of an ambiguous non preferred term in a text. this selection algorithm is based on the thesaurus's structure. the thesaurus we used in this experiment is the gtaa, which is employed for indexing and retrieving tv programs at the netherlands institute for sound and vision, the dutch national tv archives. our project, choice foot_0 , is collaborating with this institute and focuses on easing and speeding up the work of cataloguers by providing them with a ranked set of keywords referring to their thesaurus' entries as indexing suggestions. we will present our project's goal and the specificity of this use case in the following section (section 2), followed by a description of thesauri in general and the gtaa itself (section 3). in this section, we will show the different semantics of the use/use for relationships and the problem of having multiple links between preferred and non preferred terms. we then present our annotation pipeline (section 3.4), including the algorithm that we elaborated to rank the extracted keywords, and that we propose here for selecting the relevant preferred term out of multiple possibilities (section 3.5). section 5 shows our experiment to evaluate this algorithm in this word sense disambiguation context. we achieved a 95 % of success, but are still facing minor and more important problems. we discuss them and conclude with perspectives for this experiment in section 6. ## the choice project charting the information landscape employing context information, the choice project deals with the suggestion of metadata from textual resources to annotate video documents. in the context of the dutch tv archives, the cataloguers check a set of textual documents, on top of watching the program itself, to make their descriptions. one of the goals of our project is to build on existing information extraction platforms, extend and tune them to our specific needs in order to cope with the particularities of this specific use case and provide the cataloguers with a relevant set of keywords as indexing suggestions. our information extraction is based on the content of the thesaurus that they are currently using at sound and vision, enriched and transformed by us. we present this thesaurus in the following section, and the specificity of our task in the section describing our ranking algorithm. ## related work the task we are interested in in this paper can be related to word sense disambiguation. 
in @xcite , the authors describe the typical two-step process for this task: many works mention the use of a dictionary as external knowledge for that purpose ( @xcite , for example), whereas statistically-based or machine-learning methods advocate the corpus-based contextual approach (see for example @xcite ). of course, some mixed approaches exist, such as @xcite . in our use case, the set of senses to take into account is the set of possible preferred terms for each ambiguous non-preferred term. the method we experiment with here uses external knowledge, but instead of relying on the lexical content of dictionary definitions, or trying to map the lexical environment of the external knowledge to the corpus content, we use the thesaurus independently and take into account only the number of occurrences of each term as contextual information. the selection of the relevant sense, i.e. of the relevant preferred term, is based only on relationships crafted by hand by cataloguing experts when building the thesaurus. it is therefore still different from @xcite , who also based his word sense disambiguation algorithm on a thesaurus. ## conclusion and perspectives we investigated whether our method and the carrot algorithm could be used for disambiguation in an indexing setting. in cases of ambiguity, it gives a suggestion for which preferred term to choose in only two cases out of three, but when it does give a suggestion, it is correct in approximately 19 out of 20 cases. the two bad suggestions came from the same thesaurus concept, and were due to its lack of structure. using another external resource, such as princeton university's wordnet, could help us cope with that problem. however, the interpretation of our success rate and of the percentage of undecidable cases must be the subject of further study: it is up to the cataloguers to determine whether these numbers are fair foot_5 . this is the subject of another study that we will also conduct in the course of our project.
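a minimal sketch of the structure-based selection idea described above (not the authors' carrot implementation): each candidate preferred term is scored by counting occurrences of its thesaurus neighbours in the document, and a suggestion is made only when one candidate clearly dominates; the margin and the interfaces are illustrative assumptions.

```python
from collections import Counter

def select_preferred(candidates, related_terms, doc_tokens, margin=2):
    """Disambiguate a non-preferred term using thesaurus structure.

    candidates    -- preferred terms the ambiguous term may refer to
    related_terms -- dict: preferred term -> terms linked to it in the thesaurus
    doc_tokens    -- tokenized document text
    margin        -- assumed minimum lead over the runner-up before deciding
    """
    counts = Counter(doc_tokens)
    # Score each candidate by occurrences of its thesaurus neighbours.
    scores = {c: sum(counts[t] for t in related_terms.get(c, []))
              for c in candidates}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    second_score = ranked[1][1] if len(ranked) > 1 else 0
    if ranked[0][1] - second_score >= margin:
        return ranked[0][0]  # confident suggestion
    return None              # undecidable case: make no suggestion
```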
| 212
|
27
| 2,021
|
LOA: Logical Optimal Actions for Text-based Interaction Games
|
We present Logical Optimal Actions (LOA), an action decision architecture for reinforcement learning applications built on a neuro-symbolic framework that combines neural networks with a symbolic knowledge acquisition approach for natural language interaction games. The demonstration of LOA consists of a web-based interactive platform for text-based games and a visualization of the acquired knowledge that improves the interpretability of the trained rules. The demonstration also provides a module for comparison with other neuro-symbolic approaches as well as with non-symbolic state-of-the-art agent models on the same text-based games. LOA also comes with an open-source Python implementation of the reinforcement learning environment to facilitate experiments for studying neuro-symbolic agents. Demo site: https://ibm.biz/acl21-loa , Code: https://github.com/ibm/loa
|
https://aclanthology.org/2021.acl-demo.27
|
## introduction neuro-symbolic (ns) hybrid approaches have been proposed to overcome the weaknesses of deep reinforcement learning @xcite @xcite , including generalization from less training data, utilization of external knowledge, and direct explainability of what is learned. studying reinforcement learning (rl) in non-symbolic environments, such as those with natural language and visual observations, is an important step towards real-world application of these approaches beyond classic, symbolic environments. under the controls necessary for studying rl, text-based games provide complex, interactive, and varied simulated environments where the game state is observed only through text. a recent neuro-symbolic framework called logical neural networks (lnn) @xcite simultaneously provides key properties of both neural networks (learning) and symbolic logic (reasoning). the lnn can train constraints and rules with logical functions in the neural network, and since every neuron in the network corresponds to a formula of weighted real-valued logic, it can calculate the probability and contradiction loss for each proposition. at the same time, a trained lnn follows symbolic rules, which means it yields a highly interpretable, disentangled representation. using this benefit of lnn, we proposed a neuro-symbolic rl method that uses pre-defined external knowledge in logical networks, and the method successfully plays text-based games @xcite . in this demonstration (demo site: https://ibm.biz/acl21-loa ), we present the logical optimal actions (loa) architecture for neuro-symbolic rl applications with lnn @xcite for text-based interaction games. while natural language-based interactive agents are an ambitious but attractive target for real-world applications of neuro-symbolic methods, it is not easy to provide an environment for such an agent. the proposed demonstration uses a text-based game learning environment, called textworld @xcite , as a miniature of a natural language-based interactive environment. the demonstration provides a web-based user interface for visualizing the game interaction, which includes displaying the natural-text observation from the environment, typing the action sentence, and showing the reward obtained from the taken action. loa in this demonstration also visualizes trained and pre-defined logical rules in the lnn via the same interface, which helps the human user understand the benefits of introducing logical rules via neuro-symbolic frameworks. we also supply an open-source implementation of the demo environment and several rl methods; the implementation contains our logical approaches as well as other state-of-the-art agents. ## logical optimal action our proposed loa is an rl framework combining logical reasoning and neural network training. both are provided by the functionality of lnn @xcite , which simultaneously provides key properties of both neural networks and symbolic logic. figure 1 shows the overall architecture of loa. the loa model receives logical facts from a language understanding component, which in turn receives the raw natural language state from the environment. the model forwards the input through the lnn to obtain the optimal action; the action is executed in the environment, and the resulting reward is fed back to the loa agent.
loa trains the action decision network in the lnn using the acquired reward and the action chosen by the network. ## loa demo the proposed web-based loa demonstration supports two functionalities: 1) playing text-based games through human interaction, and 2) visualizing the trained and pre-defined lnn to increase the interpretability of the acquired rules. for playing the games through the web interface, fig. 2 shows the initial view of the loa demonstration. on the left-hand side, we can choose among several existing text-based interaction games foot_0 , such as the textworld coin-collector game @xcite , the textworld cooking game @xcite , the textworld commonsense cleanup game (keerthiram murugesan and campbell, 2021), and jericho games @xcite . figure 3 shows the view for playing the textworld game, and fig. 4 shows the view for another game (the cleanup task). the human player can input any action in natural language, and the demonstration system then displays the raw observation output from the environment. for visualizing the trained and pre-defined neuro-symbolic network in the lnn, fig. 5 and fig. 6 show examples of the lnn output. in these figures, the lnn contains simple rules for the textworld coin-collector game; for example, the agent takes the 'go west' action when it finds the west room ("found west" → "go west"). the rounded box represents a proposition from the given observation inputs, the circle with a logical function denotes a logical function node of the lnn, and the rectangular box represents an action candidate for the agent. highlighted nodes (red) have the value 'true', and non-highlighted nodes (white) have the value 'false'. in fig. 5 , the agent has found the north exit from the given observation (@xmath0), so the going-north action ("go north") is activated. in fig. 6 , if the user clicks the selectable box, loa recommends only one action, 'go north'. in this demonstration we show the benefit of introducing the lnn into an rl agent; we do not let the loa framework choose the action automatically. however, when rl is executed with the loa framework, the agent can converge faster than with other non-symbolic and neuro-symbolic methods. after selecting the "go north" action at @xmath1, the next observation sentence and the lnn output for the next step are shown in fig. 7 . in this step, the agent finds two doors, east and south; however, the south door is connected to the previous room because the agent took the going-north action in the previous step. since this lnn is a simple one, the "go south" action is also recommended in fig. 7 . figure 8 shows the output of a more complicated lnn with functionality for avoiding revisits to already-visited rooms. using such an lnn, loa outputs only the "go east" action, thanks to the contradiction loss in the lnn. this is a benefit of introducing the neuro-symbolic framework: with the interpretability provided by loa, the human user can easily understand the reason for the agent's action. ## conclusion we propose a novel demonstration (url: https://ibm.biz/acl21-loa ) which lets users play text-based games in a web interface and visualizes the benefit of the neuro-symbolic algorithm. this application helps the human user understand the trained network and the reason for the action taken by the agent. we also provide more complicated lnns for other, more difficult games on the demo site. we have also released the source code for the demonstration (url: https://github.com/ibm/loa ).
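a toy propositional sketch of the kind of rule the lnn visualization encodes ("found X" → "go X"), with a filter against revisiting rooms standing in for the contradiction loss; this is an illustrative simplification, not the lnn or loa api.

```python
def choose_action(observation, visited):
    """Toy 'found X -> go X' rules with a filter against revisiting rooms."""
    directions = ["north", "south", "east", "west"]
    # Facts extracted from the natural-language observation (simplified).
    facts = {d: f"{d} exit" in observation or f"door to the {d}" in observation
             for d in directions}
    # Rule firing: propose 'go X' for every direction found...
    candidates = [f"go {d}" for d, found in facts.items() if found]
    # ...then drop actions leading back to visited rooms (contradiction-like filter).
    return [a for a in candidates if a.split()[1] not in visited]

# Example: the north exit is found; south leads back to the previous room.
print(choose_action("you see a north exit and a door to the south", {"south"}))
# -> ['go north']
```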
| 7,589
|
40
| 2,020
|
English-to-Chinese Transliteration with Phonetic Auxiliary Task
|
Approaching named entity transliteration as a Neural Machine Translation (NMT) problem is common practice. While many have applied various NMT techniques to enhance machine transliteration models, few focus on the linguistic features particular to the relevant languages. In this paper, we investigate the effect of incorporating phonetic features for English-to-Chinese transliteration under the multi-task learning (MTL) setting, where we define a phonetic auxiliary task aimed at improving the generalization performance of the main transliteration task. In addition to our system, we also release a new English-to-Chinese dataset and propose a novel evaluation metric which considers multiple possible transliterations given a source name. Our results show that the multi-task model achieves similar performance to the previous state of the art with a model of a much smaller size.
|
https://aclanthology.org/2020.aacl-main.40
|
## introduction transliteration, the act of mapping a name from the orthographic system of one language to another, is directed by the pronunciation in the source and target languages, and often by historical reasons or conventions. it plays an important role in tasks like information retrieval and machine translation @xcite . over recent years, many have addressed transliteration using sequence-to-sequence (seq2seq) deep learning models @xcite @xcite , enhanced with several nmt techniques @xcite . however, this recent work neglects the most crucial feature for transliteration, i.e. pronunciation. depending on the specific language, the written form of a word reveals its pronunciation to various extents. for alphabetical languages such as english and french, a letter, or a sequence of letters, usually reflects the word's pronunciation. for example, the word amy (in the international phonetic alphabet, ipa, /ˈei.mi/) has the sub-word a corresponding to /ˈei./ and my corresponding to /mi/. in contrast, characters in a logographic foot_0 writing system for languages like chinese or japanese do not explicitly indicate sound @xcite . in this paper, we address the problem of transliteration from english (alphabet) to chinese foot_1 (logogram) using an rnn-based mtl model with a phonetic auxiliary task. we transform each chinese character to the alphabetical representation of its pronunciation via the official phonetic writing system, pinyin, foot_2 which uses latin letters with four diacritics denoting tones to represent the sounds. for example, the chinese transliteration for amy is 艾米 and the associated pinyin representation is ài mǐ; the correspondences in this example are english a, ipa /ˈei./, chinese 艾, pinyin ài, and english my, ipa /mi/, chinese 米, pinyin mǐ. due to the similarity between the source name and the pinyin representation, @xcite proposed a sequential transliteration model that uses pinyin as an intermediate representation before transliterating a chinese name to english. in contrast, our idea is to build a model with a shared encoder and dual decoders that can learn the mapping from english to chinese and to pinyin simultaneously. by jointly learning source-to-target and source-to-sound mappings, the encoder is expected to generalize better @xcite and pass more refined information to the decoders. transliteration datasets are often extracted from dictionaries, or from aligned corpora generated by applying a named entity recognition (ner) system to parallel newspaper articles in different languages @xcite . we use two datasets for our experiments, one taken from the news machine transliteration shared task @xcite and the other extracted from a large dictionary. we evaluate the transliteration system using both the conventional word accuracy and a novel metric designed for english-to-chinese transliteration (see section 5). our contributions are as follows: we report accuracy and f-score of 0.299 and 0.6799, respectively, on the news dataset, with a model of 22m parameters, compared to the previous state of the art @xcite , which achieves accuracy and f-score of 0.304 and 0.6791, respectively, with a model of 133m parameters; we also evaluate on the dict dataset. an example data point in the form (source x, target y, pinyin p) is: caleigh, 凯莉, kai li. ## problem formulation we use the word vocabulary to describe the set of characters for the purpose of our task specification. let v src and v tgt denote the source and target vocabularies, respectively.
for a source word x of length $I$ and a target word y of length $J$, we have $x = (x_1, \ldots, x_I) \in V_{\mathrm{src}}^{I}$ and $y = (y_1, \ldots, y_J) \in V_{\mathrm{tgt}}^{J}$. we formulate the task of transliteration as a supervised learning problem: given a collection of $n$ training examples, $\{(x^{(i)}, y^{(i)})\}_{i=1}^{n}$, the objective is to learn a predictor function $f : V_{\mathrm{src}}^{*} \rightarrow V_{\mathrm{tgt}}^{*}$ whose parameters maximize the conditional probability $p(y \mid x)$ over the training examples. for our multi-task transliteration model, the predictor becomes $f_{\mathrm{mtl}} : x \mapsto (y, p)$, where p denotes the written representation of the pronunciation of the target word y. for decoding, we maximize the conditional probabilities $p(p \mid x, \tilde{y})$ and $p(y \mid x, \tilde{p})$, where $\tilde{y}$ and $\tilde{p}$ refer to the implicit information channeled by one task to the other. the phonetic information we use for our task is the pinyin version of the name in chinese, without tone marks, foot_3 because tone marks are often removed when spelling chinese names in an alphabetical language. an example data point in the form (x, y, p) was given above. ## dataset preparation we experiment with two different english-to-chinese datasets. for simplicity, we denote the one taken from the news machine transliteration shared task @xcite as "news," and the one extracted from the dictionary (xinhua news agency, 2007) as "dict." ## model our model aims to solve english-to-chinese transliteration through joint supervised learning of a source-to-target (main) task and a source-to-pinyin (auxiliary) task. training closely related tasks together can help the model learn information that is often ignored in single-task learning, thus obtaining a better representation in the shared layers (in our case, the encoder). moreover, the auxiliary task implicitly provides the phonetic information that is not easily learned through the single main task, given the characteristics of chinese (see section 1). our model has a sequence-to-multiple-sequence (seq2multiseq) architecture that contains a shared encoder and dual decoders. between the encoder and the decoders is a bridge layer that transforms the encoder's final state before passing it to each decoder (we call it a "bridge" because it connects the shared encoder to each decoder; it allows flexible choices of the hidden sizes of the encoder and decoders and serves as an intermediate "buffer"). the encoder has an embedding layer with dropout @xcite , followed by a 2-layer bilstm @xcite . the bridge layer consists of a linear layer followed by tanh activation. the shared encoder passes its final state to the main-task decoder and the auxiliary-task decoder via separate bridge layers. in each decoder, we use additive attention @xcite to compute the context vector (the weighted sum of the encoder outputs according to the attention scores), then concatenate it with the target embedding to form the input of the subsequent 2-layer feed-forward lstm. the prediction is made by feeding the concatenation of the lstm's output, the context vector and the target embedding into a linear layer followed by log-softmax. our model is expected to simultaneously maximize the conditional probabilities mentioned in section 2.
to achieve this goal, we use a linear combination of the main-task decoder's loss foot_10 (negative log-likelihood; $L_y$) and the auxiliary-task decoder's loss ($L_p$) as the model's objective function: $L = \lambda L_y + (1 - \lambda) L_p$, where $\lambda$ weights the two tasks. ## adaptive evaluation metrics we evaluate the transliteration system using word accuracy (acc) and its variants on the 1-best output. we use acc and acc+ to denote the original accuracy and its variant with multiple references. the drawback of acc is that it may underestimate the quality of the system because it neglects the possibility of having more than one transliteration for a given source name, as is the case for english-to-chinese transliteration. for example, mona can be transliterated as 莫娜 (female) or 莫纳 (male), with a minimum edit distance (med) of 1 between the two targets, and colina as 科莉娜 (female) or 科利纳 (male), with a med of 2. based on the knowledge of a native chinese speaker, we analyze the english-to-chinese dataset and summarize the key observations for source names with multiple target transliterations as follows: the minimum edit distance (med) between any two such target names is @xmath3, and their lengths are the same; for any two such target names, the distinct characters occur in the same positions, and they often indicate the gender of the name (see the examples above). to use the alternating character table (act) in accord with the above observations, we propose the following criterion for the accuracy indicator function (we refer to it as acc-act). let the subscript $t$ denote the position of a character; then $I_{\mathrm{criterion}}(\hat{y}, y) = 1$ if either $\mathrm{med}(\hat{y}, y) = 0$ (which covers all the cases for acc) or the distinct characters between $\hat{y}$ and $y$ occur only in positions where the act lists them as interchangeable. there is no guarantee that characters that are interchangeable according to the act can replace each other in every scenario, but we apply the act-based criterion only in this restricted evaluation setting. (model hyperparameters, given as encoder / main decoder / auxiliary decoder: embedding size 256 / 256 / 128 with dropout 0.1 / 0.1 / 0.1; rnn hidden size 512 / 512 / 128 with dropout 0.2 / 0.2 / 0.1.) ## experimental setup recall from section 4 that we use $\lambda$ to denote the weighting of the two tasks we train. we set the single-main-task ($\lambda = 1$) and the single-auxiliary-task ($\lambda = 0$) models as the baselines, and compare multi-task models of different weightings ($\lambda \in \{1/6, 1/4, 1/2, 2/3, 5/6, 8/9\}$) against them. we conduct experiments on both the news and dict datasets and select the best model for each of them to compare to the previous state of the art. ## discussion furthermore, we present some typical examples in which the multi-task model generates better predictions than the single-task model. still, the multi-task model does not consistently handle all names better than the single-task model, especially for exceptional names that do not have a regular transliteration. for instance, the name fyleman is transliterated into 法伊尔曼, but the character 伊 does not have any source-word correspondence if we consider the pronunciation of the source name. finally, our model can be generalized to other transliteration tasks by replacing pinyin with other phonetic representations such as ipa for english and rōmaji for japanese. in addition, acc-act can be extended to alphabetical languages by, for instance, constructing an alternating sub-word table which stores lists of interchangeable subsequences. another possible future work is to redesign the objective function by treating $\lambda$ as a trainable parameter or including correlation information @xcite . ## related work previous work has demonstrated the effectiveness of using mtl through joint learning of various nlp tasks such as machine translation, syntactic parsing and dependency parsing @xcite @xcite . most of this work rests on a similar idea: creating a unified training setting for several tasks by sharing the core parameters.
besides, machine transliteration has a long history of using phonetic information, for example, by mapping a phrase to its pronunciation in the source language and then converting the sound to the target word @xcite . there is also relevant work that uses both graphemes and phonemes to various extents for transliteration, such as the correspondence-based @xcite and g2p-based (le and sadat, 2018) approaches. our work is inspired by the intuitive understanding that pronunciation is essential for transliteration, and by the success of incorporating phonetic information such as pinyin @xcite and ipa @xcite into the model design. ## conclusion we argue in this paper that language-specific features should be used when solving transliteration in a neural setting, and we exemplify a way of using phonetic information as transferred knowledge to improve a neural machine transliteration system. our results demonstrate that the main transliteration task and the auxiliary phonetic task are indeed mutually beneficial in english-to-chinese transliteration, and we discuss the possibility of applying this idea to other language pairs.
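a minimal pytorch-style sketch of the joint objective $L = \lambda L_y + (1 - \lambda) L_p$ described above; tensor shapes and function names are illustrative assumptions, not the authors' released code.

```python
import torch.nn.functional as F

def multitask_loss(main_logits, main_targets, aux_logits, aux_targets, lam=0.5):
    """Linear combination of the main-task (Chinese) and auxiliary-task
    (pinyin) negative log-likelihoods: L = lam * L_y + (1 - lam) * L_p."""
    # Logits have shape (batch, seq_len, vocab); move vocab to dim 1
    # as cross_entropy expects (batch, classes, seq_len).
    l_y = F.cross_entropy(main_logits.transpose(1, 2), main_targets)
    l_p = F.cross_entropy(aux_logits.transpose(1, 2), aux_targets)
    return lam * l_y + (1 - lam) * l_p

# lam = 1 recovers the single main task, lam = 0 the single auxiliary task.
```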
| 1,662
|
602
| 2,025
|
Bayelemabaga: Creating Resources for Bambara NLP
|
Data curation for under-resourced languages enables the development of more accurate and culturally sensitive natural language processing models. However, the scarcity of well-structured multilingual datasets remains a challenge for advancing machine translation in these languages, especially for African languages. This paper focuses on creating high-quality parallel corpora that capture linguistic diversity to address this gap. We introduce Bayelemabaga, the most extensive curated multilingual dataset for machine translation in the Bambara language, the vehicular language of Mali. The dataset consists of 47K Bambara-French parallel sentences curated from 231 data sources, including short stories, formal documents, and religious literature, combining modern, historical, and indigenous languages. We present our data curation process and analyze its impact on neural machine translation by fine-tuning seven commonly used transformer-based language models, i.e., MBART, MT5, M2M-100, NLLB-200, Mistral-7B, Open-Llama-7B, and Meta-Llama3-8B, on Bayelemabaga. Our evaluation on four Bambara-French language pair datasets (three existing datasets and the test set of Bayelemabaga) shows gains of up to +4.5, +11.4, and +0.27, respectively, on the BLEU, CHRF++, and AfriCOMET evaluation metrics. We also conducted machine and human evaluations of translations from the studied models to compare the machine translation quality of encoder-decoder and decoder-only models. Our results indicate that encoder-decoder models remain the best, highlighting the importance of additional datasets to train decoder-only models.
|
https://aclanthology.org/2025.naacl-long.602
|
## introduction driven by the availability of massive, digitized data sets and advancements in neural architectures @xcite , state-of-the-art natural language processing (nlp) models are widely applied to the world's high-resource languages (e.g., english, french, spanish). they are employed in tasks such as machine translation (mt) @xcite , named entity recognition (ner) @xcite , and automatic speech recognition (asr) @xcite . yet the vast majority of the world's languages, and by extension the people who speak these languages, lack the digitized data resources needed to support mt systems @xcite such as google translate and other important language technology applications. these under-resourced languages have yet to benefit from recent advances because they lack the large volumes of text needed to drive language technology development. the case of neural machine translation @xcite is particularly representative, as it requires large volumes of parallel data between pairs of source and target languages. moreover, the available data in under-resourced languages is often noisy and diverse, with non-standardized spelling, accenting, marking, multiple scripts, code-switching, etc. for example, bambara, a tonal language with a rich morphology from the mande language family foot_0 , has several competing writing systems: adjami (arabic-based), latin, and n'ko. however, since bambara is historically an oral-only language, most of its speakers have never been taught to read or write it. as a rule, available resources do not fit standard writing systems (e.g., systems developed during colonization) or lack a standard orthography or ways to express features, such as tonality, absent in colonial scripts. to cope with this problem, language experts are actively working on standardizing the existing vocabulary and coining new words to enrich the language and support automated text processing. initiatives in this direction include masakhane for african languages @xcite , the increased presence of under-resourced languages in the popular machine translation (mt) competitions of the annual conference on machine translation (wmt) @xcite , and africanlp, a workshop dedicated to african language technologies. to help alleviate the scarcity of data for machine translation of under-resourced languages, we introduce bayelemabaga, a new comprehensive dataset for machine translation that comprises 46,976 pairs of bambara and french sentences. we collected data from decades of linguistics work on bambara from inalco foot_1 's corpus bambara de reference foot_2 , aligned the collected sentences in both languages, investigated their morphological structure, and curated the content to ensure adequacy for machine translation. we evaluate the adequacy of bayelemabaga through a set of research questions (rqs). bayelemabaga aims to improve the quality of translation models for the bambara language by providing a richer training resource that is also adaptable to other natural language processing tasks. ## related work several linguistic studies have been conducted on the bambara language, providing valuable insights into its structure @xcite , syntax @xcite , grammar (dombrowsky-hahn, 2020), and phonology @xcite . these studies serve as foundational resources for further research and resource development @xcite @xcite @xcite @xcite .
while these linguistic studies are essential for understanding the language, more up-to-date and accessible resources that can be utilized by a broader audience, including language learners, educators, researchers from the nlp community, and the general public, are needed. educational materials for learning bambara are relatively scarce compared to those for more widely taught high-resource languages, such as french or english. however, some resources exist, primarily textbooks and language learning guides @xcite . while these materials are valuable, they may be outdated or difficult to access, particularly for learners outside academic or linguistics research settings. there is a need for more interactive and accessible educational resources that cater to different learning styles and proficiency levels. some online dictionaries and language learning apps exist @xcite but are often limited in scope or functionality. additionally, there is a lack of digital corpora or databases that could facilitate machine translation (mt), automatic speech recognition (asr), and text-to-speech (tts) @xcite . leveraging technology to create digital resources, such as interactive language learning platforms, mobile apps, and multimedia content, could significantly improve accessibility and engagement for bambara learners and speakers. various organizations and initiatives have been working to promote and preserve the bambara language, such as inalco and the academie malienne des langues (amalan) in mali, which aims to standardize and encourage the use of national languages, including bambara. however, more comprehensive and sustained efforts are needed to create resources that support language preservation, such as the development of educational materials, the promotion of bambara in media and literature, and the integration of the language into formal education systems @xcite . additionally, while the bambara language has a rich linguistic heritage and a significant number of speakers, the availability of resources for neural machine translation is limited compared to high-resource languages like french or english @xcite . to address these gaps and meet the growing demand to make bambara a language supported by modern language technology, we put together a collaborative effort involving linguists, educators, technology experts, and community stakeholders to curate decades of linguistic data from varying sources, spanning books, periodicals, news, etc., for machine learning, including machine translation. ## the bayelemabaga dataset we created a parallel text dataset for the dialect continuum of mande languages spoken in west africa. our contribution focuses on the bambara language, described by tapo et al. (2020) as a tonal language (words with different tonal inflections convey different meanings) with a rich morphological structure, similar to other languages in the mande language family. this family consists of several languages (bambara, dyula, maninka, etc.) spoken by 30-40 million people across the african continent, among whom there are around 15-18 million bambara speakers, primarily in mali. with three central writing systems (adjami, latin, and n'ko), bambara uses diacritical marks to indicate high or low tones in the spoken language, helping distinguish between words that use the same sequence of letters. its latin script has 27 letters and excludes q, v, and x, which are common in french and english.
the latin script also includes additional characters beyond the basic latin alphabet, such as ɛ, ɔ, ɲ, and ŋ. ## experiments we evaluate the quality of the bayelemabaga dataset by comparing model performance before and after curation using various machine translation models. ## conclusion in this paper, we introduced bayelemabaga, a bambara-french parallel corpus of 47k sentence pairs collected from 231 data sources and curated to improve the quality of mt tasks. we explored the effect of curated data on mt compared to utilizing the raw dataset. we observed that bayelemabaga improves translation quality by up to +4.5, +11.4, and +0.27 on bleu, chrf++, and africomet scores. furthermore, we investigated the benefits of introducing a new dataset by fine-tuning seven mt models (mbart, mt5, m2m-100, nllb-200, mistral-7b, open-llama-7b, and meta-llama3-8b) on bayelemabaga and evaluating them on three existing bambara-french corpora. our comparisons demonstrated that models fine-tuned on bayelemabaga improve translation quality across all datasets. we also explored the impact of our new dataset on existing encoder-decoder and decoder-only models. machine and human evaluations showed that the encoder-decoder models yield the highest quality in bambara-french and french-bambara translation. in future work, we will conduct a more detailed human evaluation to explain the performance of decoder-only models and investigate whether our observations on machine translation apply to speech data, especially for primarily oral languages (pols).
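a small sketch of how the bleu and chrf++ figures reported above could be computed with the sacrebleu library; the file names are illustrative assumptions.

```python
import sacrebleu

# Hypothetical files: one sentence per line, hypotheses aligned with references.
with open("hyps.fr") as f:
    hyps = [line.strip() for line in f]
with open("refs.fr") as f:
    refs = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])
# word_order=2 turns chrF into chrF++, as used in the paper's evaluation.
chrf = sacrebleu.corpus_chrf(hyps, [refs], word_order=2)
print(f"BLEU: {bleu.score:.1f}  chrF++: {chrf.score:.1f}")
```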
| 39,686
|
742
| 2,022
|
End-to-End Unsupervised Vision-and-Language Pre-training with Referring Expression Matching
|
Recently there has been an emerging interest in unsupervised vision-and-language pre-training (VLP) that learns multimodal representations without parallel image-caption data. These pioneering works significantly reduce the cost of VLP on data collection and achieve promising results compared to supervised VLP. However, existing unsupervised VLP methods take as input pre-extracted region-based visual features from external object detectors, which both limits flexibility and reduces computational efficiency. In this paper, we explore end-to-end unsupervised VLP with a vision encoder to directly encode images. The vision encoder is pre-trained on image-only data and jointly optimized during multimodal pre-training. To further enhance the learned cross-modal features, we propose a novel pre-training task that predicts which patches contain an object referred to in natural language from the encoded visual features. Extensive experiments on four vision-and-language tasks show that our approach outperforms previous unsupervised VLP methods and obtains new state-of-the-art results.
|
https://aclanthology.org/2022.emnlp-main.742
|
## introduction vision-and-language pre-training (vlp) @xcite @xcite @xcite @xcite has achieved great success on a wide range of vision-and-language tasks, e.g., visual question answering @xcite , image-text retrieval @xcite and text-to-image generation @xcite . the major challenge for vlp is how to bridge the gap between the representations of the vision and language modalities, which is typically addressed by training on large-scale parallel image-text datasets @xcite @xcite with specially designed pre-training tasks. however, these datasets require either extensive human annotations or massive data cleaning efforts, making them difficult to collect, especially when compared to the large amount of unimodal data. to alleviate this problem, some works have recently emerged exploring unsupervised vision-and-language pre-training (uvlp), where only non-parallel image and text data is leveraged @xcite . specifically, @xcite propose to use image region features and their detected object tags produced by an object detector as pseudo-parallel pairs to bridge the gap between the two modalities. @xcite further enrich the training data with retrieved text pieces based on object tags and pre-train their model with multi-granular alignment tasks. these works achieve competitive results compared to several supervised vlp models, demonstrating the potential of uvlp. however, current research on uvlp adopts a two-step training strategy that first extracts region-based image features with an external object detector and then builds a multimodal model based on the region features. this is considered to have several limitations for vlp. first, region features may be sub-optimal for vlp because they are designed for object detection tasks rather than general cross-modal understanding and are fixed in the pre-training process @xcite . second, the process of extracting region features is time-consuming, which significantly reduces inference efficiency @xcite . finally, this two-step training strategy hinders the use of vision pre-trained models (v-ptms) such as vit @xcite and swin transformer @xcite , which are not off-the-shelf object detectors but achieve promising performance on general vision tasks. therefore, how to perform uvlp in an end-to-end manner, i.e., using raw images instead of region features, is still a valuable open question. to explore the question, we propose an end-to-end uvlp framework named e2e-uvlp. the framework consists of a vision encoder and a pretrained language model (plm), both pre-trained on unimodal data and connected by a linear projection layer. taking image patches as input, our framework is capable of leveraging a wide range of v-ptms. without using object tags in inference, the computational cost introduced by external object detectors is eliminated. inspired by previous works @xcite @xcite , we derive a masked tag prediction (mtp) pre-training task which predicts the masked object tags given a raw image and the other object tags detected from it. combining mtp with the widely used masked language modeling (mlm) task @xcite , we successfully make e2e-uvlp achieve comparable or better results than existing uvlp methods, justifying that end-to-end uvlp is feasible. although the mtp task is effective, further investigation reveals that the obtained model is less effective when dealing with complex attributes of objects, e.g., locating objects or determining the relationship between objects in an image.
we argue it is due to two pitfalls of the mtp objective, the first being the discrepancy between training and inference: an object is referred to by its tag and numerically encoded position in training, while it is referred to only in natural language at inference. similar discrepancies have been shown to hurt performance significantly in plm studies @xcite . in summary, our contributions are three-fold. ## conclusion we propose a novel framework that performs end-to-end unsupervised vision-and-language pre-training without using costly and sub-optimal region features. to reduce the training-inference discrepancy, we propose a new pre-training task that predicts the locations of objects with synthetic referring expressions that are more similar to real text. experiments show that our approach consistently outperforms existing unsupervised vision-and-language pre-training methods, and achieves competitive results compared to supervised vision-and-language pre-trained models.
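a schematic sketch of the patch-level referring-expression matching idea described in this record's abstract: given encoded patch features and a referring expression, predict for each patch whether it contains the referred object. module names, shapes, and the supervision scheme are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PatchReferringHead(nn.Module):
    """Scores each image patch for containing the object a text refers to."""
    def __init__(self, dim=768):
        super().__init__()
        self.scorer = nn.Linear(2 * dim, 1)

    def forward(self, patch_feats, text_feat):
        # patch_feats: (batch, num_patches, dim); text_feat: (batch, dim).
        text = text_feat.unsqueeze(1).expand(-1, patch_feats.size(1), -1)
        logits = self.scorer(torch.cat([patch_feats, text], dim=-1)).squeeze(-1)
        return logits  # (batch, num_patches), one logit per patch

# Training: binary cross-entropy against per-patch labels derived from the
# referred object's bounding box (patches overlapping the box are positives).
head = PatchReferringHead()
loss_fn = nn.BCEWithLogitsLoss()
```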
| 15,639
|
3
| 2,020
|
DART: A Lightweight Quality-Suggestive Data-to-Text Annotation Tool
|
We present a lightweight annotation tool, the Data AnnotatoR Tool (DART), for the general task of labeling structured data with textual descriptions. The tool is implemented as an interactive application that reduces human effort in annotating large quantities of structured data, e.g. in the format of a table or tree structure. By using a backend sequence-to-sequence model, our system iteratively analyzes the annotated labels in order to better sample unlabeled data. In a simulation experiment on annotating large quantities of structured data, DART has been shown to reduce the total number of annotations needed by combining active learning with automatic suggestion of relevant labels.
|
https://aclanthology.org/2020.coling-demos.3
|
## introduction neural data-to-text generation has been the subject of much research in recent years @xcite . traditionally, the task takes as input structured data in the form of tables with attribute and value pairs, and generates free-form, human-readable text. (figure: system overview showing unlabeled data flowing through the data sampler and the sequence-to-sequence uncertainty scorer to the expert annotator and the annotation quality estimator; structured inputs such as name[blue spice] eattype[coffee shop] area[city centre] and dialogue-act trees such as [ __dg_inform__ [ __arg_eattype_pub__ ] [ __arg_name__ ] [ __arg_near__ ] ] are linearized before scoring.) ## annotation framework dart is a desktop application built with pyqt5 foot_0 . it is compiled into a single executable with pyinstaller foot_1 , a tool that supports both mac os and windows environments. it contains an intuitive interface as described in section 3. annotation experts interact with dart in the following way: (1) a file containing unlabeled data is uploaded. (2) the system samples some data instances from the file, with a selection strategy based on signals from the sequence-to-sequence uncertainty scorer (section 2.1) and performed with the data sampler (section 2.2). (3) experts then annotate the provided data by correcting the suggested labels (available after the first iteration of (1)-(2)). (4) during the annotation process, the quality of the labeled corpus is indicated by the annotation quality estimators (section 2.3), which help experts determine whether the process should be terminated. we discuss each component in more detail below. ## experiments data. we use two different types of structured data: (a) attribute-value pairs as used in the crowdsourced e2e dataset @xcite , and (b) the graph-structured data as defined in @xcite on the weather domain. to simulate the annotation process, we employ the given training, development and test sets of each dataset for annotation tool evaluation, with the test set kept fixed. this amounts to roughly 42k samples for the e2e and 32k for the weather training sets. ## conclusions while a wide range of annotation tools for nlp tasks exists, most of these tools are targeted at non-textual labels. dart is designed to enable ease of annotation where the labels are textual descriptions and the inputs are structured data. this is the initial version of the tool, and we hope to extend it to include a web-based version and to expand its functionality in the following ways: (1) support different types of encoders, and (2) improve upon the data sampling process.
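a condensed sketch of the annotate-and-resample loop described above; `model.confidence`, `model.suggest`, `expert.correct`, `model.update`, and `model.estimate_quality` are hypothetical stand-ins for dart's uncertainty scorer, label suggester, expert interface, and quality estimator.

```python
def annotation_loop(unlabeled, model, expert, batch_size=50, quality_target=0.9):
    """Iteratively sample the most uncertain instances for expert correction."""
    labeled = []
    while unlabeled:
        # Rank remaining data by the seq2seq model's confidence (lowest first).
        scored = sorted(unlabeled, key=lambda d: model.confidence(d))
        batch, unlabeled = scored[:batch_size], scored[batch_size:]
        # The expert corrects the model's suggested textual labels.
        labeled += [(d, expert.correct(d, model.suggest(d))) for d in batch]
        model.update(labeled)  # retrain / fine-tune on corrected labels
        if model.estimate_quality(labeled) >= quality_target:
            break              # quality estimator says we can stop
    return labeled
```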
| 3,493
|
574
| 2,022
|
Learning Disentangled Representations of Negation and Uncertainty
|
Negation and uncertainty modeling are long-standing tasks in natural language processing. Linguistic theory postulates that expressions of negation and uncertainty are semantically independent from each other and the content they modify. However, previous works on representation learning do not explicitly model this independence. We therefore attempt to disentangle the representations of negation, uncertainty, and content using a Variational Autoencoder. We find that simply supervising the latent representations results in good disentanglement, but auxiliary objectives based on adversarial learning and mutual information minimization can provide additional disentanglement gains.
|
https://aclanthology.org/2022.acl-long.574
|
## introduction in formal semantics, negation and uncertainty are operators whose semantic functions are independent of the propositional content they modify @xcite foot_2 . that is, it is possible to form fluent statements by varying only one of these aspects while leaving the others the same. negation, uncertainty, and content can thus be viewed as disentangled generative factors of knowledge and belief statements (see figure 1 , which uses the example statement "trees might not have leaves"). disentangled representation learning (drl) of factors of variation can improve the robustness of representations and their applicability across tasks @xcite . specifically, negation and uncertainty are important for downstream nlp tasks such as sentiment analysis @xcite , question answering @xcite , and information extraction @xcite . disentangling negation and uncertainty can therefore provide robust representations for these tasks, and disentangling them from content can assist tasks that rely on core content preservation such as controlled generation @xcite and abstractive summarization @xcite . still, no previous work has tested whether negation, uncertainty, and content can be disentangled, as linguistic theory suggests, although previous works have disentangled attributes such as syntax, semantics, and style @xcite @xcite @xcite . to fill this gap, we aim to answer the following research questions: rq1: is it possible to estimate a model of statements that upholds the proposed statistical independence between negation, uncertainty, and content? rq2: a number of existing disentanglement objectives have been explored for text, all giving promising results. how do these objectives compare for enforcing disentanglement on this task? ## background we here provide relevant background on negation and uncertainty processing and disentangled representation learning in nlp, as well as a discussion of how this study fits in with previous work. ## proposed approach we describe our overall model in section 3.1. section 3.2 enumerates three specific desiderata for disentangled representations, and sections 3.3 and 3.4 describe how we aim to satisfy these desiderata. ## experiments we describe our datasets, preprocessing, and data augmentation methods in section 4.1. section 4.2 describes our evaluation metrics and how they target the desiderata for disentanglement given in section 3.2. ## conclusion motivated by linguistic theory, we proposed a generative model of statements in which negation, uncertainty, and content are disentangled latent variables. we estimated this model using a vae, comparing the performance of existing disentanglement objectives. via a suite of evaluations, we showed that it is indeed possible to disentangle these factors. while objectives based on adversarial learning and mi minimization resulted in disentanglement and consistency gains, we found that a decent balance between variable disentanglement and reconstruction ability was obtained by simple supervision of the latent representations (i.e., the inf objective). also, our 1-dimensional negation and uncertainty representations achieved high predictive performance, despite their simplicity. future work will explore alternative latent distributions, such as discrete distributions @xcite , which may better represent these operators. this work has some limitations. first, our model does not handle negation and uncertainty scope, but rather assumes that operators scope over the entire statement.
our model was estimated on relatively short, single-statement sentences to satisfy this assumption, but future work will investigate how operator disentanglement can be unified with models of operator scope in order to apply it to longer examples with multiple clauses. second, while our models achieved high disentanglement, they fell short on the controlled generation task. we found that this was likely due to the models memorizing sentence length, constraining the reconstructions in a way that is incompatible with the addition of negation and uncertainty cue tokens. @xcite also noticed this tendency for sentence-length memorization in vaes, and future work will explore their suggested remedies, such as encoder pretraining.
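a schematic sketch of the simple latent-supervision ("inf") objective that the paper found to balance disentanglement and reconstruction: the 1-dimensional negation and uncertainty latents are trained against binary operator labels on top of the usual vae loss. names and weights are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def vae_inf_loss(recon_logits, tokens, mu, logvar, z_neg, z_unc, y_neg, y_unc,
                 beta=1.0, sup_weight=1.0):
    """ELBO plus direct supervision of the 1-d negation/uncertainty latents."""
    # Reconstruction: logits (batch, seq, vocab) vs. token ids (batch, seq).
    recon = F.cross_entropy(recon_logits.transpose(1, 2), tokens)
    # Standard Gaussian KL term of the VAE.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Supervise each scalar latent with the statement's binary operator label.
    sup = F.binary_cross_entropy_with_logits(z_neg, y_neg) + \
          F.binary_cross_entropy_with_logits(z_unc, y_unc)
    return recon + beta * kl + sup_weight * sup
```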
| 13,376
|
24
| 2,023
|
Bidirectional Neural Machine Translation (NMT) using Monolingual Data for the Khasi-English Pair
|
Due to a lack of parallel data, low-resource language machine translation has been unable to make the most of Neural Machine Translation. This paper investigates several approaches to how low-resource Neural Machine Translation can be improved in a strictly low-resource setting, especially for the bidirectional Khasi-English language pair. The back-translation method is used to expand the parallel corpus using monolingual data. The work also experiments with subword tokenizers to improve translation accuracy for new and rare words. Transformer, a cutting-edge NMT model, serves as the backbone of the bidirectional Khasi-English machine translation. The final Khasi-to-English and English-to-Khasi NMT models, trained using both authentic and synthetic parallel corpora, show BLEU score increases of 2.34 and 3.1, respectively, compared to the models trained using only the authentic parallel dataset.
|
https://aclanthology.org/2023.icon-1.24
|
## introduction machine translation is a sub-field of natural language processing that deals with the automatic translation of human languages. the translation can be text-to-text, speech-to-speech, speech-to-text or text-to-speech. text-based machine translation has come a long way, from rule-based and example-based translation through statistical machine translation (smt) to neural machine translation (nmt). recurrent neural networks (rnn) addressed the problems that rule-based and statistical machine translation approaches had in capturing exceptions in human languages and retaining word dependencies. they are, however, slow to train and have limitations when it comes to modeling long-term dependencies. recent advancements in nmt have demonstrated outstanding efficiency by combining the encoder-decoder architecture with an attention mechanism. nmt has become more popular in academia and industry as a result of advancements in the attention mechanism. using a self-attention mechanism, the transformer nmt model @xcite attained state-of-the-art bleu scores in both english-to-german and english-to-french translation. by incorporating nmt into the machine translation methodology, high-quality translation has been achieved. however, the performance of nmt is highly dependent on the size of the dataset, and the performance of nmt on low-resource languages is marginal compared to that on high-resource languages. therefore, it is necessary to find ways to make up for the shortage of resources in order to increase translation quality. ## related works few works related to khasi-english translation have been reported. singh and hujon (2020) reported findings on the effectiveness of statistical and neural machine translation systems for domain-specific english-to-khasi translation. it was reported that the smt performed better than the nmt for this language pair; however, the performance of the smt model degraded as the sentence length increased. donald jefferson thabah and purkayastha (2021) reported a cross-lingual language model pretraining system for bidirectional khasi-english machine translation. the model achieved bleu scores of 39.63 and 32.69 for translating english-khasi and khasi-english, respectively, when tested on similar-domain test sentences. laskar et al. (2021) reported the development of enkhcorp1.0, a corpus for the english-khasi pair, and implemented baseline systems for english-to-khasi and khasi-to-english translation based on the neural machine translation approach. another recent work, by hujon et al. (2023), discussed experiments on and improvements of neural machine translation results using transfer learning for the english-khasi language pair. the study reported that a joint vocabulary of three languages (english, french, and khasi) contributed to the outstanding performance of the nmt transfer learning model compared to the nmt baseline model. ## methodology the methodologies employed in this work can be broadly grouped into the following parts: parallel corpus creation, data preprocessing, sub-word tokenization, modelling of khasi-english mt, and back-translation of khasi monolingual sentences. ## conclusion qualitative example 2, english source: "meghalaya village council files fir against scribe patricia mukhim for social media post on assault case." khasi reference: "ka dorbar shnong ha meghalaya ka ai fir ia patricia mukhim na bynta ki jingthoh ha social media halor ki case ba leh donbor."
model outputs for the example above: t_base: "ki shnong meghalaya ki rim ia ki jingdon jingem kiba kordor pyrshah ia ka jingpynpoi ia ki lad social media katkum ka juk mynta." t_base+bpe: "ka jylla meghalaya ka peit bniah ia u high commissioner uba ki social media ki dang pyrshang ban kurup ia ka rynsan social media." t_base+bpe+back: "ka jylla meghalaya ka peit bniah ia u high commissioner uba ki social media ki dang pyrshang ban kurup ia ka rynsan social media." improving the machine translation model further involves training it with an increased volume of parallel sentences. additionally, exploring augmentation techniques for generating parallel sentences could be beneficial, especially in scenarios with limited resources. notably, the sentence structures found in the bible differ significantly from contemporary sentence structures. to translate present-day sentences well, the model must also be trained on up-to-date sentences, consequently expanding its vocabulary.
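a minimal sketch of the back-translation augmentation used in this work: a reverse-direction (khasi-to-english) model translates khasi monolingual sentences into synthetic english, and the resulting pairs are appended to the authentic parallel corpus. the model interface is an illustrative assumption.

```python
def back_translate(khasi_monolingual, khasi_to_english_model, authentic_pairs):
    """Expand the parallel corpus with synthetic English-Khasi pairs."""
    synthetic = []
    for kha_sentence in khasi_monolingual:
        # The synthetic source side comes from the reverse-direction model.
        eng_sentence = khasi_to_english_model.translate(kha_sentence)
        synthetic.append((eng_sentence, kha_sentence))
    # The English-to-Khasi model is then trained on authentic + synthetic data;
    # note the target sides of synthetic pairs are genuine Khasi text.
    return authentic_pairs + synthetic
```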
| 25,302
|
315
| 2,020
|
Domain Adaptation of Thai Word Segmentation Models using Stacked Ensemble
|
Like many Natural Language Processing tasks, Thai word segmentation is domain-dependent. Researchers have been relying on transfer learning to adapt an existing model to a new domain. However, this approach is inapplicable to cases where we can interact with only input and output layers of the models, also known as “black boxes”. We propose a filter-and-refine solution based on the stacked-ensemble learning paradigm to address this black-box limitation. We conducted extensive experimental studies comparing our method against state-of-the-art models and transfer learning. Experimental results show that our proposed solution is an effective domain adaptation method and has a similar performance as the transfer learning method.
|
https://aclanthology.org/2020.emnlp-main.315
|
## introduction word segmentation (ws) is an essential process for several natural language processing (nlp) tasks such as part-of-speech (pos) tagging and machine translation (mt). the accuracy of ws significantly affects the accuracy of these nlp tasks, as shown in experimental results from @xcite . while ws is considered relatively simple in english, it is still an open problem in languages without explicitly defined word delimiters, such as thai, chinese, and japanese. however, unlike chinese and japanese, thai ws has not received much research attention. there are only six notable publications @xcite @xcite @xcite on thai ws from the past ten years. on the other hand, there are at least eight papers from well-established conferences on chinese and japanese ws @xcite @xcite @xcite @xcite within only the last two years. this investigation focuses on the segmentation of thai words since it is a challenging problem with substantial room for improvement, especially in the area of domain adaptation. like many nlp tasks, thai ws is domain-dependent. for instance, @xcite recorded an accuracy drop from 91% to 81% when their model trained on a generic-domain corpus @xcite was tested on a social media one @xcite . results from our analysis (section 3) also confirm these findings. one way to solve the domain dependency problem is through transfer learning (tl), which is a common technique in domain adaptation @xcite . however, tl may not be applicable when working with a commercial api or a model that does not support weight adjustments @xcite @xcite . we call this type of model a black box. in this paper, we propose a stacked-ensemble learning solution to overcome the black-box limitation. instead of making changes to the existing model directly, we build a separate model to improve the accuracy of predictions made by the black box. our solution comprises two parts, domain-generic (dg) and domain-specific (ds). the pretrained black box handles the domain-generic part, and a new model is constructed to handle the domain-specific part. all samples go through the domain-generic model, which makes initial predictions. we rank all predictions according to uncertainty and send the top-k most uncertain predictions to the domain-specific model for further consideration. we combine the predictions from the domain-specific model with the remaining predictions from the domain-generic model to form the final predictive results. we conducted extensive experimental studies to assess our solution's performance against a baseline model and transfer learning solutions. we also applied our stacked-ensemble filter-and-refine (sefr) technique to chinese and japanese. experimental results showed that our proposed solution achieved accuracy comparable to that of transfer learning solutions in thai. for chinese and japanese, we showed that model adaptation using the sefr technique could improve the performance of black-box models when used in a cross-domain setting. our contributions are as follows. first, we propose a novel solution for adapting a black-box model to a new domain by formulating the problem as an ensemble learning one. second, we derive a filter-and-refine method to speed up the inference process without sacrificing accuracy in some cases. third, we conducted extensive experimental studies; experimental results validate the effectiveness of our solution.
fourth, we make our code available at: github.com/mrpeerat/sefr_cut ## stacked-ensemble method ## performance evaluation we evaluated our sefr solution against state-of-the-art models on nine benchmark corpora from three languages. specifically, we studied the effect of our sefr method and report its performance when adapting a black-box model to a new domain by formulating the problem as ensemble learning. ## conclusion we proposed a novel solution for adapting a black-box model to a new domain by formulating it as an ensemble learning problem. we conducted extensive experimental studies using nine benchmark corpora from three languages. for thai word segmentation, the results showed that our method is an effective domain adaptation method and performs similarly to the transfer learning method. the results from japanese and chinese word segmentation experiments showed that our method could improve the performance of japanese and chinese black-box models.
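a condensed sketch of the filter-and-refine idea described above: the black-box domain-generic (dg) model labels everything, the top-k most uncertain predictions are re-labeled by the domain-specific (ds) model, and the two sets are merged. all interfaces are illustrative assumptions, not the released sefr_cut code.

```python
def sefr_predict(samples, dg_model, ds_model, k):
    """Filter-and-refine: refine only the k most uncertain DG predictions."""
    # Step 1: the black-box domain-generic model predicts with a confidence.
    initial = [(i, *dg_model.predict_with_confidence(s))
               for i, s in enumerate(samples)]     # (index, label, confidence)
    # Step 2: filter the top-k most uncertain (lowest-confidence) predictions.
    uncertain = sorted(initial, key=lambda t: t[2])[:k]
    # Step 3: refine them with the domain-specific model.
    refined = {i: ds_model.predict(samples[i]) for i, _, _ in uncertain}
    # Step 4: merge refined predictions with the remaining DG predictions.
    return [refined.get(i, label) for i, label, _ in initial]
```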
| 4,036
|
442
| 2,022
|
A Generalized Method for Automated Multilingual Loanword Detection
|
Loanwords are words incorporated from one language into another without translation. Suppose two words from distantly-related or unrelated languages sound similar and have a similar meaning. In that case, this is evidence of likely borrowing. This paper presents a method to automatically detect loanwords across various language pairs, accounting for differences in script, pronunciation and phonetic transformation by the borrowing language. We incorporate edit distance, semantic similarity measures, and phonetic alignment. We evaluate on 12 language pairs and achieve performance comparable to or exceeding state-of-the-art methods on single-pair loanword detection tasks. We also demonstrate that multilingual models perform the same or often better than models trained on single language pairs and can potentially generalize to unseen language pairs with sufficient data, and that our method can exceed human performance on loanword detection.
|
https://aclanthology.org/2022.coling-1.442
|
## introduction throughout history, words and phrases have been exchanged between languages around the world @xcite . this can obscure genetic relations between languages (e.g., many people erroneously believe english and french are more closely related than they are) but may also increase comprehension of foreign languages by monoglots (e.g., written french is often partially comprehensible by english speakers). as @xcite observe, detecting that a word is a loanword is conceptually straightforward: similar sound and meaning together suggest too great a coincidence for different words to have converged by chance. detecting loanwords computationally has therefore relied on pairwise similarity measures based on transliteration detection and edit distance. however, foundational work in linguistic borrowing, e.g., by @xcite and @xcite , established that when borrowing words into a recipient language, speakers of that language will reproduce existing linguistic patterns when using new words, and the patterns that recipient speakers impose upon a borrowed word vary across time @xcite and across language pairs. some languages may adopt a word without much phonetic change due to already-similar phonotactics. others may fit imported words into a rigid sound pattern, with sometimes significant transformation. still others may change the meaning. changes are particular to the language pair, so automatically detecting loanwords between arbitrary languages is challenging. however, if successful, such capabilities would also provide benefits to many other nlp tasks such as machine translation, coreference, and named-entity recognition (ner), because common vocabulary, coreferents, or named entities across languages may often be loanwords. here, we present a novel method for automated loanword detection between arbitrary language pairs. we build upon existing edit distance-based approaches, incorporate semantic similarity metrics from the multilingual language models mbert @xcite and xlm @xcite , and add a method of assessing the alignment of phonemes between donor words and loans to account for differences in phonotactics between the relevant languages. we also present and evaluate on the wiklow (wiktionary loanword) dataset, currently consisting of 13 language pairs with a high density of loanwords and 3 further language pairs with a lower density of loanwords. we also provide a methodology for expanding the dataset to new language pairs. we demonstrate that our method to detect loanwords across all language pairs in the dataset performs comparably to or better than existing methods on language-specific loanword detection tasks, that multilingual models can perform better than models trained on individual language pairs, even on data from that pair itself, and that our model can also exceed human performance. our method supports both loanword detection and construction of parallel corpora of loanwords for other tasks. our conclusions suggest that there are some general principles of loanword detection that can be picked up by machine learning models independent of specific languages, and we propose follow-up challenges for nlp research in this area. ## related work prior approaches to detecting loanwords computationally follow the intuition mentioned above: that if two words in otherwise not closely related languages have similar meaning and sound similar, then this is likely evidence of borrowing. @xcite use a levenshtein-distance based approach to identify language groups and loanwords among languages of central asia.
delz (2013) and köllner (2021) propose theoretical approaches to loanword identification based on phylogenetic methods. @xcite also point out an issue we address herein: loanwords may be transformed to fit the borrowing language's phonology and phonotactics, so pronunciation similarity alone may be a weaker-than-ideal method. existing data resources relevant to loanwords include the automated similarity judgment project (asjp) database @xcite and the world loanword database (wold) @xcite . our data source is wiktionary, which has previously been used in related etymological tasks by de melo (2014) and @xcite . notably, much work in computational loanword detection and similar tasks is targeted at a specific language or group of languages, e.g., @xcite , japanese @xcite , uyghur @xcite @xcite , spanish @xcite , central asian languages @xcite , or turkic and indo-iranian @xcite . our approach attempts to address the problem at a multilingual level. we use and extend existing work in phonological processing by the nlp community, including the epitran @xcite and panphon @xcite packages for representing phonetic and articulatory features. we incorporate semantic similarity measures from the multilingual language models mbert and xlm, and develop a method of scoring the level of alignment of phonemes between a donor and a loanword to account for differences in language-specific phonology and phonotactics. our approach in principle supports loanword detection on any pair of languages supported by the upstream packages/models epitran, mbert, and xlm, but we discuss how we have (sec. 3) and can (sec. 8) also extend our approach to languages that are not at present covered by all of these. a work at a similar scale, albeit on the slightly different task of cognate classification, is jäger (2018), which evaluates pmi and svm-based methods over the asjp database. cognate detection work generally uses similar methods to those we use here, e.g., semantic and phonetic similarity @xcite , orthographic distance @xcite combined with semantic information @xcite , or global constraints @xcite . work in translation lexicons (e.g., @xcite ) is also relevant for its hybrid approach to similarity metrics. loanword detection may be useful for phylogenetic reconstruction, like cognate detection @xcite . however, cognates are valid for reconstructing common ancestry; loanwords are not. for historical reconstruction, the two must be separated. many in the nlp community adopt a definition of "cognate" that subsumes loanwords (e.g., @xcite ). we do not adopt this definition, and use the linguistic definition that treats loanwords and cognates as distinct. ## data collection the wiklow dataset is collected using the process outlined in this section, which can be run for any pair of languages that have loans between them catalogued in wiktionary, making it easy to expand to new data. we begin by collecting data from wiktionary categories of the form [recipient]_terms_borrowed_from_[donor] foot_1 . each link in the category is scraped for a loanword in the recipient language and the original form of that word in the donor language. we also collect all the available lemmas in the donor language, which we use later to calculate the closest phonetic neighbors for each loanword. we also collect homonyms for each loanword where available; homonyms are considered those words that have more than one etymology, where one is a loan from the relevant donor language foot_3 .
using the epitran package (mortensen et al., 2018), we transliterate both loans and original words into the international phonetic alphabet (ipa). the epitran package can be extended to support new languages, as we did here in the case of finnish, using omniglot foot_4 as a resource. epitran is not a perfect mapping to real pronunciation, especially in the case of abjads such as arabic script, a point of relevance later (sec. 4.4, sec. 7.1). having gathered positive examples of loanwords, we need to gather sufficient negative examples, both to train an algorithm and to try to fool the trained algorithm. negative examples come from three sources: synonyms, hard negatives, and random pairs. to create the synonyms dataset, we take a list of 440 english words, each of which has multiple synonyms associated with it. with the google translate api, we translate the main word into one language from our current relevant pair, and each synonym into the other. we then construct word pairs in the donor and recipient language using the cartesian product of each word with each translated synonym. we remove any duplicates, and any pairs that also occur in the loanword dataset, as we do not want true positives labeled as negatives when training the loanword detection model. to create the hard negatives dataset, we use the panphon package @xcite to compute six edit distances (see sec. 4.2) between the ipa transcriptions of the gathered loanwords and up to 20,000 candidate lemmas of the donor language, which are also transliterated into the ipa using epitran. the result here is that each loanword is paired with up to six candidates that have a low phonetic edit distance but are not the original word in the donor language. we remove duplicates where multiple distance metrics chose the same closest neighbor, and where pairs co-occur with the synonyms or loans datasets. finally, in the randoms dataset, we pair each loan with a random word in the donor language. ## similarity metrics every word pair in the wiklow dataset has measures of textual, phonetic, semantic, and articulatory similarity associated with it. ## evaluation for evaluation, we create three data distributions for each language pair. one (the balanced distribution) contains half loanwords and half non-loans. this is a well-behaved distribution well-suited for machine learning. the non-loans are drawn roughly 1/7 from the hard negatives, 4/7 from the synonyms, and 2/7 from the randoms, reflecting the notion that relatively few words in a language are likely to be very phonetically close to a loanword on average, while there are likely to be many more words of synonymous or similar meaning. another distribution attempts to approximate the actual proportion of loanwords from the donor language into the recipient language (the "realistic" distribution, or realdist). sometimes this proportion is well-documented, and at other times not. where a figure is provided in the linguistic literature, we use it. otherwise, we take the number of loanwords we collected from wiktionary and divide it by the total number of lemmas in the borrowing language, and impose a lower bound of 10% to maintain enough loanwords in the testing set. the non-loans portion of the realdist set is drawn in the same proportions as in the balanced set. for all language pairs currently in the wiklow dataset, the realdist @xmath0, but for other language pairs, e.g., korean-chinese, >50% loanwords is certainly possible or likely @xcite .
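for illustration, the transliteration and phonetic-distance steps above can be sketched with the two packages the paper names. the exact distance methods shown are assumptions drawn from panphon's distance module, and the word pair is a toy example, not an actual loan pair from wiklow:

```python
import epitran
import panphon.distance

epi_spa = epitran.Epitran("spa-Latn")  # epitran language/script code
dst = panphon.distance.Distance()

donor = epi_spa.transliterate("programador")
candidate = epi_spa.transliterate("programadora")

# Three of the several IPA-level edit distances panphon exposes;
# the paper computes six such distances per word pair.
print(dst.fast_levenshtein_distance(donor, candidate))
print(dst.feature_edit_distance(donor, candidate))
print(dst.weighted_feature_edit_distance(donor, candidate))
```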
the final distribution (abbreviated alldata) takes all the data we collected from wiktionary, to purposely overweight the dataset against loanwords and test our method in a difficult condition. to each distribution, we concatenate two one-hot vectors representing the scripts of the languages in the pair. this allows certain models to learn dependencies between the scripts and other variables, e.g., if the languages are written in different scripts, the textual levenshtein distance becomes nearly meaningless. each distribution was divided into a 90:10 train/test split, and then shuffled. we evaluate four different binary classifiers on all distributions: a logistic regressor (lr), a linear svm, a random forest (rf), and a deep neural network (nn). the neural network consists of 3 layers of 512, 256, and 128 hidden units respectively, all with relu activation and followed by 10% dropout, and a final sigmoid activation, and is trained for 5,000 epochs with adam optimization and bce loss. we perform the evaluations listed below. single multilingual model (smm): for each data distribution, we train a single model on the data from every language pair in the wiklow dataset. ## results our primary metrics are precision, recall, and f1-score on positive loanword identification, reported on each distribution for the 4 classifiers we evaluated. the remaining tables and figures all focus on the results of the neural network, are sorted by decreasing number of loanwords in the language pair, and are discussed in sec. 7. ## discussion we can quantitatively compare our approach to that of @xcite , who report 75.35% average precision, 74.09% average recall, and 74.71% average f1 on loanword detection in uyghur on borrowings from russian, arabic, turkish, and chinese. our results are on different language pairs but are comparable to or exceed this, particularly if the testing set is balanced between loans and non-loans. in fig. 2 , we can see that in most cases, the multilingual model outperforms the single-pair models on the same language pair on loanword retrieval, though this effect is most pronounced in language pairs with a higher density of loanwords. the model trained on the smaller pruned realdist data sees an appreciable drop in precision, but an equal or greater increase in loanword recall, and this effect is especially pronounced in pairs with fewer loanwords in the data overall, suggesting that training on a more realistic distribution may be advantageous when prioritizing reducing false negatives. fig. 3 shows the correlation between test set size and performance of the smm (including unseen language pairs). there appears to be a strong correlation between performance and the proportion of loanwords in a test set (as expected, a balanced set leads to optimal performance), but also the raw size of the test set itself. the model performs better on larger test sets, unseen or not, regardless of what data it was trained on. we speculate that this may be because when a borrowing language borrows a lot of words from a donor language, it does so at around the same time (e.g., english from norman french). [table: per-language-pair precision, recall, and f1 on positive loanword identification; recoverable column headers include fa-ar, hu-de, de-it, and ca-ar.] ## conclusions and future work automated loanword detection enables a number of downstream tasks. coreferents and named entities across languages may often be loanwords, and common vocabulary enables potential improvements in machine translation @xcite .
parallel corpora of loanwords also afford learning cross-lingual contextual word embedding mappings, inspired by the success of pre-transformer embedding mappings @xcite and the potential of post-transformer alignments @xcite . these can be incorporated into the transformer architecture to provide auxiliary signals that enhance translation in two ways: i) introducing another multi-head attention between the input language embeddings and their mappings in the target language space, similar to the second multi-head attention block in the original transformer architecture @xcite . we propose to map embeddings between a source language l_x and target language l_y by computing a transformation matrix between paired representations of semantically-equivalent words or sentences, then to compute attention weights between these mapped embeddings, and concatenate these auxiliary attention outputs with the attention between tokens from l_x and already-generated tokens from l_y . ii) unmasking identified loanwords in the target language in the decoder's input, which is expected to provide further context to the decoder in the target language. this would replicate a uniquely human linguistic capability: the ability to pick up context in an unfamiliar language by picking out known words (i.e., loans from a known language). fig. 4 shows a proposed architecture for these operations. mapping between embedding spaces also allows expanding our method and dataset to new languages not covered by mbert or xlm through resources like indicbert @xcite .
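the transformation-matrix idea sketched in the future-work discussion can be grounded in the classic orthogonal procrustes solution. the following is a hedged toy sketch, not the proposed system: it fits a rotation between paired source- and target-language embeddings and verifies it on synthetic data:

```python
import numpy as np

def fit_mapping(src_emb, tgt_emb):
    """Fit an orthogonal W with src_emb @ W ~= tgt_emb, given row-aligned
    embeddings of semantically equivalent words (orthogonal Procrustes)."""
    u, _, vt = np.linalg.svd(src_emb.T @ tgt_emb)
    return u @ vt

# Synthetic stand-ins for paired mBERT/XLM embeddings.
rng = np.random.default_rng(0)
src = rng.normal(size=(1000, 768))
w_true = np.linalg.qr(rng.normal(size=(768, 768)))[0]  # a random rotation
tgt = src @ w_true

w = fit_mapping(src, tgt)
print(np.allclose(w, w_true, atol=1e-6))  # True: the rotation is recovered
```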
| 14,398
|
1
| 2,024
|
AutoTemplate: A Simple Recipe for Lexically Constrained Text Generation
|
Lexically constrained text generation is one of the constrained text generation tasks, which aims to generate text that covers all the given constraint lexicons. While the existing approaches tackle this problem using a lexically constrained beam search algorithm or a dedicated model with non-autoregressive decoding, there is a trade-off between the generated text quality and the hard constraint satisfaction. We introduce AutoTemplate, a simple yet effective lexically constrained text generation framework divided into template generation and lexicalization tasks. The template generation is to generate the text with the placeholders, and lexicalization replaces them with the constraint lexicons to perform lexically constrained text generation. We conducted experiments on two tasks: keywords-to-sentence generation and entity-guided summarization. Experimental results show that AutoTemplate outperforms the competitive baselines on both tasks while satisfying the hard lexical constraints. The code is available at https://github.com/megagonlabs/autotemplate
|
https://aclanthology.org/2024.inlg-main.1
|
## introduction text generation often requires lexical constraints, i.e., generating a text containing pre-specified lexicons. for example, the summarization task may require the generation of summaries that include specific people and places @xcite , and advertising text requires the inclusion of pre-specified keywords @xcite . however, the black-box nature of recent text generation models with pre-trained language models @xcite makes it challenging to impose such constraints to manipulate the output text explicitly. hokamp and liu @xcite and others tweaked the beam search algorithm to meet lexical constraints by increasing the weights for the constraint lexicons, but it often fails to include all the constrained lexicons. figure 1: illustration of autotemplate. we build the model input x̃ by concatenating the constraint lexicons z with mask tokens. for the conditional text generation task, we further concatenate the input document x. we also build the model output ỹ by masking the constraint lexicons in summary y. then, we can train a standard sequence-to-sequence model, p(ỹ | x̃), generate the masked template ỹ given the input, and post-process to achieve lexically constrained text generation. @xcite and others introduced specialized non-autoregressive models @xcite that insert words between the constraint lexicons, but the generated texts tend to be of lower quality than those of standard autoregressive models. on the other hand, classical template-based methods @xcite can easily produce text that satisfies the lexical constraints as long as we can provide appropriate templates. nevertheless, it is impractical to prepare such templates for every combination of constraint lexicons except for specific text generation tasks where the output text patterns are limited, such as data-to-text generation tasks @xcite . still, if such a template could be generated automatically, it would be easier to perform lexically constrained text generation. we propose autotemplate, a simple framework for lexically constrained text generation that automatically generates templates given constraint lexicons and replaces placeholders in the templates with the constraint lexicons. autotemplate, for example, can be used for summarization tasks, as illustrated in figure 1 , by replacing the constraint lexicons (i.e., {japan, akihito}) in the output text with placeholder tokens during training and using these constraints as a prefix of the input, creating input-output pairs, and then using a standard auto-regressive encoder-decoder model @xcite to train the autotemplate model. during inference, the constraint lexicons are prefixed in the same way, the model generates the template for the constraints, and the placeholder tokens are replaced with the constraint lexicons to perform lexically constrained text generation. we evaluate autotemplate across two tasks: keywords-to-sentence generation on the one-billion-words and yelp datasets ( §3.1), and entity-guided summarization on the cnndm @xcite and xsum @xcite datasets ( §3.2). autotemplate shows better keywords-to-sentence generation and entity-guided summarization performance than competitive baselines, including autoregressive and non-autoregressive models, while satisfying hard lexical constraints. we will release our implementation of autotemplate under a bsd license upon acceptance. ## autotemplate autotemplate is a simple framework for lexically constrained text generation ( §2.1), divided into two steps: template generation ( §2.2) and lexicalization ( §2.3).
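to make the training-pair construction concrete, here is a minimal sketch. the placeholder token format (<extra_id_k>, t5-style) and the prefix/document separator are assumptions for illustration, not necessarily the paper's exact format:

```python
def build_autotemplate_pair(text, constraints, doc=None):
    """Build one (input, output) training pair in the AutoTemplate
    spirit: constraints prefix the input, and each constraint in the
    target text becomes a placeholder token."""
    template = text
    prefix_parts = []
    for k, lex in enumerate(constraints):
        tok = f"<extra_id_{k}>"
        prefix_parts.append(f"{tok} {lex}")
        template = template.replace(lex, tok)
    src = " ".join(prefix_parts)
    if doc is not None:  # conditional tasks also see the input document
        src = f"{src} | {doc}"
    return src, template

src, tgt = build_autotemplate_pair(
    "japan's emperor akihito abdicated the throne.", ["japan", "akihito"])
print(src)  # <extra_id_0> japan <extra_id_1> akihito
print(tgt)  # <extra_id_0>'s emperor <extra_id_1> abdicated the throne.
```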
the template generation task aims to generate the text with the placeholders ỹ, which we defined as a template, given constraint lexicons z, and the lexicalization is to replace these placeholders with the constraints to perform lexically constrained text generation. ## experiments we present experiments across two tasks: keywords-to-sentence generation ( §3.1), and entity-centric summarization ( §3.2). ## analysis does autotemplate generate fluent text? autotemplate decomposes the lexically constrained text generation task into template generation and lexicalization tasks. two sample outputs from the entity-guided summarization task illustrate the setting: constraint entities: { game boy , apple , chris gallizzi , nintendo } autotemplate: case adds iconic game boy buttons to apple handset. it also lets gamers play their existing cartridges on their handset. developer chris gallizzi said: 'we wanted to create a retro device that can be easily adapted into any modern gamer's arsenal of devices' nintendo advised keeping cartridges away from dust, where possible, to avoid gameplay glitches. constraint entities: { hyperkin , nintendo , game boy color , start and select } autotemplate: hyperkin has designed a case that adds the iconic directional arrows from the nintendo game boy color . it was originally devised as part of an april fool's joke, but the popularity and demand for a real product was so high the firm has announced plans to sell it. it will feature an eight-way d-pad, two action buttons, a start and select button, and a battery that can be charged through the phone. to this end, we compare the fluency of the output text by autotemplate and the baselines. we specifically used the grammatical acceptability classifier based on roberta-large fine-tuned on the cola dataset @xcite following @xcite (https://huggingface.co/cointegrated/roberta-large-cola-krishna2020) and show the micro-averaged accuracy of sentence-level grammaticality. although we could also measure fluency using the perplexity of an external language model, it can assign low perplexity to unnatural texts containing common words @xcite ; therefore, we decided to evaluate fluency using the classifier. we show the results in [table omitted]. for the entity-guided summarization task, autotemplate shows fluency similar to the state-of-the-art autoregressive text generation models, including bart and ctrlsum, indicating that autotemplate can generate text as fluent as the state-of-the-art direct generation models. ## further related work template-based text generation: for classical text generation systems, templates were an important building block @xcite @xcite . the advantage of a template-based system is that it can produce faithful text, but it can produce disfluent text if an inappropriate template is selected. therefore, the current primary approach is to produce fluent text directly from the input using end-to-end neural generation models. more recent studies have focused mainly on using templates as an auxiliary signal to control the stylistic properties of the output text, such as deriving templates as latent variables @xcite @xcite and using retrieved exemplars as soft templates @xcite @xcite .
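as a side note on the fluency evaluation above, the acceptability check can be reproduced in a few lines with the checkpoint named in the text; a hedged sketch (the mapping from the model's labels to acceptable/unacceptable is an assumption to verify against the model card):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="cointegrated/roberta-large-cola-krishna2020",
)

sentences = [
    "case adds iconic game boy buttons to apple handset.",
    "buttons game iconic adds boy case apple to.",
]
for s in sentences:
    print(s, "->", clf(s)[0])  # label + score per sentence
```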
copy mechanism: the copy mechanism was originally introduced to deal with the out-of-vocabulary problem in machine translation by selecting words from the source for generation in addition to the vocabulary, such as unknown word replacement with post-processing @xcite and the joint modeling of unknown word probabilities in encoder-decoder models @xcite . with the advent of subword units @xcite , the unknown word problem has been diminished, so the copy mechanism is no longer widely used for handling out-of-vocabulary problems. however, the copy mechanism still plays a vital role in more complex text generation tasks, such as those involving numerical computation @xcite or logical reasoning @xcite . specifically, these approaches produce special tokens that serve as placeholders and replace them with the desired words in post-processing. autotemplate adapts a similar copy mechanism to perform lexically constrained text generation, showing that it can cover all the constrained entities in its outputs, even for more complex conditioning (more than ten entities). ## conclusions this study proposes autotemplate, a simple yet effective framework for lexically constrained text generation. the core idea is to decompose lexically constrained text generation into two steps, template generation and lexicalization, by converting the input and output formats. the template generation can be done with standard encoder-decoder models with beam search, so autotemplate can perform lexically constrained text generation without using dedicated decoding algorithms such as non-autoregressive decoding or constrained beam search. experimental results show that autotemplate significantly outperforms the competitive baselines across keywords-to-sentence generation and entity-guided summarization tasks while satisfying the lexical constraints.
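the placeholder-replacement step that mirrors this copy-style post-processing can be sketched as follows, reusing the hypothetical <extra_id_k> format from the earlier sketch:

```python
import re

def lexicalize(template, constraints):
    """Replace each placeholder in a generated template with its
    constraint lexicon (the lexicalization step)."""
    def fill(match):
        k = int(match.group(1))
        return constraints[k] if k < len(constraints) else match.group(0)
    return re.sub(r"<extra_id_(\d+)>", fill, template)

print(lexicalize("<extra_id_0>'s emperor <extra_id_1> abdicated the throne.",
                 ["japan", "akihito"]))
# japan's emperor akihito abdicated the throne.
```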
| 33,409
|
180
| 2,021
|
Chase: A Large-Scale and Pragmatic Chinese Dataset for Cross-Database Context-Dependent Text-to-SQL
|
The cross-database context-dependent Text-to-SQL (XDTS) problem has attracted considerable attention in recent years due to its wide range of potential applications. However, we identify two biases in existing datasets for XDTS: (1) a high proportion of context-independent questions and (2) a high proportion of easy SQL queries. These biases conceal the major challenges in XDTS to some extent. In this work, we present Chase, a large-scale and pragmatic Chinese dataset for XDTS. It consists of 5,459 coherent question sequences (17,940 questions with their SQL queries annotated) over 280 databases, in which only 35% of questions are context-independent, and 28% of SQL queries are easy. We experiment on Chase with three state-of-the-art XDTS approaches. The best approach only achieves an exact match accuracy of 40% over all questions and 16% over all question sequences, indicating that Chase highlights the challenging problems of XDTS. We believe that Chase can provide fertile soil for addressing the problems.
|
https://aclanthology.org/2021.acl-long.180
|
## introduction the problem of mapping a natural language utterance into an executable sql query in the cross-database and context-dependent setting has attracted considerable attention due to its wide range of applications @xcite . this problem is notoriously challenging, due to the complex contextual dependencies among questions in a sequence. consider the question sequence in figure 1 . in order to understand the last question, one needs to figure out the elliptical object of the verb "培养 (have)" from the first two questions in the sequence, which is "状元球员 (first pick player)". questions like this are context-dependent, since they require resolutions of contextual dependencies such as the ellipsis in this question. there are also context-independent questions that can be understood individually, such as the first question in figure 1 . for ease of reference, we refer to this cross-database context-dependent text-to-sql problem as xdts. to study the challenges in xdts, a continuous effort has been dedicated to constructing datasets, including sparc @xcite and cosql @xcite . however, through a careful analysis of existing datasets, we identify two biases in them, and these biases conceal the major challenges in xdts to some extent. first, there are only a limited number of context-dependent questions in existing datasets. specifically, only 32% of questions in cosql are context-dependent, and only 66% of question sequences have context-dependent questions. sparc has more context-dependent questions than cosql, but it still has 48% context-independent questions. such a limited number of context-dependent questions is unexpected, because prior work @xcite has shown that questions within a database dialogue are highly likely to be context-dependent, and how to effectively model the context to understand a context-dependent question is one of the major challenges in xdts. second, 40% of sql queries in both sparc and cosql are particularly easy, involving at most one condition expression. this biased distribution of sql queries is potentially caused by their construction methods. in fact, we find that sql queries for question sequences created from scratch are much more challenging. upon identifying the limitations of existing datasets, we present chase, a large-scale and pragmatic chinese dataset for xdts. chase consists of 5,459 question sequences (17,940 questions with their sql queries annotated) over 280 multi-table relational databases. compared with sparc and cosql, the proportion of context-independent questions in chase is reduced from 48% and 68% to 35%, and the proportion of easy sql queries is reduced from 40% and 41% to 28%. moreover, chase has richer semantic annotations, including the contextual dependency and schema linking @xcite of each question. chase is also the first chinese dataset for xdts. chase is made up of two parts: chase-c and chase-t. in chase-c, we recruit 12 chinese college students who are proficient in sql to create question sequences from scratch and annotate the corresponding sql queries. to ensure the diversity and cohesion of question sequences, we propose an intent recommendation method. when a student is going to raise a question, an intent category is randomly sampled with the method, and the student is recommended to write the question and sql query according to it. in chase-t, inspired by the construction of cspider @xcite , we translate all the questions, sql queries, and databases in sparc from english to chinese.
we also try our best to mitigate the biases in sparc. to understand the characteristics of chase, we conduct a detailed data analysis and experiment with three state-of-the-art (sota) xdts approaches, namely, editsql @xcite , igsql @xcite , and our extension of rat-sql @xcite . the best approach only achieves an exact match accuracy of 40% over all questions and 16% over all question sequences, indicating that chase presents significant challenges for future research. the dataset, benchmark approaches, and our annotation tools are available at https://xjtu-intsoft.github.io/chase . in summary, this paper makes the following main contributions: ## study of existing datasets in this section, we first formally define the problem of xdts and its evaluation metrics. then, we present our study to understand the limitations and biases of existing datasets in contextual dependency and sql hardness distribution. ## dataset construction given the limitations of existing datasets, we present chase, a large-scale and pragmatic chinese dataset for xdts. unlike the construction of sparc and cosql, we do not specify a final goal for each question sequence. instead, we motivate our annotators to raise diverse and coherent questions via an intent recommendation method. based on this method, we collect a set of relational databases, and we recruit annotators to create question sequences from scratch and annotate the corresponding sql queries. data collected in this way are referred to as chase-c. besides, inspired by the construction of cspider @xcite and vietnamese spider @xcite , we translate all the questions, sql queries, and databases in sparc from english to chinese. during translation, we also try our best to mitigate the biases in sparc. data collected with this method are referred to as chase-t. chase is made up of both chase-c and chase-t. since all existing datasets for xdts are constructed for english, prior work on this problem primarily focuses on english, leaving other languages underexplored. to enrich the language diversity, in this paper, we construct chase for chinese, and we leave the support of more languages as important future work. ## data statistics and analysis we compute the statistics of chase and conduct a thorough analysis to understand its three characteristics: contextual dependency, sql hardness distribution, and mention of database schema items. ## experiments to understand the performance of the sota approaches on chase, chase-c, and chase-t, we experiment with the three approaches introduced in section 2.2. appendix a.3 provides the details of our adaptations for chinese inputs and the experimental setup. ## conclusion and future work this work presents chase, to date the largest dataset for xdts, consisting of 5,459 question sequences over 280 databases. each question in chase has rich semantic annotations, including its sql query, contextual dependency, and schema linking. experimental results show that chase highlights the challenging problems of xdts and that there is a long way to go before we meet users' real text-to-sql demands. currently, chase is constructed for chinese. we plan to support more languages in the future. besides, we plan to explore ways to utilize the rich semantic annotations in chase to address the challenges in xdts.
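to make the notion of contextual dependency concrete, here is a hedged illustration with a hypothetical player table; it is not chase's actual schema or annotation format. the second question elides its object, which a model must recover from the context:

```python
# Hypothetical schema: player(name, draft_rank, team)
dialogue = [
    ("which players were first picks in the draft?",   # context-independent
     "SELECT name FROM player WHERE draft_rank = 1"),
    ("which teams have had one?",                      # "one" = a first-pick player
     "SELECT DISTINCT team FROM player WHERE draft_rank = 1"),
]
for question, sql in dialogue:
    print(f"Q: {question}\nSQL: {sql}\n")
```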
| 6,996
|
124
| 2,025
|
SymBa: Symbolic Backward Chaining for Structured Natural Language Reasoning
|
To improve the performance and explainability of LLM-based natural language reasoning, structured reasoning can be applied to generate explicitly structured proofs. Among different methods for structured reasoning, we specifically focus on backward chaining, where the proof goal is recursively decomposed to subgoals by searching and applying rules. We argue that current LLM-based backward chaining systems (e.g. Least-to-most prompting and LAMBADA) are incomplete, as they omit crucial algorithmic components identified from the classic backward chaining algorithm in computational logic (SLD Resolution). To this end, we propose a novel backward chaining system, SymBa (Symbolic Backward Chaining), which integrates a symbolic solver and an LLM. In SymBa, the solver controls the proof process, and the LLM is only called when the solver requires new information to complete the proof. Empowered by completeness, SymBa achieves a significant improvement in seven deductive, relational, and arithmetic reasoning benchmarks compared to the baselines.
|
https://aclanthology.org/2025.naacl-long.124
|
## introduction large language models (llms) trained with massive amounts of natural language text have shown remarkable reasoning ability in various fields, including logical and arithmetic reasoning @xcite . however, autoregressively generated explanations, as in chain-of-thought, might contain factual and logical errors, which tend to become more covert as llms scale up @xcite . to enhance the accuracy and explainability of natural language reasoning, structured reasoning has been frequently explored as an alternative. in this task, one must provide an explicitly structured explanation, i.e., a proof tree (also known as an entailment tree). these structured explanations offer high interpretability by showing how premises connect to intermediate and final conclusions @xcite . among popular approaches for structured reasoning, we focus on backward chaining @xcite . backward chaining reasoners start from the goal and apply rules that decompose the goal into a set of subgoals. it is known to be efficient as it does not require a combinatorial search to generate the next step @xcite . consequently, previous works have proposed llm-based backward chaining systems, which utilize few-shot llms to execute subtasks of the backward chaining process @xcite @xcite . however, we argue that popular llm-based backward chaining systems, namely least-to-most prompting @xcite and lambada @xcite , are incomplete. we compare their implementation to a classic backward chaining algorithm from computational logic, sld resolution @xcite , and provide minimal examples that show their incompleteness in section 3.1. to address this issue, we propose symba (symbolic backward chaining), a method that applies an sld resolution-based symbolic solver directly to natural language reasoning. in symba, the solver controls the proof process, and the llm is only called when the solver requires new information to complete the proof. by this novel solver-llm integration, symba benefits from both the completeness of sld resolution and the natural language reasoning capability of llms. symba outperforms baselines on answer accuracy, proof accuracy, and efficiency in seven benchmarks from deductive, relational, and arithmetic reasoning. empirical results show that least-to-most prompting suffers from low proof accuracy in complex problems. lambada, on the other hand, cannot handle relational and arithmetic reasoning properly. we claim that these are the direct consequences of their incomplete design. in summary, our contributions are as follows. ## analysis solver ablation: in previous sections, we show that least-to-most's lack of backtracking reduces proof accuracy, and lambada's lack of binding propagation restricts the reasoning tasks it can handle; figure 7 illustrates both failure modes. figure 7: example from clutrr. goal: is danielle niece of harry? gold reasoning path: a chain of bridging entities among danielle, dale, kevin, debra, harry, morgan, brian, valerie, and kenneth. least-to-most prompting: q. who is danielle's father? a. dale. q. who is the brother of #1? a. unknown. ▷ planning failure. q. danielle can be inferred as the niece of harry. a. yes. ▷ shortcut exploitation. lambada: danielle is niece of harry. ├ danielle is a daughter of someone. │ └ danielle is the daughter of dale. └ harry is a brother of someone. └ harry is the brother of kenneth. ∴ proved. ▷ invalid bridging entities. the proof is correct if it shows a chain of bridging entities, possibly omitting some. least-to-most exploits a shortcut, as it mispredicted the reasoning path but answered the final question correctly.
lambada cannot resolve the coreference between bridging entities, leading to a disconnected proof. in this section, we directly manipulate the solver algorithm, while the llm portion (single-step statement generation) remains as it is. in the -backtrack setting, the symbolic solver will apply only one decomposition and binding even if there are multiple possible ways, as in figure 2(a) . in the -bindingprop setting, the bindings obtained from previous subgoals are not propagated to subsequent ones, as in figure 2(b) . analogous to lambada, -bindingprop cannot answer gsm8k by design, as there is no way to pass the calculated results to the root goal. the fact that -bindingprop outperforms lambada in deductive benchmarks can again be attributed to negation handling. ## conclusion while backward chaining is a promising direction for structured natural language reasoning, current llm-based approaches like least-to-most and lambada are only incomplete reproductions of backward chaining, as they leave out backtracking and binding propagation. to this end, we build symba directly from the sld resolution algorithm. in symba, a symbolic solver controls the proof, while an llm searches and translates relevant natural language statements into symbolic representations. symba outperforms backward chaining baselines in diverse reasoning tasks, including deductive, relational, and arithmetic reasoning. not only does it reach the correct answer more frequently, but it also demonstrates better proof accuracy and efficiency than the baselines. from both theoretical and empirical perspectives, we believe that symba significantly extends the horizon of llm-based backward chaining.
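to show concretely what binding propagation and backtracking buy, here is a toy sld-style backward chainer; it is an illustrative sketch, not symba's solver (variable renaming between rule applications is omitted for brevity):

```python
def unify(a, b, env):
    """Unify two terms under env (variables start with '?')."""
    if env is None:
        return None
    a = env.get(a, a) if isinstance(a, str) else a
    b = env.get(b, b) if isinstance(b, str) else b
    if a == b:
        return env
    if isinstance(a, str) and a.startswith("?"):
        return {**env, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**env, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            env = unify(x, y, env)
        return env
    return None

def solve(goals, rules, env):
    """Yield every binding env that satisfies all goals. Generators give
    backtracking for free; env carries bindings forward to later goals."""
    if not goals:
        yield env
        return
    first, rest = goals[0], goals[1:]
    for head, body in rules:
        new_env = unify(first, head, dict(env))
        if new_env is not None:
            yield from solve(body + rest, rules, new_env)

rules = [
    (("parent", "dale", "danielle"), []),   # facts have empty bodies
    (("sibling", "dale", "harry"), []),
    (("niece", "?x", "?y"),                 # rule: x is niece of y if ...
     [("parent", "?p", "?x"), ("sibling", "?p", "?y")]),
]
goal = ("niece", "danielle", "harry")
print(next(solve([goal], rules, {}), None) is not None)  # True
```

the binding of ?p found while proving the first subgoal is propagated into the second one; removing that propagation, or stopping after the first candidate rule, reproduces the failure modes ablated above.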
| 39,226
|
53
| 2,025
|
HateImgPrompts: Mitigating Generation of Images Spreading Hate Speech
|
The emergence of artificial intelligence has proven beneficial to numerous organizations, particularly in its various applications for social welfare. One notable application lies in AI-driven image generation tools. These tools produce images based on provided prompts. While this technology holds potential for constructive use, it also carries the risk of being exploited for malicious purposes, such as propagating hate. To address this, we propose a novel dataset, "HateImgPrompts". We have benchmarked the dataset with the latest models, including GPT-3.5, LLAMA 2, etc. The dataset consists of 9467 prompts, and the accuracy of the classifier after fine-tuning on the dataset is around 81%.
|
https://aclanthology.org/2025.nlp4dh-1.53
|
## introduction in the era of rapid technological advancement, the emergence of generative ai tools such as dall-e has revolutionized the landscape of content creation @xcite . these tools harness the power of artificial intelligence to generate images based on textual prompts, offering unprecedented versatility and creativity. while such advancements bring forth numerous benefits across various domains, they also pose inherent risks @xcite , particularly in the realm of spreading hate speech. images hold a unique potency in communication, transcending linguistic barriers and conveying complex ideas with remarkable efficiency. in the digital age, where visual content proliferates across online platforms, the impact of imagery on shaping societal discourse cannot be overstated. generative ai tools, with their ability to swiftly translate textual prompts into visual representations, have the potential to amplify the dissemination of hate speech at an alarming rate. hate speech, characterized by expressions that incite violence, discrimination, or hostility against individuals or groups based on attributes such as race, ethnicity, religion, or gender, remains a persistent and pervasive issue in contemporary society. while traditional forms of hate speech often rely on textual rhetoric, the introduction of generative ai adds a new dimension by enabling the rapid creation of visually compelling and emotionally evocative content to accompany such rhetoric. the visual nature of generated images not only enhances the persuasive power of hate speech but also facilitates its dissemination across online platforms with unprecedented speed and reach @xcite . in an interconnected digital ecosystem where attention is scarce and information overload is common, visually striking content tends to garner greater engagement and virality, thereby amplifying the impact of hate speech on public discourse @xcite . furthermore, the anonymity afforded by online platforms, coupled with the ease of access to generative ai tools, lowers the barrier for individuals or groups seeking to propagate hateful ideologies through visual means. this convergence of technology and human behavior creates fertile ground for the proliferation of hate speech, posing significant challenges for policymakers, technologists, and society at large. motivation: ai tools such as dall-e, midjourney, foocus, and others have the potential for misuse in creating images that propagate hate. when these images circulate on social media, they can significantly impact users. ai-generated images are often difficult for humans to detect, making mitigation crucial to prevent unethical use of these tools. the key contributions of our work are as follows: in the recent literature, a few works propose deepfake detection techniques. patel et al. (2023) proposed an architecture for improving the detection of deepfake images: a classifier of deepfake vs. real images. woo et al. (2022) proposed a new architecture for detecting deepfake images using frequency attention distillation. wang et al. (2022) proposed a gan architecture for the detection of deepfake images. deepfake detection can be deployed on social media sites to mitigate the spread of deepfake images, but this approach may not be appropriate in real-time environments, since some generated images spread positive or cultural content, and the classifier may flag such benign images as deepfakes.
so deploying it directly on social media platforms is not advisable. to mitigate and prevent the misuse of ai for unethical purposes, it would be beneficial to restrict ai tools from generating images that incite hatred. sathvik et al. (2024) proposed a dataset for preventing llms from generating gossip about celebrities. the dataset is a collection of prompts labeled 0 or 1. classifiers trained on the dataset can be deployed in real-time chat systems to filter prompts that generate gossip. gehman et al. (2020) proposed a novel dataset of toxic prompts, covering racism, discrimination, etc., with prompts labeled toxic vs. non-toxic. the prompts were built around gpt-2, and various recently released llms may behave differently on these prompts than expected. recent papers have focused on detecting deepfake images, and datasets have been proposed for mitigating gossip generation. the uniqueness of this paper lies in proposing a dataset for mitigating misuse of generative ai image generation tools rather than text. ## experimental results and discussion performance in the finetuning (ft) setting: in the ft setting, where models are trained specifically on the hateimgprompts dataset, gpt-3.5 demonstrates superior performance compared to the other models. performance in the few shot (fs) setting: under the fs scenario, where models are trained with a limited amount of data, gpt-3.5 continues to display robust performance with precision and accuracy values exceeding 70%. llama 2 also maintains competitive results, particularly in precision and recall metrics. while gemini shows reasonable performance, it falls slightly short compared to gpt-3.5 and llama 2 across all metrics. in the zero shot (zs) setting, where models are evaluated without any prior training on the hateimgprompts dataset, both llama 2 and gpt-3.5 consistently demonstrate strong performance across precision, recall, and accuracy metrics. their ability to generalize well to unseen data highlights their robustness in hate speech detection tasks. although gemini performs relatively well, it trails behind the top-performing models, especially in precision and recall. the experimental results underscore the effectiveness of large-scale pre-trained language models such as gpt-3.5 and llama 2 in the detection task, particularly when fine-tuned on specific datasets. these models exhibit strong adaptability and performance across various settings, showcasing their potential for real-world applications in combating online hate speech. real-time application: the classifiers trained on the dataset can be implemented within dall-e, midjourney, and other ai image generation tools to serve as a filter for detecting hateimgprompts. in the event that a prompt is identified as a hateimgprompt, it will be prevented from accessing the backend server. instead, the system can issue a warning or generate a response stating, "the prompt you provided has the potential to spread hate. we are committed to preventing such unethical use cases. we apologize for not fulfilling your request." if the classifier labels a prompt nhip (non-hateimgprompt), the prompt is passed to the ai model to generate the image. this mitigates the risk of ai being misused to spread hate. ## conclusion and future work we propose a novel dataset named "hateimgprompts" for preventing ai image generation tools from generating images that spread hate. models trained on the dataset as binary classifiers achieved an accuracy of around 81%.
the trained classifiers can be seamlessly deployed in image generation tools, as in the sketch below. future work could develop prompts in various other languages, as some ai image generation tools accept prompts in languages other than english. we would also like to build a dataset with explainable ai so that prompts can be changed automatically based on the hateful content, or so the system can recommend that the user change a particular word or context in the prompt.
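a hedged sketch of the gate described above: a prompt classifier sits in front of the image generator, and only prompts labeled non-hateful reach the backend. classify() and generate_image() are hypothetical stand-ins for a model fine-tuned on hateimgprompts and a tool's generation api:

```python
REFUSAL = ("the prompt you provided has the potential to spread hate. "
           "we are committed to preventing such unethical use cases. "
           "we apologize for not fulfilling your request.")

def gated_generate(prompt, classify, generate_image):
    label = classify(prompt)  # "HIP" (hateful) or "NHIP" (non-hateful)
    if label == "HIP":
        return {"image": None, "message": REFUSAL}
    return {"image": generate_image(prompt), "message": "ok"}

# Toy usage with trivial stand-ins.
result = gated_generate(
    "a sunny landscape",
    classify=lambda p: "NHIP",
    generate_image=lambda p: f"<image for '{p}'>",
)
print(result)
```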
| 40,114
|
152
| 2,024
|
DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI
|
Despite advancements in conversational AI, language models encounter challenges to handle diverse conversational tasks, and existing dialogue dataset collections often lack diversity and comprehensiveness. To tackle these issues, we introduce DialogStudio: the largest and most diverse collection of dialogue datasets, unified under a consistent format while preserving their original information. Our collection encompasses data from open-domain dialogues, task-oriented dialogues, natural language understanding, conversational recommendation, dialogue summarization, and knowledge-grounded dialogues, making it an incredibly rich and diverse resource for dialogue research and model training. To further enhance the utility of DialogStudio, we identify the licenses for each dataset and design external knowledge and domain-aware prompts for selected dialogues to facilitate instruction-aware fine-tuning. To improve transparency and support dataset and task-based research, as well as language model pre-training, all datasets, licenses, codes, and models associated with DialogStudio will be made publicly accessible.
|
https://aclanthology.org/2024.findings-eacl.152
|
## introduction recent years have seen remarkable progress in conversational ai, primarily driven by the advent of new approaches and language models @xcite @xcite . despite these advancements, the models can fall short when handling various tasks in a conversation due to the lack of comprehensive and diverse training data. current dialogue datasets @xcite are typically limited in size and task-specific, which results in suboptimal task-oriented model performance. additionally, the lack of dataset standardization impedes model generalizability. a few recent works @xcite @xcite have introduced large collections of datasets, which include diverse tasks based on public datasets. for instance, flan-t5 @xcite presents the flan collections with a wide array of datasets and tasks. despite this breadth, the coverage of dialogue datasets within the flan collection remains notably sparse, featuring only about ten datasets. although opt @xcite incorporated collections with several dialogue datasets, these collections remain inaccessible to the public. in contrast, efforts like instructdial @xcite and @xcite consist of more dialogue datasets, but they lack diversity and comprehensiveness. for instance, parlai mainly includes open-domain dialogue datasets, which are exclusively accessible through their platform. other collections @xcite @xcite often distill a single dataset from chatgpt or process datasets into a sequence-to-sequence format to support language model training, featuring only input-output pairs such as dialogue context and system response. however, previous collections often overlook other crucial dialogue information, constraining their utility for research on individual datasets, tasks, and broader applications. to overcome the aforementioned challenges, we introduce dialogstudio: the most comprehensive and diverse collection of publicly available dialogue datasets, unified under a consistent format. by aggregating dialogues from various sources, dialogstudio promotes holistic analysis and the development of models adaptable to a variety of conversational scenarios. the collection spans an extensive range of domains, aspects, and tasks, and is inclusive of several categories: open-domain dialogues, task-oriented dialogues, natural language understanding, conversational recommendation, dialogue summarization, and knowledge-grounded dialogues. thus, it can provide support for research on both individual dialogue tasks and large-scale language pre-training. dialogstudio stands out not only for its comprehensive coverage but also for its accessibility. it offers easy access with a unified format and documentation. a straightforward load_dataset() command through huggingface allows users to seamlessly interact with the collection (a sketch follows below), and we have included documentation for each dataset to enhance usability. we anticipate that this collection will enable comprehensive and standardized training and evaluation of dialogue models, fostering fair comparisons and propelling further advancements in conversational ai. furthermore, we identify dialogue domains, design external knowledge for available dialogues, and create tailored prompts for selected datasets accordingly. leveraging the datasets from dialogstudio, we have constructed instruction-aware models with capacities ranging from 770m to 3b parameters. these models can handle various kinds of external knowledge and are adept at both response generation and general tasks, demonstrating the benefits of dialogstudio.
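for illustration, loading one dataset from the collection could look like the sketch below. the hub path and config name are assumptions for illustration and should be checked against the released documentation:

```python
from datasets import load_dataset

# Hypothetical repo path and config; the unified format means every
# config exposes the same fields (dialogue turns, external knowledge, ...).
dialogs = load_dataset("Salesforce/dialogstudio", "MULTIWOZ2_2")
example = dialogs["train"][0]
print(example.keys())
```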
the main contributions of this paper are as follows: ## datasets unification and access we collect and process a wide range of datasets covering different domains, types, and tasks. since these datasets originally come in varying formats and carry different kinds of information, we propose a unification strategy to process all the datasets such that they can be loaded with the same data loader. ## experiments in this section, we present the pre-training details, methodologies, and metrics used to assess the performance of our dialogstudio model. the evaluation process aims to measure the model's ability to both solve task-oriented dialogues and understand general prompt-based instructions. ## conclusion in this study, we have introduced dialogstudio, a comprehensive collection that aggregates more than 80 diverse dialogue datasets while preserving their original information. this aggregation not only represents a significant leap towards consolidating dialogues from varied sources but also offers a rich tapestry of conversational patterns, intents, and structures, capturing the nuances and richness of human interaction. utilizing dialogstudio, we developed corresponding models, demonstrating superior performance in both zero-shot and few-shot learning scenarios. in the spirit of open research and advancing the field, we are committed to releasing dialogstudio to the broader research community.
| 31,044
|
23
| 2,023
|
Naturalistic Causal Probing for Morpho-Syntax
|
Probing has become a go-to methodology for interpreting and analyzing deep neural models in natural language processing. However, there is still a lack of understanding of the limitations and weaknesses of various types of probes. In this work, we suggest a strategy for input-level intervention on naturalistic sentences. Using our approach, we intervene on the morpho-syntactic features of a sentence, while keeping the rest of the sentence unchanged. Such an intervention allows us to causally probe pre-trained models. We apply our naturalistic causal probing framework to analyze the effects of grammatical gender and number on contextualized representations extracted from three pre-trained models in Spanish, the multilingual versions of BERT, RoBERTa, and GPT-2. Our experiments suggest that naturalistic interventions lead to stable estimates of the causal effects of various linguistic properties. Moreover, our experiments demonstrate the importance of naturalistic causal probing when analyzing pre-trained models. https://github.com/rycolab/naturalistic-causal-probing
|
https://aclanthology.org/2023.tacl-1.23
|
## introduction contextualized word representations are a byproduct of pre-trained neural language models and have led to improvements in performance on a myriad of downstream natural language processing (nlp) tasks (joshi et al., 2019; kondratyuk, 2019; zellers et al., 2019; brown et al., 2020). despite this performance improvement, though, it is still not obvious to researchers how these representations encode linguistic information. one prominent line of work attempts to shed light on this topic through probing (alain and bengio, 2017), also referred to as auxiliary prediction (adi et al., 2017) or diagnostic classification (hupkes et al., 2018). in machine learning parlance, a probe is a supervised classifier that is trained to predict a property of interest from the target model's representations. if the probe manages to predict the property with high accuracy, one may conclude that these representations encode information about the probed property. while widely used, probing is not without its limitations @xcite . for instance, probing a pre-trained model for grammatical gender can only tell us whether information about gender is present in the representations foot_1 ; it cannot, however, tell us how or whether the model actually uses information about gender in its predictions (ravichander et al., 2021; elazar et al., 2021; ravfogel et al., 2021; lasri et al., 2022). furthermore, supervised probing cannot tell us whether the property under consideration is directly encoded in the representations, or whether it can be recovered from the representations alone due to spurious correlations among various linguistic properties. in other words, while we might find correlations between a probed property and representations through supervised probing techniques, we cannot uncover causal relationships between them. in this work, we propose a new strategy for input-level intervention on naturalistic data to obtain what we call naturalistic counterfactuals, which we then use to perform causal probing. through such input-level interventions, we can ascertain whether a particular linguistic property has a causal effect on a model's representations. a number of prior papers have attempted to tease apart causal dependencies using either input-level or representation-level interventions. for instance, work on representational counterfactuals has investigated causal dependencies via interventions on neural representations. while quite versatile, representation-level interventions make it hard, if not impossible, to determine whether we are only intervening on our property of interest. another proposed method, templated counterfactuals, does perform an input-level intervention strategy, which is guaranteed to only affect the probed property. under such an approach, the researcher first creates a number of templated sentences (either manually or automatically), which they then fill with a set of minimal-pair words to generate counterfactual examples. however, template-based interventions are limited by design: they do not reflect the diversity of sentences present in natural language and thus lead to biased estimates of the measured causal effects. naturalistic counterfactuals improve upon template-based interventions in that they lead to unbiased estimates of the causal effect. in our first set of experiments, we employ naturalistic causal probing to estimate the average treatment effect (ate) of two morpho-syntactic features, namely number and grammatical gender, on a noun's contextualized representation.
we show the estimated ate's stability across corpora. in our second set of experiments, we find that a noun's grammatical gender and its number are encoded by a small number of directions in three pre-trained models' representations: bert, roberta, and gpt-2 @xcite. we further use naturalistic counterfactuals to causally investigate gender bias in roberta. we find that roberta is much more likely to predict the adjective hermoso(a) (beautiful) for feminine nouns and racional (rational) for masculine ones. this suggests that roberta is indeed gender-biased in its adjective predictions. finally, through our naturalistic counterfactuals, we show that correlational probes overestimate the presence of certain linguistic properties. we compare the performance of correlational probes on two versions of our dataset: one unaltered and one augmented with naturalistic counterfactuals. while correlational probes achieve very high (above 90%) performance when predicting gender from sentence-level representations, they only perform close to chance (around 60%) on the augmented data. together, our results demonstrate the importance of a naturalistic causal approach to probing. ## probing there are several types of probing methods that have been proposed for the analysis of nlp models, and there are many possible taxonomies of those methods. for the purposes of this paper, we divide previously proposed probing models into two groups: correlational and causal probes. on one hand, correlational probes attempt to uncover whether a probed property is present in a model's representations. on the other hand, causal probes, roughly speaking, attempt to uncover how a model encodes and makes use of a specific probed property. we compare and contrast correlational and causal probing techniques in this section. ## the causal framework the question of interest in this paper is how contextualized representations are causally affected by a morpho-syntactic feature such as gender or number. to see how our method works, it is easiest to start with an example. consider a pair of spanish sentences built around the noun phrases (1) el programador talentoso and (2) la programadora talentosa. the meaning of these sentences is equivalent up to the gender of the noun programador, whose feminine form is programadora. however, more than just this one word changes from (1) to (2): the definite article el changes to la and the adjective talentoso changes to talentosa. in the terminology of this paper, we will refer to programador as the focus noun, as it is the noun whose grammatical properties we are going to change. we will refer to the changing of (1) to (2) as a syntactic intervention on the focus noun. informally, a syntactic intervention may be thought of as taking place in two steps. first, we swap the focus noun (programador) with another noun that is equivalent up to a single grammatical property. in this case, we swap programador with programadora, which differs only in its gender marking. second, we reinflect the sentence so that all necessary words grammatically agree with the new focus noun. the result of a syntactic intervention is a pair of sentences that differ minimally, that is, only with respect to this one grammatical property (figure 1). another way of framing the syntactic intervention is as a counterfactual: what would (1) have looked like if programador had been feminine? the rest of this section focuses on formalizing the notion of a syntactic intervention and discussing how to use such interventions in a causal inference framework for probing.
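to make the two-step procedure concrete, the following is a minimal sketch of a dictionary-driven syntactic intervention. it is an illustration only, not the paper's implementation; the tiny swap table and the function name are hypothetical:

```python
# illustrative agreement table: masculine form -> feminine form.
# a real system would derive these pairs from a morphological lexicon.
SWAP = {
    "programador": "programadora",  # focus noun
    "el": "la",                     # definite article
    "talentoso": "talentosa",       # adjective
}

def syntactic_intervention(tokens, focus_index):
    """step 1: swap the focus noun for its minimal-pair form.
    step 2: reinflect every word that must agree with it."""
    out = list(tokens)
    out[focus_index] = SWAP[out[focus_index]]
    for i, tok in enumerate(out):
        if i != focus_index and tok in SWAP:
            out[i] = SWAP[tok]
    return out

print(syntactic_intervention(["el", "programador", "talentoso"], focus_index=1))
# -> ['la', 'programadora', 'talentosa']
```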
a note on inanimate nouns. when estimating the effect of grammatical gender here, we restrict our investigation to animate nouns, for example, programadora/programador (feminine/masculine programmer). grammatical gender of inanimate nouns is lexicalized, meaning that each noun is assigned a single gender; for example, puente (bridge) is masculine. in other words, there is not a non-zero probability of assigning each lemma to each gender, which violates a condition called positivity in the causal inference literature. thus, we cannot perform an intervention on the grammatical gender of those words, but rather would need to perform an intervention on the lemma itself. we refer to gonen et al. (2019) for an analysis of the effect of gender on inanimate nouns' representations. note that a similar lexicalization can also be observed in a few animate nouns, for example, madre/padre (mother/father). in such cases, to separate the lemma from gender, we assume that these words share a hypothetical lemma, which in our example represents parenthood, and combining that with gender would give us the specific forms (e.g., madre/padre). ## approximating the ate in this section, we show how to estimate equation (6) from a finite corpus of sentences s. ## dataset we use two spanish ud treebanks (nivre et al., 2020) in our experiments: spanish-gsd (mcdonald et al., 2013) and spanish-ancora (taulé et al., 2008). we only analyze gender on animate nouns and use open multilingual word-net (gonzalez-agirre et al., 2012) to mark animacy. corpus statistics for the datasets can be found in the appendix. ## insights from ate estimators in the following experiments, we first use the estimators introduced in §4 to approximate the ate of number and grammatical gender on contextualized representations. we look at how stable these ate estimates are across datasets, and whether they change across words with different parts of speech. we then analyze whether the ate (as an expected value) is an accurate description of how representations actually change in individual sentences. finally, we compute the ate of gender on the probability of predicting specific adjectives in a sentence, thereby measuring the causal effect of gender on adjective prediction. ## insights from naturalistic counterfactuals in the following experiments, we rely on a dataset augmented with naturalistic counterfactuals. we first explore the geometry of the encoded morpho-syntactic features. we then use the paired values computed using equation (14) to measure causal gender bias in masked adjective prediction. finally, we run a more classic correlational probing experiment, highlighting the importance of a causal framework when analyzing representations. ## conclusion we propose a heuristic algorithm for syntactic intervention which, when applied to naturalistic data, allows us to create naturalistic counterfactuals. although similar analyses have been run by prior work, using either templated or representational counterfactuals (elazar et al., 2021; vig et al., 2020; bolukbasi et al., 2016, inter alia), our syntactic intervention approach allows us to run these analyses on naturalistic data. we further discuss how to use these counterfactuals in a causal setting to probe for morpho-syntax. experimentally, we first showed that ate estimates are more robust to dataset differences than either our naïve (correlational) estimator or template-based approaches. second, we showed that the ate can (at least partially) predict how representations will be affected by an intervention on gender or number.
third, we employ our ate framework to study gender bias, finding a list of adjectives that are biased towards one gender or the other. fourth, we find that the variation of gender and number can be captured by a few principal axes in the nouns' representations. and, finally, we highlight the importance of causal analyses when probing: when evaluated on counterfactually augmented data, correlational probe results drop significantly.
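as a rough illustration of the estimator behind these findings, the ate of a feature on representations can be approximated as the mean difference between each sentence's representation and that of its naturalistic counterfactual. the sketch below assumes a hypothetical `encode` function returning one vector per sentence; it is our paraphrase, not the paper's exact estimator:

```python
import numpy as np

def estimate_ate(pairs, encode):
    """pairs: iterable of (original, counterfactual) sentences.
    returns the mean representation difference, an empirical
    approximation of the average treatment effect."""
    diffs = [encode(counterfactual) - encode(original)
             for original, counterfactual in pairs]
    return np.mean(diffs, axis=0)
```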
| 26774
|
626
| 2024
|
ORPO: Monolithic Preference Optimization without Reference Model
|
While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we revisit SFT in the context of preference alignment, emphasizing that a minor penalty for the disfavored style is sufficient for preference alignment. Building on this foundation, we introduce a straightforward reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the need for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across diverse sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models including Llama-2 Chat and Zephyr with more than 7B and 13B parameters: achieving up to 12.20% on AlpacaEval 2.0 (Figure 1), and 7.32 in MT-Bench (Table 2). We release code and model checkpoints for Mistral-ORPO-α (7B) and Mistral-ORPO-β (7B).
|
https://aclanthology.org/2024.emnlp-main.626
|
## introduction pre-trained language models (plms) with vast training corpora such as web texts @xcite or textbooks @xcite have shown remarkable abilities in diverse natural language processing (nlp) tasks @xcite @xcite @xcite. however, the models must undergo further tuning to be usable in downstream applications, typically through processes such as instruction tuning and preference alignment. instruction tuning @xcite @xcite trains the models to follow natural language instructions (github: https://github.com/xfactlab/orpo; models: orpo collection). figure 1: alpacaeval 2.0 results of llama-2 (7b) and mistral (7b) fine-tuned with orpo in comparison to the state-of-the-art models (win rates: llama (7b) 4.96, llama (13b) 7.7, llama-orpo (7b) 9.44, zephyr-α 8.35, zephyr-β 10.99, mistral-orpo-α 11.33, mistral-orpo-β 12.2). notably, mistral-orpo-α and β surpass zephyr β and llama-2-chat (13b) with a single epoch of training exclusively on ultrafeedback. to further align these models with human values, additional training is required with pairwise preference data, using techniques such as reinforcement learning with human feedback @xcite and direct preference optimization @xcite. existing preference alignment methods typically consist of a multi-stage process, as shown in figure 2, typically requiring a second reference model and a separate warm-up phase with supervised fine-tuning (sft) @xcite @xcite, which adds additional resource overheads. alignment without reward model recently proposed techniques for preference alignment mitigate the need for reinforcement learning @xcite @xcite @xcite. rafailov et al. (2023) introduce direct preference optimization (dpo), which removes the reward modeling stage. azar et al. (2023) prevent potential overfitting problems in dpo through identity preference optimization (ipo). ethayarajh et al. (2024) and cai et al. (2023) propose kahneman-tversky optimization (kto) and unified language model alignment (ulma), which do not require pairwise preference datasets, unlike rlhf and dpo. song et al. (2023) and xu et al. (2024) further suggest incorporating the softmax value of the reference response set into the negative log-likelihood loss to merge supervised fine-tuning and preference alignment. alignment with supervised fine-tuning there have been approaches to building human-aligned language models by conducting supervised fine-tuning (sft) only with filtered datasets @xcite @xcite @xcite @xcite. @xcite demonstrated that sft with a small amount of finely curated data could be sufficient for building helpful language model assistants. furthermore, @xcite and haggerty and chandra (2024) proposed an iterative process of fine-tuning supervised fine-tuned language models on their own generations after fine-grained selection of aligned generations, and @xcite suggested that a curated subset of a preference dataset is sufficient for alignment. ## odds ratio preference optimization we introduce a novel preference alignment algorithm, odds ratio preference optimization (orpo), which adds an odds ratio-based penalty to the conventional supervised fine-tuning (sft) loss (i.e., the negative log-likelihood (nll) loss) to differentiate the generation styles of favored and disfavored responses. we discuss the effects of sft on preference alignment in section 3.2 and explain the mechanism of orpo in section 3.3.
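as a sketch of the mechanism, orpo's objective adds a log-sigmoid odds-ratio term to the nll on the favored response. the code below is our reading of the formulation, assuming `logp_chosen` and `logp_rejected` are mean per-token log-probabilities of the favored and disfavored responses, and `lam` stands in for the weighting hyper-parameter:

```python
import torch
import torch.nn.functional as F

def orpo_loss(logp_chosen, logp_rejected, lam=0.1):
    """monolithic loss: sft nll on the favored response plus an
    odds-ratio penalty contrasting favored and disfavored responses."""
    # log odds(y|x) = log p - log(1 - p), computed from mean log-probs
    log_odds_chosen = logp_chosen - torch.log1p(-torch.exp(logp_chosen))
    log_odds_rejected = logp_rejected - torch.log1p(-torch.exp(logp_rejected))
    # -log sigma(log odds ratio): a minor penalty for the disfavored style
    odds_ratio_term = -F.logsigmoid(log_odds_chosen - log_odds_rejected)
    return -logp_chosen + lam * odds_ratio_term
```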
## experimental results first, we assess the general instruction-following abilities of the models by comparing the preference alignment algorithms on single-turn (section 5.1) and multi-turn (section 5.2) instruction-following benchmarks. then, we compare orpo against other alignment methods in a controlled setting, using opt with various model sizes (section 5.3). ## conclusion in this paper, we introduced a reference-free monolithic preference alignment method, odds ratio preference optimization (orpo), by revisiting and understanding the value of the supervised fine-tuning (sft) phase in the context of preference alignment. orpo was consistently preferred by the fine-tuned reward model over sft and rlhf across scales, and its win rate against dpo increased as the model size increased. furthermore, we validated the scalability of orpo with 2.7b and 7b pre-trained language models, exceeding larger state-of-the-art instruction-following language models on alpacaeval. specifically, mistral-orpo-α and mistral-orpo-β achieved 11.33% and 12.20% on alpacaeval 2.0, and 7.23 and 7.32 on mt-bench, thereby underscoring the efficiency and effectiveness of orpo.
| 30041
|
663
| 2020
|
Multitask Learning for Cross-Lingual Transfer of Broad-coverage Semantic Dependencies
|
We describe a method for developing broad-coverage semantic dependency parsers for languages for which no semantically annotated resource is available. We leverage a multitask learning framework coupled with annotation projection. We use syntactic parsing as the auxiliary task in our multitask setup. Our annotation projection experiments from English to Czech show that our multitask setup yields 3.1% (4.2%) improvement in labeled F1-score on in-domain (out-of-domain) test set compared to a single-task baseline.
|
https://aclanthology.org/2020.emnlp-main.663
|
## introduction broad-coverage semantic dependency parsing (sdp) was first introduced in the semeval shared task @xcite and aims to provide semantic analysis of sentences by capturing semantic relations between all content-bearing words in a sentence (we use broad-coverage semantic dependencies and semantic dependencies interchangeably throughout this paper). the rich graph structure introduced by sdp allows the model to cover a wide range of semantic phenomena such as negation, comparatives, possessives and various types of modification that have not been previously analyzed in other models such as semantic role labeling @xcite. despite all the advantages provided by sdp, resources with annotated semantic dependencies are limited to the three languages released in the semeval shared tasks @xcite @xcite, namely english, czech and chinese. this data scarcity motivates us to use well-known and traditionally used transfer methods such as annotation projection for building sdp models for languages without semantically annotated data. in annotation projection, we assume that we have access to sentence-aligned corpora that can be used for transferring semantic annotations from a rich-resource source language to the target language. figure 1: projecting sdp annotations from an english to a czech sentence. semantic dependencies of the english sentence (top) are projected using alignments (dashed lines in the middle) to obtain projected semantic dependencies (bottom) for the target sentence. motivated by the large number of similarities between syntactic and semantic dependencies, we further propose a simple but effective multitask learning framework to leverage supervised syntactic parse information and improve the representation learning capability in the intermediate layers of our semantic parser. our multitask learning approach, despite its simplicity, yields significant improvements in the performance of the vanilla semantic dependency parser built using annotation projection. we conducted annotation projection experiments from english to czech. our experiments show that our multitask setup yields 3.1% and 4.2% improvements in labeled f1 on in-domain and out-of-domain evaluation sets respectively. furthermore, we explore the efficacy of contextualized word representations, bert @xcite and elmo @xcite, as features in our annotation projection model and find a marginal gain from using those contextual features. to the best of our knowledge, this work is the first study to develop an enhanced semantic dependency parser through multitasking in the absence of annotated data. ## related work after the semeval shared tasks on broad-coverage semantic dependency parsing @xcite @xcite, there have been many studies building supervised sdp models @xcite @xcite @xcite @xcite; however, all efforts were restricted to the three languages released through the semeval shared tasks. there has been an extensive number of studies that use annotation projection to address data scarcity in different tasks such as part-of-speech tagging @xcite, syntactic parsing @xcite, semantic role labeling @xcite and semantic parsing @xcite. nevertheless, none of the previous works, to the best of our knowledge, looked into using annotation projection for building sdp models for languages without semantically annotated data.
motivated by the fact that different semantic representations or formalisms cover different aspects of sentence-level semantics, there has been a line of studies applying multitask learning over different semantic annotations @xcite @xcite or targeting cross-framework meaning representation @xcite. these studies use the shared semantic information across different representations to enhance the sdp model for a given language; however, none of them addressed the case in which no semantically annotated data is available for a language. this paper is the first work that aims to build an sdp model based on cross-lingual transfer without any annotation in the target language of interest. ## the parsing model for an input sentence $x_1, \dots, x_n$ with $n$ words, the goal of a semantic dependency parsing model is to learn binary dependency decisions $y_{i,j} \in \{0, 1\}$ for every head index $0 \le i \le n$ and dependent index $1 \le j \le n$, where $x_0$ is the dummy root token. for every head-dependent pair $(i, j)$ such that $y_{i,j} = 1$, the parser finds a label $l_{i,j}$ from a set of predefined semantic dependency labels $\mathcal{L}$. in most cases, the parsing decision is decomposed into two steps: unlabeled dependency parsing, and labeling each dependency edge. the only constraint here is that the final semantic graph should be acyclic. we use the standard model of @xcite, for which the parsing model is based on a simple head selection algorithm. this model learns dependency edge scores $s_{\text{edge}}(i, j)$ for all possible head-dependent pairs $(i, j)$. the final parsing decision is a sign function: $y_{i,j} = \mathbb{1}[s_{\text{edge}}(i, j) > 0]$. similarly, the parser learns labeling scores $s^{l}_{\text{label}}(i, j)$ and assigns, to every pair with $y_{i,j} = 1$, the label $l_{i,j} = \arg\max_{l \in \mathcal{L}} s^{l}_{\text{label}}(i, j)$. our parsing model uses a deep neural model in which the first layer is the embedding layer that consists of word, part-of-speech tag, and character representations. the second layer consists of deep bidirectional lstms @xcite that construct recurrent representations $r_i$ for every word. the third layer uses four single-layer feed-forward neural networks (fnn) as attention mechanisms for head and dependent binary decisions and label assignments. the final layer uses a bilinear function to score the fnn outputs. for training the model, the sigmoid cross-entropy function is used for the edges, and the softmax cross-entropy function is used for the labels. the two losses are interpolated to calculate the final loss value with a coefficient $0 < \lambda < 1$. ## projecting semantic dependencies for a source sentence $x_1, \dots, x_m$ with $m$ words and a target sentence $x'_1, \dots, x'_n$ with $n$ words, we obtain one-to-one alignments by running an unsupervised word alignment algorithm in both directions. we use the intersected alignments $a_1, \dots, a_m$ such that $0 \le a_i \le n$, where $a_i = 0$ marks an unaligned source word. for every source dependency relation $y_{i,j} \in \{0, 1\}$ where $a_i, a_j \ne 0$, we project the dependency edge and label to the target sentence, setting $y'_{a_i, a_j} = y_{i,j}$ and $l'_{a_i, a_j} = l_{i,j}$ (if $y_{i,j} = 1$). we then train a supervised parsing model on the projected dependencies. these projected dependencies are usually partial and contain some noise caused by various factors such as translation shifts and alignment errors.
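a simplified rendering of the projection step follows (our own sketch, not the authors' code; unaligned source words are simply absent from the alignment map):

```python
def project_dependencies(src_edges, src_labels, alignment):
    """src_edges: set of (head, dependent) index pairs on the source side.
    src_labels: dict mapping (head, dependent) -> semantic label.
    alignment: dict mapping aligned source positions to target positions."""
    tgt_edges, tgt_labels = set(), {}
    for (i, j) in src_edges:
        if i in alignment and j in alignment:  # both ends must be aligned
            a_i, a_j = alignment[i], alignment[j]
            tgt_edges.add((a_i, a_j))                    # project the edge
            tgt_labels[(a_i, a_j)] = src_labels[(i, j)]  # and its label
    return tgt_edges, tgt_labels
```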
## multitask learning with syntax modeling auxiliary tasks in a multitask learning framework allows the main task to benefit from structural or statistical similarities found in one or more auxiliary tasks to improve the model learned for a target task @xcite. given the large amount of (labeled and unlabeled) correlation existing between syntactic and semantic dependencies, we consider syntactic dependency parsing as the auxiliary task for semantic dependency parsing. in order to find the best parameter sharing structure, we try the following parameter sharing variations: 1) sharing embedding and recurrent layers, 2) sharing embedding and recurrent layers with an additional task-specific recurrent layer, 3) sharing all three layers, but with an additional task-specific recurrent layer, and 4) sharing all intermediate layers. figure 2 shows the first case, for which only the first two layers are shared between the two tasks. the overall loss value for the multitask model is computed by interpolating the semantic and syntactic losses using an interpolation coefficient ω which is tuned on the development data. we use projected semantic dependencies and syntactic dependency parses generated using a supervised parser to train the multitask model. thus the training data for the target language has projected semantic annotations plus fully parsed syntactic trees.
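the loss interpolation itself is a two-level weighted sum; the following minimal sketch reflects our reading of where the coefficients apply (the default values mirror the tuned constants reported in the experiments below):

```python
def multitask_loss(sem_edge_loss, sem_label_loss, syn_loss,
                   lam=0.025, omega=0.975):
    """interpolate label/edge losses within the semantic task (coefficient
    lam), then interpolate semantic and syntactic losses (coefficient
    omega) to prioritize the semantic task."""
    sem_loss = lam * sem_label_loss + (1.0 - lam) * sem_edge_loss
    return omega * sem_loss + (1.0 - omega) * syn_loss
```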
## experiments and results we consider english as the source language and czech as the target language. we use the semeval 2015 @xcite in-domain and out-of-domain test sets to evaluate our models. since the psd (prague semantic dependencies) annotation is available for both english and czech, we use that throughout our experiments. we use giza++ @xcite with its default configuration to obtain intersected word alignments on the europarl parallel corpus @xcite. the training data used in our projection experiments is drawn from europarl, which contains text from the political domain. the in-domain czech test set provided by semeval 2015 contains translated texts from corresponding sections of the wsj in the newswire domain, whereas the out-of-domain evaluation set for czech (also provided by semeval 2015) is drawn from the prague dependency treebank 3.0 @xcite, which mainly contains text from journals and scientific articles and is thus considered of a fairly different domain compared to europarl (political). we explore the efficacy of multitasking in our annotation projection model by comparing the multitask results with a single-task baseline model that does not use any multitasking. the training corpus of czech with projected annotations contains 612k sentences, but due to computational limitations, we train all models on a sample of 80k sentences randomly selected from the original projections. in order to simulate a fully unsupervised approach, we use 5% of the projected data as the held-out data during training. parsing parameters we use the structural skip-gram model of @xcite for english word embeddings and run word2vec @xcite on wikipedia text to acquire the word vectors for czech. we use udpipe (straka and straková, 2017) pretrained models v1.2.0 (trained on the universal dependencies v2.0) to produce automatic part-of-speech tags. we train the biaffine dependency parser of @xcite on the universal dependencies corpus v2.0 @xcite to generate supervised syntactic parses in our multitask learning experiments. all modules are implemented using the dynet library @xcite. we mainly use the hyper-parameters of @xcite, except that we use a character bilstm without any linear transformation layers. we use word and part-of-speech vectors of size 100, with 3-layer lstms of size 600, and feed-forward layers of size 600. we use dropout with probability 0.2 for words and part-of-speech tags, 0.25 for the recurrent and unlabeled feed-forward layers, and 0.33 for the labeled feed-forward layers. the interpolation constants λ and ω are set to 0.025 and 0.975 respectively to prioritize the semantic task as our main task in the multitask framework. we use the adam optimizer (kingma and ba, 2014) with a learning rate of 0.001 on minibatches of approximately one thousand tokens. we also concatenate contextual vectors to the input layer as additional features for the parser. we use pretrained elmo embeddings @xcite of size 1024 from @xcite. their model is trained on 20 million words randomly sampled from the raw texts released by the conll 2018 shared task for czech and uses the same model and hyper-parameters as @xcite. we use the pretrained multilingual bert models @xcite of size 768 from xiao (2018) with 12 layers and 12 heads. due to computational limitations, we only use the pretrained bert models in the input layer, without fine-tuning. ## conclusion we have described a semantic dependency parsing model based on annotation projection that does not use any annotated semantic data in the target language. we enhance the target semantic model by incorporating syntax in a multitask learning framework. we demonstrate that our multitask model outperforms the single-task model on both in-domain and out-of-domain test sets for czech.
| 4380
|
86
| 2025
|
Improved Norwegian Bokmål Translations for FLORES
|
FLORES+ is a collection of parallel datasets obtained by translation from originally English source texts. FLORES+ contains Norwegian translations for the two official written variants of Norwegian: Norwegian Bokmål and Norwegian Nynorsk. However, the earliest Bokmål version contained non-native-like mistakes, and even after a later revision, the dataset contained grammatical and lexical errors. This paper aims at correcting unambiguous mistakes, and thus creating a new version of the Bokmål dataset. At the same time, we provide a translation into Radical Bokmål, a sub-variety of Norwegian which is closer to Nynorsk in some aspects, while still being within the official norms for Bokmål. We discuss existing errors and differences in the various translations and the corrections that we provide.
|
https://aclanthology.org/2025.wmt-1.86
|
## introduction this paper describes our submission to the wmt25 open language data shared task, where participants were asked to contribute to open dataset collections such as flores+, the mt seed dataset or other parallel datasets. we have chosen to focus on the norwegian bokmål part of the flores+ dataset, as the authors noticed non-fluencies in the dataset in 2024 and notified the original authors. an attempt to resolve these issues led to additional errors, which form the basis of this paper. in addition to correcting these translations, we translate the resulting norwegian bokmål dataset into a specific variety of written norwegian called radical bokmål. having these two normed varieties can be beneficial for experiments where variation in norwegian spelling norms is important. we summarize some of the errors encountered in the newest bokmål translations, and show results for several machine translation baselines on the new and existing norwegian versions. ## the norwegian language and its varieties norwegian is one of the official languages of norway, along with the sámi languages and norwegian sign language. it is a north-germanic language historically descended from western norse, but following large saxon and east norse influences, it is largely mutually intelligible with its neighbors swedish and danish, and more different from icelandic and faroese. however, following centuries of having danish as norway's national language, nationalist movements in the late 19th century led to the establishment of two written standards: landsmål (today nynorsk), which was based on dialects "untainted" by danish, and rigsmål (riksmål, today bokmål), which was norwegianized danish. nynorsk historically aimed at preserving norwegian-specific features, which means that saxon and danish influences are less pronounced in nynorsk than in bokmål. ## the flores dataset the flores dataset is an evaluation dataset for multilingual machine translation, consisting of a dev and a devtest part with about 1000 sentences each. the dataset is multiparallel and english-centric: the original sentences are in english, and all other language variants were produced by translation. several versions were made available over time, reflecting efforts to increase language coverage and address quality issues. flores101 was the first version of flores, covering 101 languages, including norwegian (goyal et al., 2021). while the authors claimed that the sentences were "[...] translated in 101 languages by professional translators through a carefully controlled process", we observed severe quality problems with the norwegian bokmål translations. see further discussion in 3.1. flores200 was a continuation of both flores101 and guzmán et al. (2019), with an increased coverage of 200 languages (nllb team, 2022). the norwegian sentences appear to be unchanged between flores101 and flores200. the flores200 translations were used, among others, in the belebele (bandarkar et al., 2024) benchmark. quality problems in the former therefore directly affect results reported on the latter dataset. we have used the bokmål sentences from flores200 both as an aid in correcting the translations and as a point of comparison against the new dataset as a whole. these sentences initially struck the authors as unnatural, with reported examples such as translating iron (the metal) as strykejern (eng. 'clothes iron') and (judicial) court as hoff (eng. 'royal court').
flores+ the responsibility for the flores datasets was eventually moved to the open language data initiative @xcite. as a result, the updated versions are referred to as flores+ and published on huggingface @xcite. in january 2024, the authors of this paper reached out to the original flores101 authors to express concern over the quality of the norwegian bokmål dataset, based on flores200. following this, the dataset was updated, as indicated by a changelog note from november 11th, 2024. this note states that the norwegian version has been updated after quality assessment, but gives no further information. going through these changes, we see, however, that not all errors were corrected, and that new ones were introduced. correcting these errors is the main focus of this paper. ## translation correction we introduce our methodology and discuss some of the encountered errors. see appendix a for selected example sentences in all languages involved in this process. ## conclusion even after its initial correction, several obvious and non-native-like mistakes remained in the flores+ bokmål dataset. our work has corrected the most obvious mistakes, making sure that there are at least no grammatical or lexical mistakes in the dataset, without introducing excessive changes to the work done by the professional translators. we hope that these corrections make results from these datasets more reliable. on a more personal note, this is not the first time the authors have experienced problems with context and understanding getting in the way when translating datasets that are supposed to be the basis of massively parallel collections. we urge the creators of such original datasets to add clarifying remarks where there might be misunderstandings. following the observation that close to 70% of all sentences in the corrected dataset contained at least one lexical or grammatical error, we recommend that earlier users of the dataset reevaluate results obtained on it. there is also some reason to doubt the claims that all these translations were indeed done by professional translators, and we hope that future dataset creators will involve native professional communities to gain valuable feedback in these situations.
| 41218
|
174
| 2020
|
MeisterMorxrc at SemEval-2020 Task 9: Fine-Tune Bert and Multitask Learning for Sentiment Analysis of Code-Mixed Tweets
|
Natural language processing (NLP) has been applied to various fields including text classification and sentiment analysis. In the shared task of sentiment analysis of code-mixed tweets, which is a part of the SemEval-2020 competition, we preprocess the datasets by replacing emoji, deleting uncommon characters and so on, and then fine-tune the Bidirectional Encoder Representations from Transformers (BERT) model to perform the best. After exhausting our top-3 submissions, our team MeisterMorxrc achieves an averaged F1 score of 0.730 in this task, and our CodaLab username is MeisterMorxrc.
|
https://aclanthology.org/2020.semeval-1.174
|
## introduction language is an indispensable and important part of human daily life, and natural language is everywhere as the most direct and simple tool of expression. natural language processing transforms the language used for human communication into a machine language that can be understood by machines; it provides a framework of models and algorithms for studying language capabilities. in recent years, nlp research has increasingly used new deep learning methods. as an important branch of artificial intelligence, language models are models that can estimate the probability distribution of a group of language units (usually word sequences). these models can be built at a lower cost and have significantly improved several nlp tasks, such as machine translation, speech recognition and parsing. the processing flow of natural language can be roughly divided into five steps: obtaining the corpus, preprocessing the corpus, feature representation, model training, and evaluating the resulting model. with the rapid development of the internet, the frequency of online communication on social platforms such as weibo, twitter, and forums is rising, and the internet itself has also changed from a "reading internet" to an "interactive internet". the internet has not only become an important source for people to obtain information, but also an important platform for people to express their opinions, share their own experiences and directly express their emotions. the achievements of nlp research laid a good foundation for text sentiment analysis. text sentiment analysis is an important research branch in the field of natural language understanding, involving theories and methods from linguistics, psychology, artificial intelligence, etc. it mainly includes the processing of text sources, the subjective/objective classification of web text, and the analysis of subjective text, among other steps. due to the huge inclusiveness and openness of the internet itself, it attracts users of different races, languages, cultural backgrounds and religious beliefs to communicate with each other. therefore, mixed-language sentiment classification will be an important research direction for nlp. ## related work sentiment analysis is a research area with a long history that helps us understand the connections and relationships between objects. in recent years, many scholars have made great progress on sentiment analysis. a basic task in sentiment analysis is classifying the polarity of a given text at the document, sentence, or feature/aspect level: whether the expressed opinion in a document, a sentence or an entity feature/aspect is positive, negative, or neutral. subsequently, the method described in a patent by @xcite looked specifically at sentiment and identified individual words and phrases in text with respect to different emotional scales. many other subsequent efforts were less sophisticated, using a mere polar view of sentiment, from positive to negative, such as work by turney @xcite and pang @xcite, who applied different methods for detecting the polarity of product reviews and movie reviews respectively. one can also classify a document's polarity on a multi-way scale, which was attempted by pang @xcite and snyder @xcite. but according to our findings, this research becomes particularly difficult in multilingual societies, especially with code-mixed texts. though some researchers have explored the field, there is still a long way to go.
sharma and srinivas explored various methods to normalize the text and judged the polarity of a statement as positive or negative using various sentiment resources @xcite. bhargava and sharma developed a flexible and robust system for mining sentiment from code-mixed sentences combining english with four other indian languages (tamil, telugu, hindi and bengali) @xcite. ghosh and das extracted sentiment (positive or negative) from facebook posts in the form of code-mixed social media data using a machine learning approach @xcite.
| 6081
|
16
| 2021
|
A Computational Model for Interactive Transcription
|
Transcribing low resource languages can be challenging in the absence of a good lexicon and trained transcribers. Accordingly, we seek a way to enable interactive transcription whereby the machine amplifies human efforts. This paper presents a data model and a system architecture for interactive transcription, supporting multiple modes of interactivity, increasing the likelihood of finding tasks that engage local participation in language work. The approach also supports other applications which are useful in our context, including spoken document retrieval and language learning.
|
https://aclanthology.org/2021.dash-1.16
|
## introduction understanding the "transcription challenge" is a prerequisite to designing effective solutions, minimizing bottlenecks @xcite. we must face realities such as the lack of a good lexicon, the short supply of transcribers, and the difficulty of engaging people in arduous work. sparse transcription is an approach to transcribing speech in these low-resource situations, an approach which is well suited to places where there is limited capacity for transcription. sparse transcription admits multi-user workflows built around shared data, for human-in-the-loop transcriptional practices, or "interactive transcription" @xcite. sparse transcription is 'sparse' because we do not produce contiguous transcriptions up front. instead, we transcribe what we can, and lean on computational support to amplify those efforts across the corpus. this is not suggested as an alternative to contiguous transcription, but as a more efficient way to produce it, especially in those situations where linguists and speakers are "learning to transcribe" @xcite. sparse transcription relies on word spotting. wordforms that occur frequently in the transcribed portion of a corpus are used to spot forms in the untranscribed portion. these are presented for manual verification, speeding up the contiguous transcription work while indexing the entire corpus. sparse transcription accepts the realities of early transcription: we lack a good lexicon; we need to grow the lexicon as we go; and we do not have a ready workforce of transcribers. moreover, in the context of language documentation, transcription is iterative and interactive. linguists and speakers leverage complementary skills to accomplish the task @xcite @xcite. sparse transcription leverages the kind of work speakers are motivated to do. for example, when it comes to recordings, speakers tend to engage with the content more than the particular form of expression @xcite. identifying key words and clarifying their meanings is often more engaging than puzzling over the transcription of unclear passages @xcite. an indexed corpus can be searched to identify additional high-value recordings for transcription. we report on a computational model for interactive transcription in low-resource situations. we discuss the kinds of interactivity which the sparse transcription model enables, and propose an extension which provides real-time word discovery in a sparse transcription system. for concreteness we also present a user interface which provides real-time suggestions as the user enters words. we work with speakers of kunwinjku (iso gup), a polysynthetic indigenous language of northern australia. members of this community have expressed interest in using technology to support their own language goals. through this work we hope to support language learning and corpus indexing, and produce locally meaningful results that help to decolonize the practice of language technology @xcite. this paper is organized as follows. section 2 gives an overview of the sparse transcription model. section 3 describes a particular use case of sparse transcription: interactive transcription. in section 4 we describe the system architecture and the design decisions which enable an interactive human-computer workflow. section 5 describes the user interface and shows screenshots of the implementation. we conclude with a summary in section 6.
## the sparse transcription model following bird (2020b), we understand transcription to be the task of identifying meaningful units in connected speech. these units belong to a growing inventory (the glossary, or lexicon); their orthographic representation is generally not settled. we add each new meaningful unit to the glossary as it is encountered, initializing the entry with a form and a gloss. thus, a transcriptional token is a pairing of a locus in the speech stream with a glossary entry. we are agnostic about the size of this unit; it could be a morpheme, word, or multi-word expression. transcription begins with a lexicon. there is always a word list, since this is what is used for establishing the distinct identity of a language. there may also be some historical transcriptions, and these words can be included in the initial lexicon. from this point on, transcription involves growing the lexicon. the speech stream is broken up into 'breath groups' which we use as manageable chunks for transcription. in the course of transcription, it is natural for a non-speaker linguist to attempt to repeat any new word and have a speaker say it correctly and give a meaning. thus, the process is interactive in the interpersonal sense. we hear and confirm the word in context, and record it in the lexicon with a lexical identifier and a pointer to where it occurs in the media. in the background, a sparse transcription system uses this confirmed glossary entry to spot more instances. word spotting is an automatic task which discovers putative tokens of glossary entries. glossary entries are already stored with pointers to occurrences in particular breath groups. discovering new instances through word spotting then becomes a retrieval task, where each breath group is seen as a mini-document. breath groups which are determined to contain the exemplar lexical entry are queued for speaker confirmation. confirmed spottings are updated with pointers to their respective breath groups. word spotting proceeds iteratively and interactively, continually expanding the lexicon while transcribing more speech. as we focus on completing the contiguous transcription of a particular text, we grow the lexicon and the system attempts to discover other instances across the wider corpus. as the system calls our attention to untranscribed regions, which may be difficult to complete for a variety of reasons, we effectively marshal the whole corpus to help us. a sparse transcription system is a form of computer supported collaborative work, in that it alleviates productivity bottlenecks via automation and asynchronous workflows @xcite. the sparse transcription model, organized around a growing glossary of entries with pointers to instances in speech, can underlie a variety of special-purpose apps which support various tasks in the transcription workflow. for example, le ferrand et al. (2020) demonstrate the use of a word confirmation app based on word-spotted data for the purpose of confirming automatically-generated hypotheses. we have prototyped a system which implements the core functionalities described in this section, and which includes a user interface that supports interactive transcription. figure 2 gives a schematic view of the sparse transcription model.
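the data model just described can be sketched as three record types. this is a minimal illustration with field names of our choosing, not the system's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BreathGroup:
    """a vad-detected chunk of speech, treated as a mini spoken document."""
    audio_id: str
    start: float  # offset in seconds
    end: float

@dataclass
class Token:
    """a transcriptional token: the pairing of a locus in the speech
    stream (a breath group) with a glossary entry."""
    entry_id: str
    breath_group: BreathGroup
    confirmed: bool = False  # set once a speaker verifies the spotting

@dataclass
class GlossaryEntry:
    """a meaningful unit (morpheme, word, or multi-word expression)."""
    entry_id: str
    form: str   # working orthographic form; not settled
    gloss: str
    tokens: List[Token] = field(default_factory=list)  # pointers to occurrences
```

word spotting then amounts to proposing new unconfirmed tokens for an entry; confirmation flips the flag and adds the pointer to the entry's token list.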
## learning to transcribe a linguist, learning to transcribe, is capable of listening to audio and quickly transcribing the lexemes they recognize. as lexemes are recorded, they are added to the transcriber's personal glossary. entries in this glossary may be morphs, words, or other longer units such as multi-word expressions. the record-keeping of the glossary helps manage the linguist's uncertainty in an accountable way, as they give the task their best first pass. as is the standard behavior in sparse transcription, the glossary is updated with links from glossary entries to the segments of audio in which they were found. speakers of the language can access a view of the linguist's glossary entries, and confirm entry tokens for admission to the global glossary. the design decision to maintain personal glossaries for individual users and postpone adjudication with a shared, canonical glossary is an extension of the concept defined in the sparse transcription model. figure 1: word spotting in the sparse transcription model begins when the user confirms the existence of a glossary entry in the audio. a token is created for that instance of the glossary entry, and can be used to spot similar instances in other breath groups across the corpus. figure 2: the sparse transcription model: audio is segmented into breath groups, each one a mini spoken document where words may be spotted (with given probability); interpretations span one or more breath groups @xcite. multiple transcribers can contribute to the shared glossary, initializing their own project with the current state of the global lexicon. confirmed glossary entries can be used to spot similar entries across the whole corpus, maximizing the efforts of the learner, and providing more pointers from a glossary entry to the breath groups where it occurs. over time, this process leads to more contiguous transcriptions as the transcriber revisits and revises their lexicon in the course of their transcription work. however, there is an opportunity here to get more immediate feedback from the system. a sparsely transcribed breath group (whether system- or human-transcribed) provides signal about the breath group as a whole. combined with the fact that the human is currently engaged in entering their hypotheses, we can provide system suggestions conditioned on sparsely transcribed data which are updated interactively as the user types. anchored at the locus of a known lexeme, and conditioned on additional available signal, i.e., a predicted phone sequence, the system posits suggestions for untranscribed regions. we refer to this as 'local word discovery' (fig. 3). working together with the system, a linguist's hypotheses can be queued for confirmation in the same way that word spotting queues hypotheses for speaker confirmation. simultaneously, the transcriber leverages a model to get immediate feedback on the connections between what they hear and what a model encodes about the language, potentially aiding language learning @xcite. up to this point, we have established the interactive nature of transcription on three levels. first, it is interpersonally interactive, as a linguist works with speakers to associate forms with meanings. second, sparse transcription is interactive in the sense that it attempts to amplify the effort of transcribers by propagating lexical entries across the whole corpus via word spotting. finally, the implementation of local word discovery is interactive in the context of the "learning to transcribe" use case. it occupies a distinct niche with a smaller feedback loop than word spotting: transcription hints are polled from the model and filtered with every keystroke (figs. 6-8).
it is improved by word spotting because contiguous transcriptions reduce uncertainty in the input to the local word discovery model. it allows a linguist to prepare and prioritize work for the interpersonally interactive task of confirming entries with a speaker. figure 3: sparsely transcribed input can be leveraged for local word discovery methods which are complementary to word spotting. ## system architecture the interactive transcription use case calls for a variety of computational agents. some agents service computationally expensive batch tasks, while others are coupled with user events down to the level of keystrokes. agents are implemented as containerized services, some corresponding to long-running tasks, e.g. media processing, while others are integral to the user interface, e.g. phone alignment. the implementation supports restful endpoints and a real-time websocket-based api. the api layer responds to events in the client, and endpoints support the methods in the data model. there are three main kinds of operation: simple crud operations like uploading media, data model operations such as adding a token to a glossary, and real-time queries such as word discovery. data validation is distributed across the client and the server, for performance reasons and to mitigate the effects of network dropouts. the client replicates a subset of the server data model, storing this in the browser's database and synchronizing it with the server opportunistically. we utilise a continuous websocket session to relay user input to the server, fetching and displaying results in real time. commonly seen in web search, this is a form of distributed user interface where computational resources are distributed across platforms and architectures @xcite. this is achieved via asynchronous programming with observable streams, via implementations of the reactivex pattern for javascript (rxjs) on the client and python (rxpy) on the server. input events from the browser are filtered, debounced and piped through a websocket transport to a session handler on the back end. similarly, components of the client subscribe to session event streams coming from the back end, such as aligning user input to a phone stream, and presenting a series of word completions.
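the per-keystroke pipeline can be approximated without the reactivex machinery. the sketch below is a plain-asyncio stand-in for the rxjs/rxpy streams actually used, assuming `keystrokes` is an asyncio queue of characters and `query_agent` is a hypothetical coroutine wrapping the word discovery agent:

```python
import asyncio

async def debounced_suggestions(keystrokes, query_agent, delay=0.3):
    """yield word-discovery suggestions, querying the agent only once
    keystroke input has been quiet for `delay` seconds (a debounce)."""
    buffer = ""
    while True:
        buffer += await keystrokes.get()  # wait for the next keystroke
        try:
            while True:  # absorb further keystrokes until a quiet period
                buffer += await asyncio.wait_for(keystrokes.get(), timeout=delay)
        except asyncio.TimeoutError:
            pass  # input has settled; fire a single query
        yield await query_agent(buffer)
```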
the system makes use of several agents whose implementation may vary across contexts or evolve over time. we have implemented the following agents: audio pre-processing. when a user adds an audio file to a transcription project, the audio is preprocessed and we store metadata and alternative representations which are useful for downstream tasks. for example, the pipeline includes voice activity detection (vad), which identifies breath groups. next, we calculate peaks, acoustic amplitude values, which we use to visualize speech activity over time. finally, the audio is resampled and sent to the phone recognition agent, and the results are displayed beneath the waveform as extra information to support transcription. phone recognition. allosaurus is a universal phone recognizer trained on over 2,000 languages @xcite. the model can be used as-is to provide phones from a universal set, or it can be fine-tuned with language-specific phonemic transcriptions. the model we currently deploy is fine-tuned on 68 minutes of kunwinjku speech across 5 speakers. we calculated a 25.6% phone error rate on 10 minutes of speech from a hold-out speaker. word spotting. word spotting traditionally is audio exemplar matching against spans of raw audio @xcite. it has been shown to be feasible in low-resource scenarios using neural approaches @xcite. le ferrand et al. (2020) describe several plausible speech representations suited to low-resource word spotting. local word discovery. this is distinct from word spotting, which locates more tokens of existing glossary entries. local word discovery attempts to fill in untranscribed regions between existing tokens. this agent provides transcription hints via a smaller feedback loop, the third kind of interactivity discussed in section 3. the system retrieves the potentially large set of suggested words, and filters it down interactively as the transcriber types. the model is free to favor recall, because the raw suggestions do not need to be immediately revealed. we implement local word discovery using a finite state analyzer for kunwinjku @xcite, modified to recognize possible word-forms given a stream of phones and the offsets of known lexemes. we use panphon to estimate articulatory distances between lexemes and phone subsequences to obtain rough alignments @xcite. ## user interface the user interface (fig. 5) is inspired by minimalist design, motivated by the need for an inclusive agenda in language work (cf. @xcite). in the left column is a waveform which has been automatically segmented into breath groups. below the waveform is a map of waveform peaks, to facilitate navigation across long audio files. useful context is also displayed, including the transcript of the preceding breath group, followed by the sequence of phones produced from the audio, with user transcriptions aligned roughly to the phone sequence. below this is the input box, scoped to the current breath group, where users enter lexemes, with occasional suggestions offered by the local word discovery module, which filter interactively per keystroke (figs. 6-8). in the right column there is a running transcript of the audio file, with the text of the transcript for the current breath group shown in bold. the user interface is designed to be navigable entirely through the keyboard, to support ergonomic transcription (cf. @xcite). ## conclusion transcription is especially challenging when we lack a good lexicon and trained transcribers. consequently, we seek to bring all available resources to bear, including the knowledge of speakers, linguists, and a system, all of whom are "learning to transcribe." we presented a use case for interactive transcription and showed how this can be supported within the sparse transcription model. in designing and implementing a sparse transcription system for a specific use case, we elaborated on some concepts presented in @xcite. we examined various kinds of interactivity in low-resource language transcription, and we proposed local word discovery as a grammatically-informed approach to word spotting. this allows individual users to manage their local lexicon independently of the task of curating a canonical lexicon, enabling multi-user workflows. finally, we reported on the architecture and implementation of an interactive transcription system. it enables a transcriber to take care of much of the arduous transcription task up front, and to allocate more meaningful work for speakers. the product of interaction with the system is an expanded lexicon, which can be used to index the corpus for information retrieval, thus supporting the community goal of access to knowledge locked up in many hours of recorded audio.
additionally, we anticipate that support for growing personal lexicons will be a valuable resource for the language learning that takes place alongside transcription. in short, the system is designed to produce the content that language communities care about, in a way that leverages the kind of language work that people are willing to do. operationalizing the sparse transcription model makes it possible to streamline field-based transcriptional practices, and is expected to lead to further implementations of special-purpose interfaces that support transcription of low-resource languages. figure 8: the user is guided to grammatically valid transcriptions which can be added to their lexicon.
| 8099
|
392
| 2023
|
Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP
|
When deploying machine learning systems in the wild, it is highly desirable for them to effectively leverage prior knowledge in the unfamiliar domain while also firing alarms on anomalous inputs. In order to address these requirements, Universal Domain Adaptation (UniDA) has emerged as a novel research area in computer vision, focusing on achieving both adaptation ability and robustness (i.e., the ability to detect out-of-distribution samples). While UniDA has led to significant progress in computer vision, its application to language input still needs to be explored despite its feasibility. In this paper, we propose a comprehensive benchmark for natural language that offers thorough viewpoints of the model's generalizability and robustness. Our benchmark encompasses multiple datasets with varying difficulty levels and characteristics, including temporal shifts and diverse domains. On top of our testbed, we validate existing UniDA methods from computer vision and state-of-the-art domain adaptation techniques from the NLP literature, yielding valuable findings: we observe that UniDA methods originally designed for image input can be effectively transferred to the natural language domain, while also underscoring the effect of adaptation difficulty in determining the model's performance.
|
https://aclanthology.org/2023.findings-emnlp.392
|
## introduction deep learning models demonstrate satisfactory performance when tested on data from the training distribution. however, real-world systems ceaselessly encounter novel data that deviate from the trained distribution, a situation commonly known as distributional shift. when confronted with such inputs, machine learning models frequently struggle to differentiate them from regular input. consequently, they face challenges in adapting their previously acquired knowledge to the new data distribution, resulting in degraded performance. figure 1: the model trained with formal language (source domain) will likely face spoken language (target domain) in the real world. the model is expected to properly handle such transferable input despite the distributional shift (middle). at the same time, the model should discern unprocessable inputs (bottom) from the target domain. the aforementioned phenomenon represents a longstanding challenge within the machine learning community, wherein even recent cutting-edge language models @xcite @xcite do not serve as an exception to this predicament @xcite . in response to these challenges, existing literature proposes two distinct approaches. the first approach, known as domain adaptation (da) @xcite @xcite , endeavors to establish alignment between a new set of data from an unknown distribution and the model's prior knowledge distribution. the objective is to enhance the model's generalization capability and reduce the performance drop stemming from the distributional shift. in parallel, a distinct line of work, referred to as out-of-distribution (ood) detection @xcite @xcite , focuses on discerning inputs originating from dissimilar distributions. these methods opt to circumvent potential risks or disruptions arising from shifted inputs, thereby enriching system robustness and resilience. while both approaches offer unique advantages addressing specific distributional shifts, integrating their merits could substantially enhance robustness. in pursuit of this objective, a novel field called universal domain adaptation (unida) @xcite has emerged, aiming to harness the synergies of both ood detection and da when confronted with distributional shifts. unida leverages the best of both worlds and offers comprehensive perspectives that integrate the merits of these two research areas. the essence of unida lies in precisely measuring the uncertainty of the data from the shifted distribution. then, we can enhance the model's transferability by distinguishing the portion of low-uncertainty inputs that can be adequately handled with the current model's knowledge. simultaneously, we enrich the robustness of the model to ood inputs by discerning the remaining samples that cannot be processed normally. however, distinguishing between these inputs and properly processing them becomes increasingly challenging without explicit supervision. despite the versatility of unida, this topic has yet to be explored in the natural language processing (nlp) literature. as a cornerstone in enhancing reliability against distributional shifts in nlp, we introduce a testbed for evaluating the model's robustness in a holistic view. first, we construct various adaptation scenarios in nlp, utilizing an array of thoughtfully selected datasets.
to discern the degree to which our proposed datasets incorporate the various degrees of challenge in unida, we define two novel metrics: performance drop rate (pdr) and distinction difficulty score (dds). using these metrics, we verify that our testbed captures a broad spectrum of distributional shifts. finally, based on the suggested setting, we systematically compare several unida methods inherently designed for the task against heuristic combinations of previous approaches for parts of the problem, i.e., ood detection and da. our empirical results show that unida methods are fully transferable to the nlp domain and can robustly respond to various degrees of shift. moreover, we find that the adaptation difficulty notably affects the performance of the methods. in certain circumstances, da methods display comparable or even better performance. we release our dataset, encouraging future research on unida in nlp to foster the development of more resilient and domain-specific strategies. ## universal domain adaptation ## testbed design the primary objective of our research is to construct a comprehensive benchmark dataset that effectively captures the viewpoint of unida. to accomplish our objective, we attempt to create a diverse dataset that encompasses a range of difficulty levels and characteristics, such as domains, sentiment, or temporal change. these variations are the fundamental elements that can significantly influence overall performance. specifically, we initially select datasets from multiple practical domains and approximate the adaptation difficulty by quantifying different shifts with our newly proposed metrics. in the following subsections, we provide an in-depth explanation of our dataset along with the analysis of our benchmarks. ## conclusion and future work in this study, we present a testbed for evaluating unida in the field of nlp. the testbed is designed to exhibit various levels of domain and category gaps through different datasets. two novel metrics, pdr and dds, were proposed, which can measure the degree of domain and category gap, respectively. we assessed unida methods and the heuristic combination of cda and ood detection in our proposed testbed. experimental results show that unida methods, initially designed for the vision domain, can be effectively transferred to nlp. additionally, cda methods, which are not fully optimized for the unida scenario, produce comparable results in certain circumstances. recent trends in nlp focus on large language models (llms) for their significant generalization abilities. however, the robustness of llms from the perspective of unida remains uncertain. as part of our future work, we will assess the performance and capabilities of llms from a unida viewpoint.
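a hedged sketch of the two diagnostics named above; the exact definitions in the paper may differ, so the formulas below are illustrative assumptions, not the authors' specifications.

```python
# Sketch: plausible readings of the two testbed metrics.

def performance_drop_rate(acc_in_domain: float, acc_shifted: float) -> float:
    """Relative accuracy drop when moving from the training distribution
    to the shifted target distribution (one plausible reading of PDR)."""
    return (acc_in_domain - acc_shifted) / acc_in_domain

def distinction_difficulty_score(auroc_ood: float) -> float:
    """Higher when in- and out-of-distribution samples are harder to tell
    apart (one plausible reading of DDS, from OOD-detection AUROC)."""
    return 1.0 - auroc_ood

print(round(performance_drop_rate(0.90, 0.72), 2))   # 0.2 -> 20% relative drop
print(round(distinction_difficulty_score(0.85), 2))  # 0.15
```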
| 24,481
|
8
| 2024
|
DCU-NLG-PBN at the GEM ’24 Data-to-Text Task: Open-Source LLM PEFT-Tuning for Effective Data-to-Text Generation
|
LLMs have been used in various tasks with impressive success, including data-to-text generation. However, one concern when LLMs are compared to alternative methods is data contamination; in other words, for many datasets the data used in training these models may have included publicly available test sets. In this paper, we explore the performance of LLMs using newly constructed datasets in the context of data-to-text generation for English, Chinese, German, Russian, Spanish, Korean, Hindi, Swahili, and Arabic. We performed a testing phase to evaluate a range of prompt types and a fine-tuning technique on Mistral 7B and Falcon 40B. We then fully evaluated the most promising system for each scenario: (i) LLM prompting in English followed by translation, and (ii) LLM PEFT-tuning in English followed by translation. We find that fine-tuning Mistral outperforms all other tested systems and achieves performance close to GPT-3.5. Few-shot prompting with dynamic selection of examples achieves the best results among the prompting strategies. The human evaluation to be carried out by the shared-task organisers will provide insight into the performance of the new datasets. In conclusion, we observed how fine-tuning an open-source LLM can achieve performance close to a state-of-the-art closed-source LLM while using considerably fewer resources.
|
https://aclanthology.org/2024.inlg-genchal.8
|
## introduction with the advancement of large language models (llms), their capabilities have been explored in many tasks including data-to-text generation, which maps structured input data into a suitable output text containing all and only the provided information. however, the datasets for many data-to-text tasks have been available online for years and might have been used to train llms. in the work reported here, we participate in the gem 2024 shared task @xcite using new datasets which are not available online. in more detail, we address the data-to-text generation task using two settings: llm prompting and fine-tuning. however, fine-tuning llms for specific tasks remains challenging, often constrained by computational resources. to mitigate this, we use a parameter-efficient fine-tuning (peft) technique to substantially reduce the number of parameters participating in training, making the fine-tuning process far more computationally efficient while maintaining model performance. in both explored settings, we use an external machine translation (mt) system to translate our english-generated texts into chinese, german, russian, spanish, korean, hindi, swahili, and arabic. the paper is structured as follows. section 2 describes the data and task, and section 3 presents the general approach, prompt types, testing phase and the specific systems we fully evaluated. the experimental set-up and results are outlined in section 4, and section 5 provides conclusions. all the code and generated texts are available on github. ## data and task the data-to-text task converts input data, specifically rdf triples representing subject | predicate | object combinations, into coherent and contextually appropriate text that accurately conveys all and only the information present in the input triples. the gem 2024 shared task provides datasets for two subtasks: (i) webnlg-based, utilising the official webnlg @xcite test set, and (ii) wikidata-based, using newly obtained triples from wikidata. each subtask includes three parallel datasets: factual, counterfactual, and fictional. the factual dataset consists of triples found in webnlg or wikidata. the counterfactual dataset switches entities based on their class, creating hypothetical scenarios. finally, the fictional dataset replaces original entities with those created via llm prompting. for all datasets, only the test set is provided, containing the input triples with predicates in english. no training data is available, and reference texts are not provided. however, for the webnlg-based factual dataset, references can be extracted from the original webnlg english dataset, allowing for some level of automatic evaluation. ## systems we consider two settings to create our systems using pretrained llms (figure 1): (i) generate text in english using out-of-the-box llms with prompting, (ii) generate text in english using a fine-tuned llm. in the first setting, we employ pretrained llms without additional training and use various prompting strategies to guide the model in generating text based on the input rdf triples. in the second setting, we fine-tune pretrained llms using low-rank adaptation (lora). regardless of the generation method, the generated english text is then translated into chinese, german, russian, spanish, korean, hindi, swahili, and arabic using a machine translation system. ## experimental set-up and results we executed our experiments using the transformers library of huggingface and the paid-for google translate api in late march/early april 2024.
the systems are tested using the six datasets described in section 2. all generated texts are post-processed as described in section 3.2. all systems are executed on an nvidia a100 gpu with 80gb ram. following the webnlg 2023 evaluation setup @xcite , we perform an automatic evaluation on the webnlg-based factual dataset in english, computing bleu @xcite , chrf++ @xcite , meteor @xcite , and bertscore @xcite . we compare our two systems against the best system proposed by @xcite , i.e. gpt-3.5 using a few-shot prompt with fixed examples. an additional human evaluation will be performed by the organisers of the shared task; at the time of writing, the results are not yet available. refer to the shared task report for more details. ## conclusion we explored the effectiveness of pretrained llms for data-to-text generation, focusing on two settings: llm prompting and llm fine-tuning with lora. we first conducted a testing phase comparing the performance of mistral 7b and falcon 40b models using various prompting strategies and fine-tuning techniques, evaluated on the webnlg 2020 validation set. the results demonstrated that fine-tuning with lora substantially enhances the performance of the mistral 7b model. this model outperformed all other tested systems, including falcon 40b. among the prompting strategies, few-shot in-context learning with dynamic examples based on the triple set length and predicates achieved the best results, indicating the importance of contextually relevant example selection. we submitted the two system settings, llm prompting + mt and llm fine-tuning with lora + mt, using mistral 7b to the gem 2024 shared task in english, chinese, german, russian, spanish, korean, hindi, swahili, and arabic. our findings highlight the potential of lora for efficient fine-tuning of llms, offering competitive performance close to state-of-the-art models like gpt-3.5, but with substantially smaller model sizes and reduced resource requirements. the success of dynamic example selection in prompting also underscores the need for tailored approaches to optimize model performance.
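a minimal sketch of the lora fine-tuning setup described above, using the hugging face peft library; the rank, target modules, and prompt format are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: wrap Mistral 7B with low-rank adapters so that only a small
# number of parameters participate in training.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumption: attention projections
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapters are trainable

# hypothetical data-to-text prompt for training/inference
prompt = ("Convert the RDF triples into fluent English text.\n"
          "Triples: Alan_Bean | occupation | Test_pilot\n"
          "Text:")
```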
| 33,473
|
14
| 2021
|
On Releasing Annotator-Level Labels and Information in Datasets
|
A common practice in building NLP datasets, especially using crowd-sourced annotations, involves obtaining multiple annotator judgements on the same data instances, which are then flattened to produce a single “ground truth” label or score, through majority voting, averaging, or adjudication. While these approaches may be appropriate in certain annotation tasks, such aggregations overlook the socially constructed nature of human perceptions that annotations for relatively more subjective tasks are meant to capture. In particular, systematic disagreements between annotators owing to their socio-cultural backgrounds and/or lived experiences are often obfuscated through such aggregations. In this paper, we empirically demonstrate that label aggregation may introduce representational biases of individual and group perspectives. Based on this finding, we propose a set of recommendations for increased utility and transparency of datasets for downstream use cases.
|
https://aclanthology.org/2021.law-1.14
|
## introduction obtaining multiple annotator judgements on the same data instances is a common practice in nlp in order to improve the quality of final labels @xcite . cases of disagreement between annotations are often resolved through majority voting, averaging, or adjudication in order to derive a single "ground truth", often with the aim of training supervised machine learning models. however, in relatively subjective tasks such as sentiment analysis or offensiveness detection, there often exists no single "right" answer @xcite . enforcing a single ground truth in such tasks sacrifices valuable nuances about the task that are embedded in annotators' assessments of the stimuli, especially their disagreements @xcite . annotators' socio-demographic factors, moral values, and lived experiences often influence their interpretations of language, especially in subjective tasks such as identifying political stances @xcite , sentiment @xcite , and online abuse @xcite . for instance, feminist and anti-racist activists systematically disagree with crowd workers in their hate speech annotations @xcite . similarly, annotators' political affiliation is shown to correlate with how they annotate the neutrality of political stances @xcite . a potential adverse effect of majority voting in such cases is that it may sideline minority perspectives in data. in this paper, we analyze annotated data for eight different tasks across three different datasets to study the impact of majority voting as an aggregation approach. we answer two questions. our analysis demonstrates that in the annotations for many tasks, the aggregated majority vote does not uniformly reflect the perspectives of all annotators in the annotator pool. for many tasks in our analysis, a significant proportion of the annotators had very low agreement scores (0 to 0.4) with the majority vote label. while certain individual annotators' labels may have low agreement with the majority label due to valid/expected reasons (e.g., if they produced noisy labels), we further show that these agreement scores may vary significantly across different socio-demographic groups that annotators identify with. this finding has important fairness implications, as it demonstrates how the aggregation step can sometimes cause the final dataset to under-represent certain groups' perspectives. meaningfully addressing such issues in multiply-annotated datasets requires understanding and accounting for systematic disagreements between annotators. however, most annotated datasets often only release the aggregated labels, without any annotator-level information. we argue that dataset developers should consider including annotator-level labels as well as annotators' socio-demographic information (when viable to do so responsibly) when releasing datasets, especially those capturing relatively subjective tasks. inclusion of this information will enable more research on how to account for systematic disagreements between annotators in training tasks. however, the current practice in the nlp community continues to be applying different aggregation strategies to arrive at a single score or label that makes the data amenable to training and evaluating supervised machine learning models. oftentimes, datasets are released with only the final scores/labels, essentially obfuscating important nuances in the task. the information released about the annotations can be at one of the following four levels of information-richness.
firstly, the most common approach is one in which multiple annotations obtained for a data instance are aggregated to derive a single "ground truth" label, and these labels are the only annotations included in the released dataset (e.g., founta et al. ( 2018 )). the aggregation strategy most commonly used, especially in large datasets, is majority voting, although smaller datasets sometimes use adjudication by an 'expert' (often one of the study authors themselves) to arrive at a single label (e.g., in @xcite ) when there are substantial disagreements between annotators. these aggregation approaches rely on the assumption that there always exists a single correct label, and that either the majority label or the 'expert' label is more likely to be that correct label. what this fails to account for is the fact that in many subjective tasks, e.g., detecting hate speech, the perceptions of individual annotators may be as valuable as an 'expert' perspective. secondly, some datasets (e.g., jigsaw (2018); davidson et al. (2017)) release the distribution across labels rather than a single aggregated label. in binary classification tasks, this corresponds to the percentage of annotators who chose one of the labels. in multi-class classification, this may be the distribution across labels obtained for an instance. while this provides more information than a single aggregated label, it still does not retain annotator-level information. (table: instances, annotators, and individual annotations present in the datasets.) for hate-speech and emotion datasets, we use the binary label in the raw annotations, whereas for the sentiment dataset, we map the 5-point ordinal labels (-2, -1, 0, +1, +2) in the raw data to a binary distinction denoting whether the text was deemed positive or negative. while the emotion dataset contains annotations for 28 different emotions, in this work, for brevity, we focused on the annotations for only the six standard ekman emotions @xcite -anger, disgust, fear, joy, sadness, and surprise. in particular, we use the raw annotations for these six emotions, rather than the mapping of all 28 emotions onto these six emotions that @xcite use in some of their experiments. ## utility of annotator-level labels another argument in favor of retaining annotator-level labels is their utility in modeling disagreement during training and evaluation. @xcite and @xcite incorporated annotator disagreement in the loss functions used during training to improve predictive performance. @xcite and @xcite use a multi-task approach to incorporate annotator disagreements to improve machine translation and part-of-speech tagging performance, respectively. chou and lee (2019) and guan et al. (2018) developed learning architectures that model individual annotators as a way to improve performance. @xcite show the utility of detecting clusters of annotators in hate-speech detection based on how often they agree with each other. finally, davani et al. (2021) introduce a multi-annotator architecture that models each annotator's perspectives separately using a multi-task approach. they demonstrate that this architecture helps to model individual annotators' perspectives. ## discussion and conclusion building models to predict or measure subjective phenomena based on human annotations should involve explicit consideration for the unique perspectives each annotator brings forth in their annotations. annotators are not interchangeable; that is, they draw from their socially-embedded experiences and knowledge when making annotation judgments.
as a result, retaining their perspectives separately in the datasets will enable dataset users to account for these differences according to their needs. we demonstrated that annotation aggregation may unfairly disregard the perspectives of certain annotators, and sometimes certain socio-demographic groups. based on our analysis, we propose three recommendations aimed at avoiding these issues. annotator-level labels: we urge dataset developers to release the annotator-level labels, preferably in an anonymous fashion, and leave open the choice of whether and how to utilize or aggregate these labels for the dataset users. socio-demographic information: the socio-demographic identity of the annotators is crucial to ascertain that the datasets (and the models trained on them) equitably represent the perspectives of various social groups. we urge dataset developers to include socio-demographic information of annotators, when viable to do so responsibly. documentation about recruitment, selection, and assignment of annotators: finally, we urge dataset developers to document how the annotators were recruited, the criteria used to select them and assign data to them, and any efforts to ensure representational diversity, through transparency artefacts such as datasheets @xcite or data statements @xcite .
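a small sketch of the per-annotator agreement analysis described above, assuming a long-format table of (item, annotator, label) judgements; the data below is toy data, not one of the paper's datasets.

```python
# Sketch: compute each annotator's agreement with the majority-vote label.
from collections import Counter, defaultdict
from sklearn.metrics import cohen_kappa_score

rows = [  # (item_id, annotator_id, binary label)
    (0, "a1", 1), (0, "a2", 1), (0, "a3", 0),
    (1, "a1", 0), (1, "a2", 1), (1, "a3", 0),
    (2, "a1", 1), (2, "a2", 1), (2, "a3", 1),
]

# majority vote per item
by_item = defaultdict(list)
for item, _, label in rows:
    by_item[item].append(label)
majority = {item: Counter(labels).most_common(1)[0][0]
            for item, labels in by_item.items()}

# per-annotator agreement with the majority label
per_annotator = defaultdict(lambda: ([], []))
for item, annotator, label in rows:
    per_annotator[annotator][0].append(label)
    per_annotator[annotator][1].append(majority[item])

for annotator, (own, major) in sorted(per_annotator.items()):
    print(annotator, round(cohen_kappa_score(own, major), 2))
```

grouping these scores by the socio-demographic groups annotators identify with (when released responsibly) makes the representational gaps discussed above visible.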
| 10,444
|
20
| 2023
|
Multimodal Hate Speech Event Detection - Shared Task 4, CASE 2023
|
Ensuring the moderation of hate speech and its targets emerges as a critical imperative within contemporary digital discourse. To facilitate this imperative, the shared task Multimodal Hate Speech Event Detection was organized in the sixth CASE workshop, co-located at RANLP 2023. The shared task has two subtasks. Sub-task A required participants to pose hate speech detection as a binary problem, i.e., they had to detect whether the given text-embedded image contained hate or not. Similarly, sub-task B required participants to identify the targets of the hate speech, namely individual, community, and organization targets, in text-embedded images. For both sub-tasks, the participants were ranked on the basis of the F1-score. The best F1-scores in sub-task A and sub-task B were 85.65 and 76.34, respectively. This paper provides a comprehensive overview of the performance of the 13 teams that submitted results in Subtask A and the 10 teams in Subtask B.
|
https://aclanthology.org/2023.case-1.20
|
## introduction the rise of social media has altered the global communication and information landscape, allowing people from all walks of life to share their opinions and perspectives on a wide range of topics, including heated geopolitical events @xcite . this free-flowing exchange of ideas, however, has not been without difficulties. the rapid proliferation of hate speech, which includes harsh language, disrespectful statements, and discriminatory rhetoric directed at individuals or groups based on their ethnicity, nationality, or beliefs, is one of the most alarming concerns afflicting online platforms @xcite . in times of political crisis, such as the russia-ukraine crisis, the prevalence of hate speech becomes even more pronounced @xcite . its impact goes beyond dividing communities; it also brings about considerable concerns for sustaining peace and stability in regions facing conflict-related issues. text-embedded images have gained popularity due to their easy sharability and the combination of visual and textual elements, making them a common mode for information sharing @xcite @xcite . however, this convenience also has a downside: it amplifies the prevalence of hate speech in social media. to combat the propagation of hate content through text-embedded images, the identification of hate speech within such media holds significant importance @xcite @xcite . by detecting and curbing hate speech within these images, we can work towards maintaining a healthier digital environment. in an attempt to curb hate speech in the context of the russia-ukraine crisis, @xcite proposed a multimodal dataset of 4,723 text-embedded images annotated for the presence of hate speech, the direction of hate speech (targeted vs untargeted) and the targets of hate speech. building on this groundwork, and to attract greater attention toward the issue of hate speech in text-embedded images, we introduced a shared task at the case 2023 workshop (co-located with ranlp 2023) utilizing the dataset. the shared task has two subtasks: subtask a, which deals with the identification of hate speech, and subtask b, which deals with the identification of targets in hate speech. through this shared task, we intend to stimulate active engagement and collaboration in addressing the critical challenge of identifying and mitigating hate speech within the digital landscape, specifically in the context of text-embedded images. the rest of the paper is organized as follows: section 2 gives a brief overview of related work in multimodal hate speech classification. section 3 presents the subtasks of the shared task. section 4 describes the crisishatemm dataset in brief. section 5 describes the competition environment along with the evaluation metrics. section 6 sheds light on the methodologies used by the teams that submitted system description papers. section 7 gives a brief analysis of the system descriptions, and section 8 concludes the paper. ## related work the task of detecting hate speech in social media has gained significant traction, primarily focusing on text-based content @xcite . however, there have been fewer efforts in the classification of text-embedded images for hate speech in social media @xcite . in recent times, there has been a notable surge in scholarly interest towards identifying hate speech in memes or images containing text @xcite @xcite @xcite . memes often combine images and text with the intention of humor.
on the other hand, text-embedded images are essentially images that incorporate text within them. this category encompasses not only memes but also other forms of textual-visual content, such as screenshots taken from tv headlines. in these cases, the image itself serves to provide context, while the accompanying text conveys the information within that context. while meme analysis has been a focal point for researchers, the examination of hate speech in these text-embedded images deserves equal attention. the introduction of this shared task stems from the recognition of this research gap. similarly, the exploration of memes or multimodal textual-visual data has predominantly concentrated on the broader scope of general social media platforms; efforts to create dedicated datasets and conduct research within specific contexts have been quite limited. recently, some research efforts have sought to understand such multimodal textual-visual data for specific contexts and applications. for instance, @xcite investigated harmful memes and their targets in the context of the covid-19 pandemic. they labeled covid-19-related memes to indicate harmfulness and the targets of these harmful memes. expanding on this work, @xcite also studied memes related to the us election using the same labeling approach. additionally, @xcite introduced a dataset containing 10,244 memes critical of vaccines. these initiatives are gradually paving the way for future research that aligns with specific contexts. this shared task is also an attempt to attract the attention of the research community, encouraging their involvement in context-oriented investigations. ## task description according to @xcite , hate speech is a particular form of offensive language that uses stereotypes to express an ideology of hate. here, we assume that offensive language is a type of opinion-based information that is highly confrontational, rude, or aggressive @xcite , which may be expressed explicitly or implicitly @xcite . in the same setting, hate speech is a particular form of offensive language used against target groups, mostly based on their social identities. ## dataset in our shared task, we used the crisishatemm dataset @xcite . this dataset consists of a total of 4,723 text-embedded images centered around the russia-ukraine crisis @xcite . within these 4,723 text-embedded images, 2,058 did not have any instances of hate speech, while the remaining 2,665 contained elements of hate speech. among these 2,665 images with hate speech, a subset of 2,428 text-embedded images exhibited instances of targeted or directed hate speech. in our shared task, we used only the text-embedded images that exhibited directed hate speech and those that did not have any hate speech. thus, a total of 4,486 text-embedded images were used in our shared task. we split the dataset into train, evaluation, and test sets for both subtasks a and b in a stratified manner, maintaining a proportionate split ratio of approximately 80-10-10. ## evaluation and competition this section describes our competition environment, including ranking methods and other details regarding the competition. ## discussion the submissions from different participants gave interesting insights into various methods. in particular, transformer-based methods were seen to be more effective. most participants utilized bert-based variants to extract textual features from the dataset.
for the extraction of visual features, participants turned to vision transformers, clip @xcite , and established methods like inception-v3. the methodology proposed by @xcite suggested that syntactic and entity features are equally important for leveraging textual information from the dataset, particularly from instances related to the identification of the targets of hate speech. while it is important to comprehend the utility of transformer-based models, @xcite suggested that traditional machine learning algorithms can also give a satisfactory performance in hate speech classification. while their algorithm excelled in subtask a, addressing target identification remained challenging for such traditional machine learning approaches. a promising direction for future research is to explore the application of vision-language models specifically pretrained for the classification of hate speech in text-embedded images and memes. ## conclusion in conclusion, through our shared task at case 2023, we were able to contribute to promoting research and interest in hate speech and target classification in text-embedded images. the shared task was successful in attracting over 50 participants, who altogether made over 250 submissions on the test set. the highest performance was an f1-score of 85.65 in subtask a and an f1-score of 76.34 in subtask b. this shows that there is still scope for improvement in the tasks proposed in our shared task. building on the momentum of this successful shared task, we intend to continue it in the future with more subtasks in languages other than english. this expansion will aim to foster a more inclusive understanding of hate speech detection that goes beyond linguistic and cultural boundaries.
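a hedged sketch of the clip-based feature extraction that several teams used; the fusion by simple concatenation and the linear classification head are illustrative assumptions, not a specific team's system.

```python
# Sketch: joint visual-textual features for a text-embedded image.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))           # stand-in for a real image
ocr_text = "example text extracted by OCR"     # hypothetical OCR output

inputs = processor(text=[ocr_text], images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# concatenate the 512-d image and text embeddings, then classify
fused = torch.cat([out.image_embeds, out.text_embeds], dim=-1)
head = torch.nn.Linear(fused.shape[-1], 2)     # hate vs. not-hate (subtask a)
logits = head(fused)
```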
| 21,046
|
24
| 2025
|
Know-AI at TSAR 2025 Shared Task: Difficulty-aware Text Simplification System
|
Text simplification is an active research topic with applications in multiple domains. In a simplification pipeline, assessment of text difficulty plays a crucial role as a quality control mechanism: it acts as a critic and guides models to generate text at the difficulty level required by the user. This paper presents our Difficulty-aware Text Simplification System. We evaluate our pipeline using the TSAR shared task dataset and discuss challenges in constructing corpora for training models to assess text difficulty.
|
https://aclanthology.org/2025.tsar-1.24
|
## introduction text simplification is a widely studied task in natural language processing (nlp), with applications in accessibility, education, and communication. it is important in many applications where the users, e.g., non-native speakers, struggle to understand complex or standard language. the goal is to reduce the linguistic complexity of a text, while maintaining the original text's core meaning and coherence. increasingly, official legislation in europe (inclusion europe) requires government organizations, ngos and other public agencies to provide information to clients in clear and accessible form, including for readers who may be unable to understand standard language. we are motivated especially by applications of simplification in second-language (l2) education, where personalized learning is supported by adapting text to the learner's proficiency level @xcite . our simplification pipeline, shown in figure 1 , uses a critic consisting of two parts: (a) difficulty: it evaluates the difficulty level of a text simplified by a large language model (llm), and (b) semantic similarity: it checks how well the simplified text preserves the semantics/meaning of the original text. this framework was introduced in @xcite , in l2 education. in this paper, we adapt the framework for simplification in english. the pipeline iteratively attempts to generate a "simplified" version of an input text. if the generated text is above the target level of difficulty, then feedback, including the generated text and its currently assessed level, is sent back to the llm to revise the output. the pipeline makes several attempts at simplification to reach the target difficulty level. we experiment with several critics in the pipeline, including an open-source transformer-based model that classifies text by difficulty level, and a regression model that we train using english-language texts labeled with difficulty levels. the paper is organized as follows: section 2 gives a brief overview of related work. section 3 describes the shared task and the evaluation methods. section 4 presents the architecture of our simplification pipeline. section 4.1 describes the experiments with controlling the behavior of an llm via the difficulty critic. section 5 presents results and analysis. section 6 concludes the paper and discusses directions for future work. ## related work prior approaches to text simplification relied on assessment of text difficulty to identify sentences requiring simplification. for example, @xcite trained a model to detect linguistically complex sentences; @xcite developed readability assessment tools to support simplifying texts for low-literacy readers. readability metrics have also been incorporated directly into rule-based simplifiers: woodsend and lapata (2011) integrate the flesch-kincaid grade formula @xcite into optimization-based simplification. more recent approaches to simplification leverage readability predictors as feedback within generation loops. alkaldi and inkpen (2023) use a readability classifier in a reinforcement learning framework to iteratively simplify text until it reaches the desired difficulty. large-scale neural systems have combined readability prediction with controllable generation techniques to produce text at the target difficulty level @xcite . ## task description the shared task on readability-controlled text simplification @xcite involves simplifying english-language paragraphs written at upper-intermediate or advanced levels.
participants are required to produce simplified versions at a target readability, specified as a cefr level: common european framework of reference for languages @xcite . our experiments are based on the test dataset provided by the tsar shared task. the test set consists of english paragraphs at level b2 or higher, each associated with a target level (a1, a2, or b1). no training data and no reference simplifications are provided. the evaluation involves measuring multiple aspects of the simplified texts; these metrics are calculated using the official evaluation scripts released by the shared-task organizers with the test dataset. the semantic similarity in the evaluation scripts uses meaningbert @xcite . meaningbert is a bert-based semantic similarity model that measures how well meaning is preserved between two texts, particularly for tasks such as text simplification and paraphrase assessment. ## system overview we next describe how we use the critic model to guide the llm-based text simplification pipeline (see figure 1). the pipeline begins by determining the difficulty of a source text, either with a difficulty model or manual annotation. the text, together with the target cefr level and a prompt, is passed to an llm, which produces a candidate output. the critic model evaluates the candidate's difficulty; if it matches the target level, the process ends. otherwise, the llm is re-prompted with the previous output and the discrepancy from the target. this loop continues for up to n iterations, a predefined maximum that balances cost and quality. the system then outputs either a satisfactory simplification, or an error if the target is not reached. ## results and analysis in this section, we examine the results of simplification with different critic models. beyond exact-match accuracy, we assess how well the predicted difficulty levels match the intended simplification direction. the direction consistency metric measures whether predictions respect the target level ordering for each input. ## discussion and future work the effectiveness of our proposed pipeline depends on the choice of the difficulty assessment model used in the critic, since it guides the simplification process. in addition to the models above, we experimented with training our own difficulty assessment model. although this approach did not appear in our submissions for the shared task, it shows much promise for future work. this section summarizes the lessons learned from this attempt. first, since no training data were provided for the shared task, we construct a training, development and test set (test set 1) by taking an existing corpus described in @xcite and translating it from finnish into english, using the opus machine translation (mt) toolkit @xcite . notably, we found the opus models to be particularly strong at preserving the cefr levels of the original source text in the mt output. we also use the reference set provided by tsar as a second test set. second, following the methodology of katinskaia et al. (2025), we train a regression model to predict difficulty. we were unable to gather a sufficient amount of training data and tune our regression model in time for the actual tsar competition; therefore, as a fallback, we used alllang2-cefr2 rather than the regression model as a critic in our submission for the shared task. we next check how well difficulty prediction works on its own, apart from the simplification task.
for test set 1, the difficulty prediction results are in figures 6 and 7. the regression model shows a clear advantage over the alllang2-cefr2 model, exhibiting a clear step-wise pattern that aligns well with cefr levels. it consistently outperforms the baseline across all evaluation metrics. the evaluation metrics for difficulty prediction on test set 1 are shown in the top part of the results table; for test set 2, the evaluation metrics for difficulty prediction are in the bottom part. (figure 9: difficulty estimation using the regression model on test set 2.) several factors may compromise the performance of our regression model. first, the dataset is machine-translated, which may distort the true difficulty of the texts. ideally, training data would be manually annotated for difficulty; however, manual annotation is very complex and time-consuming. second, the translated dataset is still small, restricting the model's ability to generalize across different linguistic phenomena. in future work, we plan to extend the setup, which relies solely on gpt-4o for text simplification, to consider other models, including smaller models fine-tuned for the simplification task. we will investigate more advanced models to improve the assessment of difficulty, which is central for the simplification pipeline. larger, more accurate, and more diverse training datasets should further improve performance and generalization. ## lay summary this study investigates text simplification, in the context of the shared task on text simplification, accessibility, and readability (tsar). we present a difficulty-aware simplification pipeline based on large language models (llms) and small models for simplification assessment. we use text data in english, of varying levels of difficulty, ranging from a1 to c1 on the cefr scale. we evaluate performance according to several criteria, including error rates of difficulty assessment models in their assessment of the difficulty of texts in a held-out test set, and the success rates of the simplification pipeline, relative to reference texts provided by the organizers of the shared task. the paper (a) discusses the performance of a number of critic models for assessing the difficulty of a text, and (b) compares the performance of the simplification pipeline driven by the different critics. kristian woodsend and mirella lapata. 2011. learning to simplify sentences with quasi-synchronous grammar and integer programming. in proceedings of the conference on empirical methods in natural language processing, edinburgh, scotland.
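a minimal sketch of the critic loop from figure 1, assuming two placeholder callables: generate(prompt) for the llm and assess_level(text) for the difficulty critic; neither reflects the authors' exact interfaces.

```python
# Sketch: iteratively re-prompt the LLM until the critic accepts the output.

def simplify(source: str, target_level: str, generate, assess_level,
             max_iters: int = 5) -> str:
    prompt = (f"Simplify the following text to CEFR level {target_level}, "
              f"preserving its meaning:\n{source}")
    candidate = generate(prompt)
    for _ in range(max_iters):
        level = assess_level(candidate)
        if level == target_level:
            return candidate  # critic accepts: difficulty matches the target
        # feedback loop: send back the previous output and its assessed level
        prompt = (f"The text below was assessed at CEFR level {level}, but "
                  f"the target is {target_level}. Revise it accordingly:\n"
                  f"{candidate}")
        candidate = generate(prompt)
    raise RuntimeError("target difficulty level not reached within max_iters")
```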
| 41,003
|
150
| 2022
|
CQR-SQL: Conversational Question Reformulation Enhanced Context-Dependent Text-to-SQL Parsers
|
Context-dependent text-to-SQL is the task of translating multi-turn questions into database-related SQL queries. Existing methods typically focus on making full use of the history context or previously predicted SQL for current SQL parsing, while neglecting to explicitly comprehend the schema and conversational dependency, such as co-reference, ellipsis and user focus change. In this paper, we propose CQR-SQL, which uses auxiliary Conversational Question Reformulation (CQR) learning to explicitly exploit schema and decouple contextual dependency for multi-turn SQL parsing. Specifically, we first present a schema enhanced recursive CQR method to produce domain-relevant self-contained questions. Secondly, we train CQR-SQL models to map the semantics of multi-turn questions and auxiliary self-contained questions into the same latent space through a schema grounding consistency task and a tree-structured SQL parsing consistency task, which enhances the abilities of SQL parsing through adequate contextual understanding. At the time of writing, our CQR-SQL achieves new state-of-the-art results on two context-dependent text-to-SQL benchmarks, SParC and CoSQL.
|
https://aclanthology.org/2022.findings-emnlp.150
|
## introduction the text-to-sql task is one of the widely followed branches of semantic parsing, which aims to parse natural language questions over a given database into sql queries. previous works @xcite @xcite focus on the context-independent text-to-sql task. however, in reality, as users tend to prefer multi-turn interactive queries @xcite , the text-to-sql task based on conversational context is attracting more and more scholarly attention. the generalization challenge of the context-dependent text-to-sql task lies in jointly representing the multi-turn questions and schema. figure 1: an example of the context-dependent text-to-sql task demonstrating the phenomena of co-reference, ellipsis, and user focus change; the cqr module converts contextual questions into self-contained questions, which can be understood without the context. for context-dependent text-to-sql, it is common to train a model in an end-to-end manner that simply encodes the concatenation of the multi-turn questions and schema, as shown in figure 2 (a). to exploit context-dependence information, @xcite propose a dynamic relation decay mechanism to model the dynamic relationships between schema and question as the conversation proceeds. @xcite and @xcite leverage previously predicted sql queries to enhance current sql parsing. however, we argue that these end-to-end approaches provide inadequate guidance for the contextual dependency phenomena. ## proposed method in this section, we first formally define the context-dependent text-to-sql task and introduce the backbone network of cqr-sql. afterwards, the technical details of cqr-sql are elaborated in two subsections: schema enhanced recursive cqr and latent cqr learning for text-to-sql in context. ## experiments in this section, we conduct several experiments to assess the performance of the methods proposed in §2. ## conclusions we propose cqr-sql, a novel context-dependent text-to-sql approach that explicitly comprehends the schema and conversational dependency through latent cqr learning. the method introduces a schema enhanced recursive generation mechanism to generate domain-relevant self-contained questions, then trains models to map the semantics of self-contained questions and the multi-turn question context into the same latent space with a schema grounding consistency task and a sql parsing consistency task for adequate context understanding. experimental results show that cqr-sql achieves new state-of-the-art results on two classical context-dependent text-to-sql datasets, sparc and cosql.
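a hedged sketch of a consistency objective in the spirit described above: pull the model's distributions for the multi-turn context and the auxiliary self-contained question toward each other. the symmetric-kl form is an illustrative assumption, not the paper's exact loss.

```python
# Sketch: symmetric KL between two encodings' output distributions.
import torch
import torch.nn.functional as F

def consistency_loss(logits_context: torch.Tensor,
                     logits_self_contained: torch.Tensor) -> torch.Tensor:
    log_p = F.log_softmax(logits_context, dim=-1)
    log_q = F.log_softmax(logits_self_contained, dim=-1)
    kl_pq = F.kl_div(log_q, log_p.exp(), reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(log_p, log_q.exp(), reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)

# e.g. per-example scores over schema items or decoder actions
a = torch.randn(4, 100)
b = torch.randn(4, 100)
print(consistency_loss(a, b))
```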
| 16,602
|
71
| 2023
|
Exploring Zero and Few-shot Techniques for Intent Classification
|
Conversational NLU providers often need to scale to thousands of intent-classification models, where new customers often face the cold-start problem. Scaling to so many customers puts a constraint on storage space as well. In this paper, we explore four different zero and few-shot intent classification approaches under this low-resource constraint: 1) domain adaptation, 2) data augmentation, 3) zero-shot intent classification using intent descriptions with large language models (LLMs), and 4) parameter-efficient fine-tuning of instruction-finetuned language models. Our results show that all these approaches are effective to different degrees in low-resource settings. Parameter-efficient fine-tuning using the T-few recipe on Flan-T5 yields the best performance even with just one sample per intent. We also show that the zero-shot method of prompting LLMs using intent descriptions is very competitive.
|
https://aclanthology.org/2023.acl-industry.71
|
## introduction intent classification is the primary natural language understanding task for a virtual agent or a chatbot. providing intent utterances for training intent classification models is a laborious process. in this paper, we address this problem by exploring zero and few-shot intent identification using large language models (llms) as well as instruction-finetuned models. zero-shot and few-shot intent prediction completely remove or substantially reduce the work of providing intent utterances, respectively. we demonstrate that the following four approaches work well in practice for zero/few-shot intent classification. here is the outline of the rest of the paper. in section 2 we describe the related work. in section 3 we detail the datasets used. in section 4 we describe the four approaches covered in this work for zero/few-shot intent classification. finally, we conclude with observations in sections 5 and 6. ## related work recent work has successfully used domain adaptation and contrastive learning for few-shot intent classification. one approach is to use embeddings from a bert model @xcite pretrained on domain data to search for utterances belonging to new intents in the domain @xcite . in a similar vein, @xcite finetune a bert model on few-shot data using contrastive learning, which learns to discriminate between semantically similar sentences. our work on domain adaptation differs from these mainly due to our setting, which involves serving thousands of customers. for legal reasons, we cannot co-mingle data from these customers to pre-train a single model. instead, we pre-train a sentence encoder based on an intent taxonomy and out-of-the-box intents, which consist of human-generated synthetic data. in this setting, we can only train very lightweight models for each customer, e.g. a dense layer on top of a pre-trained sentence encoder. data augmentation is another widely used technique to solve the problem of data scarcity. recent work on data augmentation has focused on using multiple methods to improve model performance @xcite . llms like gpt-3 @xcite can be prompted to generate labeled training data for intent classification @xcite . the quality of training data generated using llms is highly dependent on the prompts. in this work, we show various prompt-based approaches that generate diverse data for training and boost the performance of intent classifiers. as the usage of conversational agents grows, it is important for them to generalize to new intents. recent work has focused on performing zero-shot intent detection on unseen intents and domains. using knowledge from ontologies or attributes @xcite can help in detecting and generalizing to new intents. a more recent approach by @xcite makes modifications to capsule networks to generalize to unseen domains. embeddings of intent descriptions have also been shown to be quite meaningful in generalizing to new intents and services @xcite . while these methods are effective, they all require training on an initial set of intents. large language models (llms) like gpt-3 @xcite and, more recently, instruction-finetuned models @xcite have shown good zero-shot performance on newly seen tasks without any prior training data on those tasks. in this work, we show that these models are also effective for zero-shot intent classification using just intent descriptions. ## datasets we use public and private intent classification datasets to benchmark different approaches.
for evaluation on a public dataset, we use the english train and test sets from massive for intent classification. massive contains utterances directed at a physical device, spanning 60 intents and 18 domains. for more details on the massive dataset @xcite , we encourage readers to refer to the original paper. we also use private benchmarking datasets internal to our company. these datasets contain various intents and utterances in the enterprise setting spanning 3 different domains: it service management (itsm), hr and customer service management (csm). the utterances are inspired by interactions between humans and chatbots and are typically queries from goal-oriented conversations where the user needs to resolve an issue. additionally, some of these datasets also contain out-of-scope (oos) utterances in their test set, i.e. utterances that do not belong to any intent, in order to benchmark the irrelevance detection of intent classification models. ## methodology in this section, we describe the various methods we evaluate for zero and few-shot learning. ## observations comparing results across the 4 approaches, we notice that all 4 approaches are effective in low-resource settings. we find that domain adaptation is a cheap option in terms of the size of the models, but it still requires 5-10 training utterances per intent to get accuracy above 70%. data augmentation using paraphrasing further helps in most cases by 2-4 percentage points. however, expanding to new domains requires sentence-pair data for training the sentence encoder, which can involve days of human labeling. zero-shot classification using intent descriptions with llms and instruction-finetuned models performs even better than domain adaptation with data augmentation and doesn't require any utterances to be configured per intent. however, a good description for each intent is required. additionally, these models can be expensive to operationalize. inference on flan-t5-xxl requires using a100 gpus. gpt-3 is not open-source and is based on a pricing model which can be expensive to scale to thousands of customers. parameter-efficient fine-tuning (peft) of instruction-finetuned models like flan-t5-xl and flan-t5-large offers the best performance across all methods, often by a large margin. moreover, these models are only a fraction of the size of gpt-3 and flan-t5-xxl and much easier to operationalize at scale with far fewer compute resources. ## conclusion in this paper, we addressed the task of zero and few-shot intent identification using large language models (llms). we presented four approaches, namely domain adaptation, data augmentation, zero-shot prediction with prompting, and parameter-efficient fine-tuning. our experimental results demonstrate that llms and larger instruction-finetuned language models are very effective in the zero-shot setting with in-context prompting. smaller instruction-finetuned models with adapters are even better when adapter-finetuned on just 1 or 3 examples per intent. we hope these results are useful for the practical deployment of conversational agents in low-resource settings, as well as for aiding non-practitioners in building their intent classification models. in the future, we plan to extend this work by domain adapting smaller instruction-finetuned models in a multi-task setting and exploring their zero-shot capabilities.
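a minimal sketch of the description-based zero-shot setup, assuming an instruction-following model behind a generate(prompt) callable; the intents and descriptions are invented examples, not the paper's data.

```python
# Sketch: build a zero-shot intent-classification prompt from descriptions.

intents = {
    "reset_password": "user wants to reset or recover an account password",
    "order_status": "user asks where their order or delivery currently is",
    "cancel_subscription": "user wants to stop a recurring subscription",
}

def zero_shot_prompt(utterance: str) -> str:
    catalog = "\n".join(f"- {name}: {desc}" for name, desc in intents.items())
    return ("Classify the user utterance into exactly one intent.\n"
            f"Intents:\n{catalog}\n"
            f"Utterance: {utterance}\nIntent:")

print(zero_shot_prompt("i forgot my login and can't get in"))
# the prompt is then passed to an instruction-finetuned model such as
# Flan-T5, or to GPT-3, and the generated intent name is the prediction
```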
| 20,554
|
499
| 2023
|
Reasoning Makes Good Annotators: An Automatic Task-specific Rules Distilling Framework for Low-resource Relation Extraction
|
Relation extraction is often challenged by insufficient labeled data. Previous methods exploit knowledge from unlabeled data by generating pseudo labels in a self-training pipeline, which suffers from a gradual drift problem. Logic rules, a transferable and explainable form of expert knowledge, have achieved promising success by improving the model with weak labels. But manually writing a comprehensive rule set is challenging and tedious. To alleviate the human labor of writing high-quality rules, in this work, we propose ARIA, an Automatic task-specific Rules distilling framework. Specifically, we guide the pre-trained language model to reason out rules as experts do and compose them into robust compound rules for data labeling. Besides, ARIA can continuously enrich the rule set to strengthen its labeling ability by discovering reliable model-labeled data for distinguishable rule generation. Experiments on two public datasets demonstrate the effectiveness of ARIA in a low-resource scenario.
|
https://aclanthology.org/2023.findings-emnlp.499
|
## introduction relation extraction is a fundamental task in natural language processing. training supervised models with manually annotated data is labor-intensive. this motivates methods for model learning under a low-resource setting with limited annotations. semi-supervised methods @xcite aim to explore knowledge from the unlabeled data for better model generalization. the self-training pipeline @xcite iteratively adds the model's high-confidence predictions over the unlabeled set to the training set and re-trains the model. however, the noise in the model-labeled data may accumulate during the training process (the gradual drift problem). logic rules are an explainable and transferable form of summarized knowledge, which can replace humans for weak label generation. since human-written rules @xcite are time-consuming and difficult to make comprehensive enough for emerging domains, some work attempts to generate logic rules automatically. for example, distant supervision methods @xcite extract the knowledge base's (kb's) facts as rules for data labeling. these methods label the sentences containing a specific entity pair with the kb's relation label regardless of the context, which easily generates noisy labels. thus, how to handle the data's context for accurate labeling deserves study. recently, pre-trained language models (plms) have shown broad cognitive capabilities that can be distilled into downstream tasks and work well even without any training data @xcite . specifically, some work @xcite proposes chain-of-thought prompts to exploit the plm's reasoning ability by guiding it to generate an intermediate natural language reasoning process, as humans do. the explainable reasoning process infers the association between the input and output and can be deemed the prerequisite for the output answer. motivated by this, given labeled instances as input, we guide the plm to imitate the human relation-reasoning manner and automatically summarize the key information supporting relation inference from the reasoning process into transferable rules. we propose an automatic task-specific logic rules distilling framework, aria, which leverages a plm to replace humans for the continuous discovery of high-quality labeling rules. as shown in figure 1 , starting from limited seed data, aria alternates between a data-to-rule stage and a rule-to-data stage. the former guides the plm to reason specific rules from labeled data by following human reasoning paths and asks the plm to merge them into compound rules. the latter applies the rules over the unlabeled set to improve the re model, which is leveraged to generalize beyond the existing rules. then we filter reliable model-labeled data for further high-quality rule generation. different from previous work, which summarizes rules based on the restricted knowledge of experts or knowledge bases, aria can continuously explore comprehensive rules by guiding the plm to imitate the human reasoning manner. there are two major challenges in automatically generating task-specific rules for accurate labeling: 1) how to guide the plm to follow the human reasoning manner and summarize the information crucial for relation inference into comprehensive rules; 2) how to diversify the reasoned rule set with high-quality rules to improve the labeling ability.
to address the first challenge, we guide the plm to imitate typical human reasoning chains (e.g., induction, abduction) by defining several types of meta-rule templates, which derive the key information related to relation inference that can be used for rule construction. to summarize rules from each datum, we guide the plm's reasoning with prompts built from the reasoning-specific meta-rule templates, and the output reasoning words are used to build different types of rules. since prior work @xcite shows that reasoning in different ways helps answer correctly, we compose the reasoning rules into compound rules and ask the plm to pick out the most comprehensive compositions for robust labeling. to address the second challenge, the re model, improved by the rule-labeled data, is used to generalize beyond the existing rules and alleviate the gradual drift problem. since the plm's reasoning rules conclude the information crucial for relation inference, for each relation we pick the model-labeled data that can generate rules consistent with this relation's existing rules and distinguishable from other relations' rules, by modeling the rules' relevance. specifically, we propose a graph-based data filter, which builds a graph over both model-labeled data and seed data to propagate their rules' features. for each relation, the model-labeled data with features close to its seed data and far from the others' are picked for further rule generation. compared with previous work, our method leverages the plm's broad knowledge, rather than human knowledge or the restricted knowledge of jointly trained modules, to discover data for high-quality rule generation. in summary, our contributions are four-fold. among them: embedding roberta as a safe and efficient annotator, aria attains labeling precision competitive with chatgpt; we also show the small-scale plm's potential to assist chatgpt in reliable reasoning, as in-context learning enhanced by our rules' representative information reduces chatgpt's hallucination and improves precision by up to 23.04%. weakly supervised methods. logic rules have been proposed to improve the model with weak labels. since manually written rules are expensive and difficult to make complete enough for emerging domains @xcite , many works discover rules automatically from kbs (@xcite; ye and ling, 2019). figure 2: the overall framework of aria. in each iteration, aria 1) leverages the reasoning rules generator to imitate the human reasoning manner and summarize rules from the labeled data; 2) asks the compound rules compiler to compose reasoning rules into robust compound rules; 3) utilizes compound rules to label data and enrich the training set; 4) learns a re model to predict on the unlabeled data and uses the graph-based data filter to select reliable model-labeled data for further rule generation in the next iteration. recently, language models have shown their ability on various tasks via prompting @xcite @xcite . since the scale of kbs is limited, prboost @xcite asks the plm by prompt to predict the relation directly and takes the predictions as rules. this rule construction manner does not capture the task-specific reasoning process, making the rules less transferable and explainable.
instead, aria builds rules in a fine-grained manner by guiding the plm along different reasoning paths. besides, prboost requires humans for rule selection, while aria can automatically pick the data that generate high-quality rules by modeling the reasoning rules' dependency. ## method we introduce aria in this section. given limited seed data, aria continuously distills specific rules via the plm for labeling over the unlabeled set and improves the task model with rule-labeled data. overview. as shown in figure 2, our framework iterates among four steps: 1) from data to rules: given a set of labeled data, a reasoning rules generator guides the plm to generate specific reasoning rules along different reasoning paths for each labeled datum. 2) from simple rules to compound rules: a compound rules compiler composes each datum's reasoning rules and asks the language model to pick the comprehensive compositions as compound rules. 3) from rules to data: the compound rules label the unlabeled data to enrich the training set. 4) from data to model: a re model is trained on the enriched set, and its reliable predictions are selected by the graph-based data filter. notice that in step 4) we pick model-labeled rather than rule-labeled data for further rule discovery, since, thanks to the model's generalization ability, they are less likely to accumulate repeating patterns in the rule set. for initialization, the seed data are taken as input in the first iteration; for iteration t + 1, the input labeled data are the filtered data output from iteration t. ## conclusion we propose aria, which guides the plm to summarize comprehensive rules as a human would. specifically, we build a reasoning rules generator to replace humans for high-quality rule generation and a compound rules compiler to compose the reasoning rules into robust compound rules.
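the graph-based data filter can be pictured as nearest-neighbor reasoning over rule embeddings. the numpy sketch below is a deliberate simplification under stated assumptions: it replaces the gat-based feature propagation described above with direct cosine similarity between precomputed rule embeddings, and the `margin` parameter is an illustrative choice.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def filter_reliable(model_labeled, seeds, margin=0.1):
    """Keep model-labeled items whose rule embedding is closer to the seed
    data of their predicted relation than to any other relation's seeds.

    model_labeled: list of (rule_embedding, predicted_relation)
    seeds: dict mapping relation -> list of seed rule embeddings
    """
    kept = []
    for emb, rel in model_labeled:
        own = max(cosine(emb, s) for s in seeds[rel])
        other = max(
            cosine(emb, s)
            for r, group in seeds.items() if r != rel
            for s in group
        )
        # a margin separates "consistent and distinguishable" rules
        if own - other > margin:
            kept.append((emb, rel))
    return kept
```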
| 24,587
|
320
| 2,021
|
WER-BERT: Automatic WER Estimation with BERT in a Balanced Ordinal Classification Paradigm
|
Automatic Speech Recognition (ASR) systems are evaluated using Word Error Rate (WER), which is calculated by comparing the number of errors between the ground truth and the transcription of the ASR system. This calculation, however, requires manual transcription of the speech signal to obtain the ground truth. Since transcribing audio signals is a costly process, Automatic WER Evaluation (e-WER) methods have been developed to automatically predict the WER of a speech system by only relying on the transcription and the speech signal features. While WER is a continuous variable, previous works have shown that positing e-WER as a classification problem is more effective than regression. However, while converting to a classification setting, these approaches suffer from heavy class imbalance. In this paper, we propose a new balanced paradigm for e-WER in a classification setting. Within this paradigm, we also propose WER-BERT, a BERT based architecture with speech features for e-WER. Furthermore, we introduce a distance loss function to tackle the ordinal nature of e-WER classification. The proposed approach and paradigm are evaluated on the Librispeech dataset and a commercial (black box) ASR system, Google Cloud’s Speech-to-Text API. The results and experiments demonstrate that WER-BERT establishes a new state-of-the-art in automatic WER estimation.
|
https://aclanthology.org/2021.eacl-main.320
|
## introduction asr systems are ubiquitous now. they are available across applications such as voice assistants, assisted living or hands-free device usage. however, with the widespread usage of asr systems comes a heavy need for asr evaluation as well: to select, compare or improve alternate asr systems. wer is widely considered the standard metric for asr evaluation. a higher wer means a higher percentage of errors between the ground truth and the transcription from the system. wer is calculated by aligning the two text segments using string alignment in a dynamic programming setting. the formula is as follows: wer = err / n, where err is the total number of insertions, deletions and substitutions needed to convert the asr transcript into the ground truth, and n is the word count of the ground truth. the overall contributions of our paper can be summarized as follows: a new balanced paradigm for e-wer in a classification setting; wer-bert, a bert-based architecture with speech features for e-wer; and a distance loss function to tackle the ordinal nature of e-wer classification. ## related work while the importance of an automatic wer prediction system is immense, there have not been many works directly addressing it. related works exploring word-level confidence in asr prediction are abundant @xcite . there have also been works predicting the errors or error estimates in some form @xcite @xcite . these approaches either predict some of the errors involved in wer or alternate metrics to rate asr systems, such as accuracy or error-type classification; however, they lack calculation of the complete wer score. transcrater @xcite was one of the first works aiming to predict wer directly. they propose a neural network in a regression setting trained on various features such as parts of speech, language model, lexicon, and signal features. however, more recent approaches @xcite @xcite phrase wer prediction as a classification problem. @xcite propose two types of models based on the input available: the glass-box model, which uses internal features of the target asr system such as its confidence in transcribing the audio clip; and the black-box model, which only uses the transcripts and other features generated from the transcript, such as the word and the grapheme count. they propose a bag-of-words model along with additional transcription features such as duration for e-wer. the black-box setting is a harder task, since asr model features such as the average log likelihood and the transcription confidence can give a good indication of how many errors may have occurred during the automatic transcription. however, the black-box approach is not specific to the architectural design of an asr system and can be used with any asr system without access to its internal metrics, such as the aforementioned confidence. thus our proposed approach is trained in a black-box setting. @xcite build a cnn-based model for wer classification; we built models based on theirs as baselines to evaluate wer-bert's performance, as further explained in sections 4 and 6. asr errors often make a transcription ungrammatical or semantically unsound. identifying such constructs is also reflected in the corpus of linguistic acceptability (cola) @xcite , a dataset intended to gauge the linguistic competence of models by making them judge the grammatical acceptability of a sentence. cola is also part of the popular glue benchmark for natural language understanding @xcite . bert @xcite is known for outperforming previous glue state-of-the-art models, including on the cola dataset. ## dataset for our experiments, we have used the librispeech dataset @xcite , a diverse collection of audiobook data along with the ground text.
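as a concrete illustration of the wer formula above, here is a minimal python sketch computing wer via the standard dynamic-programming word alignment; this is the textbook calculation, not code from the paper.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / N,
    computed via Levenshtein alignment over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# example: one substitution over five words -> 20% wer
print(wer("the cat sat on mat", "the cat sit on mat"))  # 0.2
```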
the librispeech dataset has around 1000 hours of audio recordings with different levels of complexity. we pass these audio clips through an asr system to get transcripts, and the wer is calculated by comparing them with the ground text. this paper reports findings from experiments run with google cloud's speech-to-text api. we chose this commercial asr system, rather than reporting results on an internal asr system, since it is easily accessible through the google api and the results are reproducible. for our experiments, we have used the 10- and 100-hour datasets and made a 60:20:20 split into train, dev and test sets for each dataset. as shown in equation 1, the wer of an utterance is the fraction obtained by dividing two integers: the errors per sentence (err), i.e., the total number of insertions, deletions and substitutions needed to convert an asr's transcript to the ground text, and the word count of the ground text (n). since the wer of a sentence is a continuous variable between 0 and 1 (mostly), a common way to model this is through a regression model. @xcite instead present a way to turn this into a classification problem for e-wer. they experiment with various combinations of text and audio signal inputs and show that a classification approach outperforms its corresponding regression approach trained on the same inputs. elloumi et al. (2018)'s approach estimates wer directly with a 6-class classification model (with classes corresponding to 0%, 25%, 50%, 75%, 100% and 150%); once the model is trained, the predicted class is mapped back to its corresponding wer value. ## wer-bert in this section we explain our proposed architecture, wer-bert, which is primarily made of four sub-networks; the architecture is shown in detail in figure 3. signal sub-network: elloumi et al. (2018) use the raw signal of the audio clip to generate features such as mfcc and the mel spectrogram. these features are commonly used in the design of asr systems, particularly systems which use an acoustic model, and they aid model performance. the signal features are passed through the m18 architecture @xcite , a popular deep convolutional neural network (cnn) used for classification with audio signal features. this cnn model has 17 convolutional + max-pooling layers followed by global average pooling; l2 regularization of 1e-4 and batch normalization are added after each of the convolutional layers. numerical features sub-network: ali and renals (2018)'s black-box models had two major components: text input and numerical features. these numerical features are important to the model as they contain information regarding the number of errors. for instance, asr systems make errors if a user speaks too fast or too slow, and this is directly reflected in the duration and word count features. the numerical features we have used are word count, grapheme count and duration. these features are concatenated and passed through a simple feed-forward network which upscales the numerical features fed into the model (from 3 to 32). bert: bidirectional encoder representations from transformers (bert) @xcite is a pre-trained unsupervised natural language processing model. it is a masked language model which has been trained on a large corpus including the entire wikipedia corpus. the transcription samples from the asr system are passed through the associated tokenizer, which gives a contextual representation for each word.
the tokenizer also adds two special tokens: the [cls] token at the beginning and the [sep] token at the end of the sentence. we have used the bert-large uncased variant, which has 24 stacked transformer @xcite encoders. it gives an output of shape (sequence length x 1024), of which only the 1024-dimensional output corresponding to the [cls] token is used. in wer-bert, the bert weights are fine-tuned with the rest of the architecture during training. feed-forward sub-network: this sub-network is a deep fully connected network used to concatenate and process the features generated by the sub-networks preceding it (bert, the numerical sub-network and the signal sub-network). it has 4 hidden layers (512, 256, 128 and 64 neurons) followed by the output softmax layer. dropout regularization is added to prevent overfitting, considering the large number of parameters. to account for outputs from the eclectic sub-networks with disparate distributions, we further add layer normalization (ba et al., 2016) before concatenation. normalization is important to lessen the impact of any bias the network may learn towards one or the other representation. distance loss for ordinal classification: typical classification problems deal with classes which are mutually exclusive and independent, such as sentiment prediction or whether an image is a cat or a dog. in such a setting, classification accuracy is the most important metric and there is no relation or relative ordering between the classes. however, e-wer in a classification setting is an ordinal classification problem @xcite . previous approaches which propose wer estimation as a classification task ignore this idea @xcite @xcite . while classification accuracy is important, it is more important that, when a sample is misclassified, the predicted label is close to the true label. for instance, if the true label corresponds to the wer class of 0.1, a prediction of 0.2 and a prediction of 0.7 are treated the same in the current classification scenario.
since we want the prediction to be as close as possible, if not exactly the same, we introduce a "distance" loss that penalizes a misclassification in proportion to how far the predicted class lies from the true class (a hedged sketch is given below). the following examples of ground truth vs. google cloud speech-to-text output, with true and predicted wer in %, illustrate the task:

| ground truth | google cloud's speech-to-text transcription | true wer | predicted wer |
| --- | --- | --- | --- |
| one historian says that an event was produced by napoleon's power another that it was produced by alexander's | when is dorian says that an event was produced by napoleon's power another that it was produced by alexander's | 16.7 | 16.5 |
| rynch watched dispassionately before he caught the needler jerking it away from the prisoner the man eyed him steadily and his expression did not alter even when rynch swung the off world weapon to center its sights on the late owner | wrench watch dispassionately before he caught a kneeler jerking it away from the prisoner the man i can steadily and his expression did not alter even when wrench swampy off world weapon to center its sights on the late owner | 21.9 | 22.1 |
| of acting a father's part to augustine until he was fairly launched in life he had a child of his own | acting a father's part 2 augustine until he was fairly launched in life | 42.8 | 42.7 |
| supported by an honorable name how could she extricate herself from this labyrinth to whom would she apply to help her out of this painful situation debray to whom she had run with the first instinct of a woman towards the man she loves and who yet betrays her | supported by an honorable name how could you extricate herself in this labyrinth to whom would she apply to help her out of this painful situation dubray to whom should run the first instinct of a woman towards the man she loves and who yep betrays her | 14.3 | 14.4 |
| seventeen twenty four | 1724 | 100.0 | 30.7 |
| saint james's seven | st james 7 | 100.0 | 32.1 |
| mamma says i am never within | mama says i am never with him | 50.0 | 13.4 |

## experiments and baselines for each of the experiments below, the training is repeated for 10 runs and we report the average performance of all runs on the test set. for all the experiments, we use cross-entropy as the loss function and the mae of wer as the evaluation metric (refer to section a.1 of the appendix for the tuning of the distance loss hyperparameter α). ## results and discussion comparing figures 4 and 5, we see that the wer-bert model performs much better in the lower and mid wer regions than the balanced cnn model. ## conclusion we propose wer-bert for automatic wer estimation. while bert is an effective model, the addition of speech signal features boosts performance. phrasing wer classification as an ordinal classification problem, trained with a custom distance loss, encodes information about the relative ordering of the wer classes into training. finally, we propose a balanced paradigm for training wer estimation systems. training in a balanced setting allows the proposed model to predict wer adequately even in regions where samples are scarce. furthermore, this balanced paradigm is independent of the wer prediction model, asr system and speech dataset, making it efficient and scalable.
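the exact form of the distance loss is not reproduced in this excerpt, so the pytorch sketch below is an assumption-laden illustration of the idea only: cross-entropy plus a penalty that grows with the ordinal gap between predicted and true wer classes. the weighting scheme and the hyperparameter alpha are placeholders, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distance_loss(logits, targets, alpha=1.0):
    """Cross-entropy plus an ordinal penalty: the expected absolute
    class distance under the predicted distribution. Hypothetical
    formulation; the paper's exact loss may differ."""
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=-1)
    classes = torch.arange(logits.size(-1), device=logits.device).float()
    # |i - y| for every class i, per example
    dist = (classes.unsqueeze(0) - targets.unsqueeze(1).float()).abs()
    penalty = (probs * dist).sum(dim=-1).mean()
    return ce + alpha * penalty

# usage: 6 wer classes, batch of 2
logits = torch.randn(2, 6)
targets = torch.tensor([1, 4])
print(distance_loss(logits, targets))
```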
| 8,510
|
35
| 2,025
|
Pensez: Less Data, Better Reasoning – Rethinking French LLMs
|
Large language models (LLMs) have demonstrated remarkable capabilities in various natural language processing tasks. However, achieving strong performance in specialized domains such as mathematical reasoning and non-English languages often requires extensive training. This paper investigates strategic fine-tuning on a small, high-quality bilingual dataset to improve both the reasoning capabilities and the French-language proficiency of an LLM. We demonstrate improvements in mathematical reasoning using only 2,000 carefully selected samples. These results challenge the prevailing assumption that massive datasets are a prerequisite for strong reasoning performance in LLMs.
|
https://aclanthology.org/2025.jeptalnrecital-taln.35
|
## training reasoning model we fine-tuned the qwen2.5 7b instruct model on the pensez training data. to guide the model in producing step-by-step reasoning, we incorporated special tokens into the training data to mark these reasoning sequences. the training process leveraged deepspeed zero-3 (rasley et al., 2020) and flashattention-2 (dao, 2023) to improve training efficiency and stability. furthermore, neftune (jain et al., 2023) was applied by adding noise to word embeddings during training, with the aim of enhancing model robustness and generalization. detailed hyper-parameters are provided in appendix b. ## evaluation setup to evaluate pensez 7b, we design an evaluation framework that assesses its reasoning capabilities and knowledge comprehension across english and french. this balanced approach ensures the model excels in complex problem-solving without sacrificing broad understanding, a critical consideration given its bilingual fine-tuning. below, we describe the benchmarks selected for english, french, and bilingual tasks, followed by the evaluation methodology. french benchmarks: to confirm linguistic parity, we evaluate performance in french. for reasoning, math hard lv5 (mohamad alhajar, 2024), a french-translated variant of the math500 dataset @xcite , features only level-5 difficulty competition math problems, probing the model's mathematical reasoning in a second language. for knowledge understanding, the french version of boolq (clark et al., 2019) presents complex, non-factoid questions requiring entailment-like inference, testing deeper comprehension beyond simple recall.
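neftune's embedding-noise trick is simple enough to sketch. the snippet below is a minimal pytorch illustration of the published neftune formulation (uniform noise scaled by alpha over the square root of sequence length times embedding dimension); it is a generic sketch, not the pensez training code, and alpha is an illustrative value.

```python
import torch

def neftune_noise(embeddings: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    """Add NEFTune-style uniform noise to token embeddings during training.

    embeddings: (batch, seq_len, dim) output of the embedding layer.
    Noise magnitude follows the NEFTune paper: alpha / sqrt(seq_len * dim).
    """
    batch, seq_len, dim = embeddings.shape
    scale = alpha / (seq_len * dim) ** 0.5
    noise = torch.empty_like(embeddings).uniform_(-1.0, 1.0) * scale
    return embeddings + noise

# usage: perturb a batch of embeddings before the transformer blocks
emb = torch.randn(2, 128, 4096)
noisy = neftune_noise(emb, alpha=5.0)
```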
| 38,574
|
1
| 2,025
|
ArabicSense: A Benchmark for Evaluating Commonsense Reasoning in Arabic with Large Language Models
|
Recent efforts in natural language processing (NLP) commonsense reasoning research have led to the development of numerous new datasets and benchmarks. However, these resources have predominantly been limited to English, leaving a gap in evaluating commonsense reasoning in other languages. In this paper, we introduce the ArabicSense Benchmark, which is designed to thoroughly evaluate the world-knowledge commonsense reasoning abilities of large language models (LLMs) in Arabic. This benchmark includes three main tasks: first, it tests whether a system can distinguish between natural language statements that make sense and those that do not; second, it requires a system to identify the most crucial reason why a nonsensical statement fails to make sense; and third, it involves generating explanations for why statements do not make sense. We evaluate several Arabic BERT-based models and causal LLMs on these tasks. Experimental results demonstrate improvements after fine-tuning on our dataset. For instance, AraBERT v2 achieved an 87% F1 score on the second task, while Gemma and Mistral-7b achieved F1 scores of 95.5% and 94.8%, respectively. For the generation task, LLaMA-3 achieved the best performance with a BERTScore F1 of 77.3%, closely followed by Mistral-7b at 77.1%. All code and the benchmark will be made publicly available at https://github.com/.
|
https://aclanthology.org/2025.wacl-1.1
|
## arabicsense: a new benchmark dataset the aim of this work is twofold: to create a dataset for evaluating arabic commonsense reasoning in llms and to improve their performance in this area. to achieve this, we generate diverse, high-quality data specifically designed for training llms in arabic commonsense reasoning. this section outlines the methodology used to create the arabicsense dataset, followed by the human validation process and an analysis of the dataset. ## conclusion in this paper, we introduced arabicsense, the first comprehensive benchmark designed to evaluate the commonsense reasoning abilities of large language models (llms) in arabic. through the creation of three distinct tasks: commonsense validation (task a), commonsense explanation (task b), and commonsense explanation generation (task c), we enable a systematic evaluation of how well arabic bert-based models and causal llms handle arabic commonsense reasoning.
| 41,101
|
133
| 2,025
|
NLP-ADBench: NLP Anomaly Detection Benchmark
|
Anomaly detection (AD) is an important machine learning task with applications in fraud detection, content moderation, and user behavior analysis. However, AD is relatively understudied in a natural language processing (NLP) context, limiting its effectiveness in detecting harmful content, phishing attempts, and spam reviews. We introduce NLP-ADBench, the most comprehensive NLP anomaly detection (NLP-AD) benchmark to date, which includes eight curated datasets and 19 state-of-the-art algorithms. These span 3 end-to-end methods and 16 two-step approaches that adapt classical, non-AD methods to language embeddings from BERT and OpenAI. Our empirical results show that no single model dominates across all datasets, indicating a need for automated model selection. Moreover, two-step methods with transformer-based embeddings consistently outperform specialized end-to-end approaches, with OpenAI embeddings outperforming those of BERT. We release NLP-ADBench at https://github.com/USC-FORTIS/NLP-ADBench, providing a unified framework for NLP-AD and supporting future investigations.
|
https://aclanthology.org/2025.findings-emnlp.133
|
## introduction anomaly detection (ad) is a fundamental area in machine learning with diverse applications in web systems, such as fraud detection, content moderation, and user behavior analysis @xcite . substantial progress has been achieved in ad for structured data such as tabular, graph, and time-series data @xcite @xcite , but its extension to natural language processing (nlp) remains relatively underexplored @xcite . this gap limits our ability to identify harmful content, phishing attempts, and spam reviews. for instance, detecting abusive or threatening language is crucial for ensuring that social media platforms and online forums remain safe environments for users @xcite . likewise, detecting anomalous product reviews or descriptions in e-commerce is important for preserving user trust and platform credibility @xcite . however, many standard ad methods are designed for numeric or categorical data and are not easily adapted to unstructured text @xcite . existing studies on nlp-specific ad are limited in both dataset variety and algorithmic range @xcite @xcite , leaving open questions about which approaches work best under different conditions. these gaps lead to a central research question: how can we systematically evaluate and compare diverse ad methods across real-world text datasets, and what insights can be gained to guide future development in nlp-based ad? our proposal and key contributions. we introduce nlp-adbench, the most comprehensive benchmark for nlp-ad tasks. nlp-adbench offers four major benefits compared to prior work @xcite : (i) eight real-world datasets covering a wide range of web use cases; (ii) 19 advanced methods that apply standard ad algorithms to language embeddings or use end-to-end neural architectures; (iii) detailed empirical findings that highlight new directions for nlp-ad; and (iv) fully open-source resources, including datasets, algorithm implementations, and more, aligning with the resources and evaluation track. key insights/takeaways (see details in §3). our comprehensive experiments reveal: (i) no single model dominates across all datasets, showing the need for model selection; (ii) transformer-based embeddings substantially boost two-step ad methods (e.g., lunar @xcite and lof @xcite ) relative to end-to-end approaches; (iii) high-dimensional embeddings (e.g., from openai) improve detection performance, but also raise computational overhead; and (iv) dataset-specific biases and human-centered anomaly definitions remain challenging for building robust and widely applicable nlp-ad systems. ## conclusion we present nlp-adbench, the most comprehensive benchmark for contextual nlp anomaly detection (nlp-ad), evaluating 19 state-of-the-art algorithms across 8 diverse datasets. our findings establish the superiority of two-step methods leveraging transformer-based embeddings, such as openai + lunar, over end-to-end approaches, demonstrating the power of hybrid strategies for handling complex nlp anomaly detection tasks. by combining advanced text embeddings with traditional anomaly detection methods, nlp-adbench provides a robust and flexible framework that sets a new standard for evaluating nlp-ad systems. additionally, we offer actionable insights into model performance, dataset variability, and embedding utilization, paving the way for future research.
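a two-step nlp-ad method of the kind benchmarked here is straightforward to sketch: embed each document with a pre-trained encoder, then run a classical detector on the vectors. the snippet below is a minimal illustration using sentence-transformers and scikit-learn's local outlier factor; the model name, toy texts, and neighbor count are illustrative choices, not nlp-adbench's exact configuration.

```python
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import LocalOutlierFactor

# step 1: map raw text to dense embeddings with a pre-trained encoder
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice
train_texts = ["normal review one", "normal review two", "normal review three"]
test_texts = ["another normal review", "buy cheap pills now!!!"]
train_emb = encoder.encode(train_texts)
test_emb = encoder.encode(test_texts)

# step 2: fit a classical anomaly detector on the embedding space
detector = LocalOutlierFactor(n_neighbors=2, novelty=True)
detector.fit(train_emb)

# negative predictions flag anomalies; score_samples gives raw scores
print(detector.predict(test_emb))        # 1 = inlier, -1 = anomaly
print(detector.score_samples(test_emb))  # lower = more anomalous
```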
| 36,810
|
10
| 2,019
|
Autism Speech Analysis using Acoustic Features
|
Autism speech has distinct acoustic patterns, different from normal speech. Analyzing acoustic features derived from the speech of children affected with autism spectrum disorder (ASD) can help its early detection. In this study, a comparative analysis of the discriminating acoustic characteristics is carried out between ASD affected and normal children speech, from speech production point of view. Datasets of English speech of children affected with ASD and normal children were recorded. Changes in the speech production characteristics are examined using the excitation source features F0 and strength of excitation (SoE), the vocal tract filter features formants (F1 to F5) and dominant frequencies (FD1, FD2), and the combined source-filter features signal energy and zero-crossing rate. Changes in the acoustic features are compared in the five vowels’ regions of the English language. Significant changes in few acoustic features are observed for ASD affected speech as compared to normal speech. The differences between the mean values of the formants and dominant frequencies, for ASD affected and normal children, are highest for vowel /i/. It indicates that ASD affected children have possibly more difficulty in speaking the words with vowel /i/. This study can be helpful towards developing systems for automatic detection of ASD.
|
https://aclanthology.org/2019.icon-1.10
|
## introduction asd is a pervasive developmental disorder, defined clinically by observing abnormalities in three areas: communication, social reciprocity, and hyperfocus or reduced behavioral flexibility @xcite @xcite . studies show that at least 50% of the asd population tends to show atypical acoustic patterns in their speech, and these persist throughout the improvement of other language aspects @xcite @xcite . in fact, the exact characteristics of autism and its underlying mechanisms are also unclear @xcite . according to one study, a prevalence of 1 in 150 individuals with autism was reported in 2002, which became 1 in 68 in 2014 @xcite . it is reported that there are tens of millions of individuals with asd worldwide, affecting approximately 1.5% of the total population @xcite . communication impairments, abnormal voice quality and disturbances of prosody are some of the most important aspects among individuals with asd who speak @xcite . individuals with asd speak with distinctive acoustic patterns in their speech, and as a result they face social interaction deficits @xcite . the language impairment in autism has been attributed to a primary linguistic disorder, with a focus on pragmatic impairments @xcite . besides, the speech of children with asd is reported as improperly modulated, wooden, and dull @xcite . in fact, in many cases, a significant spoken language delay and repetitive language can also be encountered @xcite . in general, normal children start establishing their vocabularies at the age of two years, whereas children with asd may not be able to do the same @xcite . previous studies were mostly based on either speech prosody or unusual suprasegmental features of the speech production of children with asd @xcite . for example, in @xcite , the authors reported the segmental and suprasegmental speech features of individuals with high-functioning autism (hfa). also, some studies used objective measures to quantify speech-related issues in autism @xcite . some of the most significant analyses based on pitch features of individuals with asd were reported in @xcite , @xcite , etc., where each study reported a different result from the others. for instance, in @xcite , the authors reported a higher pitch value for asd children as compared with normal children, whereas in @xcite , the authors reported a lower pitch value. besides, in the case of intensity-based analyses, some studies indicated no significant differences between asd and normal children @xcite . likewise, based on duration (syllable duration, utterance duration, etc.), voice patterns, speech rate, etc., researchers have done some significant analyses on individuals with asd @xcite @xcite . however, none of the previous studies focused only on english vowels, especially as pronounced by non-native indian english speakers with asd. also, many robust speech features like dominant frequencies (fd1, fd2), strength of excitation (soe), etc., had not been considered in previous studies. therefore, in this study, we have considered all these points. this paper analyzes autism speech, i.e., the speech signal of children with asd, by differentiating it from the speech of normal children. differences are made in terms of the speech production features of the asd and the normal children.
here, only the english vowels /a/, /e/, /i/, /o/, and /u/ are taken into consideration, because of their relatively longer duration in the case of children with asd. also, the production of vowel sounds by an individual is not a random process; hence it is important to find characteristics of the speech production mechanism of children with asd during the pronunciation of vowel sounds. this study on analyzing the speech production characteristics of children with asd has high importance, because it may play a vital role in improving the communication impairments associated with asd. in addition, current diagnostic criteria for asd do not include any atypical vocalizations @xcite ; hence, this study can contribute towards a diagnostic marker to identify asd. this study consists of four major steps. firstly, two speech signal datasets were collected, by recording the sound files of the asd and the normal children. secondly, unwanted signal parts were removed, and the speech signal files were arranged in two different databases for the asd and the normal children. thirdly, speech signal processing methods were applied on the collected datasets to extract the selected production features. finally, results were produced by differentiating between the asd and the normal children in terms of their speech production features. the rest of the paper is organized as follows. details about the two collected datasets of the asd and the normal children are discussed in section 2. next, the signal processing methods and features used for the analyses are discussed in section 3. section 4 presents key results and observations. then, section 5 discusses the analyses of the observed results from a speech production point of view. section 6 presents key contributions. lastly, section 7 presents conclusions, along with the scope of future work on this topic. ## speech datasets of asd and normal children two speech signal datasets in the english language were recorded for this study: one dataset contains the speech samples of 13 children with asd, and the other contains the speech samples of 20 normal children. details of both datasets are given in table 1. only children above the age of 3 years were considered, since that is the age when a child begins to show delays in developmental milestones @xcite . another reason was that the current study only focused on verbal children. besides, in the case of the children with asd, it was made sure by a well-experienced doctor and a psychologist that the children considered were diagnosed with asd. the children with asd considered for the data collection met the dsm-iv diagnostic criteria @xcite . furthermore, all the children with asd considered here had distinctive acoustic patterns in their speech during the entire period of data collection. however, the normal children did not have any such issues and were living a normal life. speech samples were recorded every week (once or twice), over a period of more than 1 year. recordings took place in a noise-free empty room, which did not have any object that could distract the children. also, the neutral emotional state of the children was affirmed during all the data collection sessions. the asd and the normal children were asked to name in english a set of 25 specifically selected daily-life pictures, shown to them along with each picture's name in english on a laptop. the pictures consisted of animals, vegetables, flowers, and english numbers. all the children were asked to pronounce only the object's name as a word, presented to them in the form of a picture.
each child's first response was elicited by asking them to pronounce the picture's name. then, we kept changing the pictures one by one, while the children named the object shown in each picture. each child was asked to name the same set of pictures in every recording session. five different pictures were selected for each of the five english vowels, and the names of all the pictures were either in consonant-vowel-consonant (cvc) or consonant-vowel-vowel-consonant (cvvc) word format. the total utterances of 25 words by each child (5 vowels × 5 words) were recorded in each of two such sessions in a day. a roland r-26 digital audio recorder was used with a 48 khz sampling rate to record the speech samples, and a distance of 25 cm was maintained between the recorder and the speaker's mouth. our collected datasets have immense importance for several reasons. firstly, all the children considered here were non-native indian english speakers, whereas previous studies like @xcite , @xcite , @xcite , @xcite , etc., did not consider non-native indian english-speaking children with asd. secondly, in previous studies, datasets were mostly collected from social interaction @xcite , constrained production @xcite and spontaneous production @xcite , but here the datasets were recorded differently, as described earlier in this section. ## signal processing methods and features the production characteristics of the speech signals of the asd and the normal children are differentiated by examining changes in the source features, the vocal tract system features and the combined source-filter features. the source features f0 and strength of excitation (soe), and the vocal tract filter features dominant frequencies (fd1, fd2) and the first five formants (f1 to f5) are examined. the combined source-filter features signal energy (e) and zero-crossing rate (zcr) are also examined. for each speech feature, the mean (µ) values are computed. the mean values are computed for each english vowel by taking the average of all the calculated values of a particular speech feature, and this procedure is followed for each speaker. besides, the µsoe, µe and µzcr values are multiplied by 100, 1000, and 1000, respectively, for readability. (a minimal feature-extraction sketch is given after the conclusions below.) ## results and observations the obtained results indicate higher µf0 values for the children with asd as compared with the normal children, and this holds for all english vowels. besides, according to tongue position, the female children with asd have the highest µf0 value for the mid vowel /e/ and the lowest for the low vowel /a/ as compared with the other english vowels, whereas for the normal female children, the high vowel /i/ gives the highest and the mid vowel /o/ the lowest µf0 values. however, in the case of the male children with asd, no such pattern is found: male children with asd follow a similar µf0 trend to the normal male children for all english vowels. these results can be analyzed from figure 1(a) and 1(b). like µf0, in the case of µe also, the children with asd have higher values for all five english vowels as compared with the normal children. also, for all five english vowels, the female children with asd have higher µe values than the male children with asd, but this is vice versa for the normal children.
besides, in the case of the children with asd, the same vowel /e/ has the lowest µe values for both male and female children, whereas this is not the case for the normal male and female children. likewise, in the case of the normal children, the same vowel /o/ has the highest µe values for both male and female children, whereas this is not true for the male and female children with asd. these statements can be observed from the tabulated µe values. regarding µsoe, only the front vowels /e/ and /i/ indicate lower values for the children with asd as compared with the normal children; in the case of the mid and rear vowels, i.e., /a/, /o/, and /u/, µsoe indicates higher values for the children with asd than for the normal children. besides, in the case of both the normal male and female children, the same vowel /i/ has the highest µsoe values as compared with the other english vowels, but this statement is not true in the case of the children with asd. again, in the case of both the male and female children with asd, the same vowel /a/ has the lowest µsoe values as compared with the other english vowels, whereas this is not the case with the normal children. all these results can be observed from the tabulated µsoe values. the µzcr values are lower for the children with asd as compared with the normal children, and this is true for all english vowels; this observation is graphically represented in figure 1(g) and 1(h). also, in the case of the front and mid vowels, i.e., /a/, /e/, and /i/, the male children with asd have higher µzcr values as compared with the female, but it is vice versa in the case of the normal children. besides, in the case of both male and female children with asd, the same vowel /e/ has the lowest µzcr values as compared with the other english vowels, whereas this is not the case with the normal children. these results can be observed from the tabulated µzcr values. the µf2 values are higher for all english vowels in the case of the children with asd as compared with the normal children. also, the µf2 values for all five english vowels of both the male and female children with asd follow a similar trend, whereas no such trend is observed in the case of the normal children. besides, according to tongue position, both the male and female children with asd have the highest µf2 values for the mid vowel /e/ as compared with the high and low vowels; in the case of the normal children, the high vowels /i/ and /u/ give the highest µf2 values for the male and female children, respectively. all these results can be analyzed from the tabulated µf2 values. as compared with the normal children, the children with asd have higher µf4 values for the front and mid vowels only. next, according to tongue position, in the case of both the male and female children with asd, µf4 gives the highest values for the mid vowel /o/ as compared with the high and low vowels, but this is not the case for the normal children. the µf4 results can be analyzed from figure 2(g) and 2(h); the µf3 values are given in the corresponding table. the µf5 values are higher for all five english vowels in the case of the children with asd as compared with the normal children, as depicted in figure 2(i) and 2(j). also, both the male and female normal children have the lowest µf5 values for the mid vowel /o/ as compared with the high and low vowels, but this statement is not true in the case of the asd children.
the µf5 values are also tabulated. all five english vowels show higher µfd1 values for the children with asd as compared with the normal children, as depicted in figure 3(a) and 3(b). according to tongue position, both the male and female normal children have the lowest µfd1 values for the high vowel /i/ as compared with the mid and low vowels; in the case of the asd children, the mid vowels /e/ and /o/ indicate the lowest µfd1 values for the female and male children, respectively. in the case of µfd2, only the front vowels /e/ and /i/ have higher values for the children with asd as compared with the normal children, as graphically shown in figure 3(c) and 3(d). in addition, according to tongue position, both the male and female normal children have the highest µfd2 values for the low vowel /a/ as compared with the mid and high vowels. on the other hand, as compared with the other english vowels, the high vowel /u/ has the highest µfd2 value for the male asd group and the mid vowel /e/ has the highest µfd2 value for the female asd group. the µfd2 values are also tabulated. ## analyses of results this section describes the observed results from a speech production point of view. firstly, for f0, which reveals the source characteristics of the speech production system, the result infers that for all five english vowels, the male and female children with asd have a higher vocal fold vibration rate than the normal male and female children. furthermore, in the case of the female children with asd, the mid vowel /e/ has the highest and the low vowel /a/ the lowest vocal fold vibration rate as compared with the other english vowels; in the case of the normal female children, the high vowel /i/ has the highest and the mid vowel /o/ the lowest. these observations can be analyzed from figure 1(a) and 1(b). in the case of e, which gives information about the combined source-system characteristics of the speech production system, the result implies that the children with asd have louder speech and put more vocalization effort than the normal children. also, for all english vowels, the female children with asd put more vocalization effort than the male children with asd, but this is vice versa in the case of the normal group. these results can be analyzed from the µe values graphically depicted in figure 1(c) and 1(d). the observed soe result infers that in the case of the front vowels, the strength of impulse-like excitation is lower during the glottal activity (vibration of the vocal folds) of the children with asd as compared with the normal children; in the case of the mid and rear vowels, the strength of impulse-like excitation is higher for the asd children. this result can be analyzed from figure 1(e) and 1(f). the f1 result implies that for all five english vowels, the children with asd have a lesser oral constriction in the front half of the oral section of the vocal tract as compared with the normal children. again, in terms of pharyngeal constriction, it can be stated that during the pronunciation of all five english vowels, the pharyngeal constriction is greater for the children with asd as compared with the normal children.
the f1 observed result also implies that both the male and female children with asd have the greatest pharyngeal constriction for the low vowel /a/ as compared with the mid and high vowels, whereas the normal male and female children have the greatest pharyngeal constriction for the mid vowel /o/. furthermore, during the pronunciation of all english vowels, the children with asd raise their tongue higher than the normal children, because the f1 value increases as the tongue position rises. the f1 values for all english vowels are graphically depicted in figure 2(a) and 2(b). the f2 result implies that for all english vowels, the back tongue constriction is lesser and the front tongue constriction is greater for the children with asd than for the normal children. furthermore, it can be stated from the observed result that both the male and female children with asd have the least back tongue constriction and the greatest front tongue constriction for the mid vowel /e/ as compared with the high and low vowels. on the other hand, the normal male children have the least back tongue constriction and the greatest front tongue constriction for the high vowel /i/, and the normal female children for the high vowel /u/. this observation can be analyzed from figure 2(c) and 2(d). the f3 result implies that in the case of the children with asd, lip-rounding is lesser during the pronunciation of all english vowels; hence, the constriction is least, and as a result all english vowels give higher µf3 frequency values for the children with asd as compared with the normal children. the results are graphically depicted in figure 2(e) and 2(f). also, the results of the first three formants (f1, f2 and f3) indicate that the length of the pharyngeal-oral tract is shorter in the case of the children with asd, because the formant values of vowels are inversely proportional to the length of the pharyngeal-oral tract, and here the children with asd have higher µf1, µf2 and µf3 values for all english vowels. also, in terms of lip-rounding, the f1, f2, f3 and f5 results imply that the children with asd have lesser lip-rounding as compared with the normal group. in the case of the formant frequencies and dominant frequencies, the differences between the asd and the normal children are highest for vowel /i/, which implies that asd children probably have more difficulty in pronouncing words with vowel /i/. ## key contributions the key contributions of this study are the two separately recorded english speech datasets of non-native indian english-speaking children with asd and normal children, and the comparative vowel-wise analysis of source (f0, soe), vocal tract (formants, dominant frequencies) and combined source-filter (energy, zcr) features, several of which had not been considered in previous studies. ## conclusions the aim of this study is to analyze differences in various speech production features of the children with asd as compared with the normal children. only english vowel sounds are used in this study. an autism speech dataset and a normal children's speech dataset were recorded separately for this research purpose. then, differences between the children with asd and the normal children were analyzed by observing the source characteristics (f0 and soe), system characteristics (dominant frequencies and formants), and combined characteristics (zcr and e). it is observed that there are significant differences between the asd and the normal children in terms of their speech production characteristics in the english vowels' regions.
in the case of most of the speech production features, the asd children have significantly higher values than the normal children. these acoustic characteristics of the children with asd can be used as markers to identify asd. however, we did not find any single speech feature that can be utilized as a diagnostic marker for asd. the small size of the speech data for female asd children is a limitation of this study. in future studies, we will try to find a single speech feature that can be utilized as an acoustic marker to identify asd.
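the feature set analyzed above (f0, signal energy, zcr, and lpc-based formant estimates) can be extracted with standard tools. the sketch below is a minimal python illustration using librosa and numpy; the lpc-root method for formants is a textbook approach and the parameter choices are illustrative, not the exact processing used in this study.

```python
import numpy as np
import librosa

def vowel_features(path: str):
    y, sr = librosa.load(path, sr=48000)  # recordings were made at 48 khz

    # source feature: mean f0 via the pyin pitch tracker
    f0, _, _ = librosa.pyin(y, fmin=75, fmax=600, sr=sr)
    mu_f0 = float(np.nanmean(f0))

    # combined source-filter features: energy and zero-crossing rate
    mu_e = float(np.mean(librosa.feature.rms(y=y) ** 2))
    mu_zcr = float(np.mean(librosa.feature.zero_crossing_rate(y)))

    # vocal tract features: formants from LPC roots (textbook approach,
    # applied here to the whole clip rather than per-vowel frames)
    a = librosa.lpc(y, order=int(2 + sr / 1000))
    roots = [r for r in np.roots(a) if np.imag(r) >= 0]
    freqs = sorted(np.angle(roots) * sr / (2 * np.pi))
    formants = [f for f in freqs if f > 90][:5]  # F1..F5 estimates

    return {"mu_f0": mu_f0, "mu_e": mu_e, "mu_zcr": mu_zcr,
            "formants": formants}
```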
| 1,520
|
78
| 2,023
|
Sartipi-Sedighin at SemEval-2023 Task 2: Fine-grained Named Entity Recognition with Pre-trained Contextual Language Models and Data Augmentation from Wikipedia
|
This paper presents the system developed by the Sartipi-Sedighin team for SemEval 2023 Task 2, which is a shared task focused on multilingual complex named entity recognition (NER), or MultiCoNER II. The goal of this task is to identify and classify complex named entities (NEs) in text across multiple languages. To tackle the MultiCoNER II task, we leveraged pre-trained language models (PLMs) fine-tuned for each language included in the dataset. In addition, we also applied a data augmentation technique to increase the amount of training data available to our models. Specifically, we searched for relevant NEs that already existed in the training data within Wikipedia, and we added new instances of these entities to our training corpus. Our team achieved an overall F1 score of 61.25% in the English track and 71.79% in the multilingual track across all 13 tracks of the shared task that we submitted to.
|
https://aclanthology.org/2023.semeval-1.78
|
## introduction the multiconer 2023 task 2 was initiated with the purpose of developing ner systems that can accurately detect fine-grained nes across multiple languages. the shared task was organized into 13 tracks, with 12 monolingual tracks and one multilingual track, to facilitate a thorough evaluation of the participating systems @xcite . despite the inherent complexity and ambiguity of the dataset instances, the task presented two main features that are worth mentioning. the first feature was the identification of fine-grained nes, which required the systems to detect and classify a wide range of entities with varying levels of specificity. the second feature involved the augmentation of test data for some languages with simulated errors to increase the difficulty and realism of the task @xcite . these features posed significant challenges for the participating systems and necessitated the use of advanced nlp techniques. the work presented in this paper makes two main contributions to the field of ner. the overall architecture of the model used for fine-tuning can be seen in figure 1. ## related work ner is a natural language processing (nlp) task that involves identifying and classifying nes in text, such as person names, organization names, location names, and others, into predefined categories @xcite . ner is widely used in many nlp applications, such as information extraction @xcite , text summarization (khademi and fakhredanesh, 2020), and question answering (@xcite; mollá et al. @xcite). figure 3: number of nes for the training, development, test, and augmentation sets per fine-grained label. fine-grained ner is a more specific variant of ner that aims to recognize more detailed categories of nes @xcite . moreover, various ner datasets have been released in both coarse-grained @xcite @xcite and fine-grained @xcite @xcite domains. additionally, there exists an automatic translation of popular ner benchmarks for cross-lingual ner evaluation @xcite . multiconer was initially introduced as part of semeval 2022 task 11 with the objective of developing multilingual ner systems capable of identifying coarse-grained entities. the competition featured a total of 13 tracks, comprising 11 monolingual tracks, one code-mixed track, and one multilingual track @xcite . the multiconer dataset is an extensive multilingual dataset for ner that includes three domains: wiki sentences, questions, and search queries. the dataset is designed to address modern ner challenges, including low-context scenarios, such as short and uncased text, complex entities like movie titles, and long-tail entity distributions @xcite . in its second iteration, multiconer 2023 aimed to build ner systems capable of identifying nes across 12 languages, including english (en), spanish (es), hindi (hi), bangla (bn), chinese (zh), swedish (sv), farsi (fa), french (fr), italian (it), portuguese (pt), ukrainian (uk), and german (de). the shared task was subdivided into 13 tracks, comprising 12 monolingual tracks and one multilingual track. two main features of this task are worthy of mention: firstly, the identification of fine-grained nes, such as symptom, politician, and writtenwork; secondly, for some languages, namely english, chinese, italian, spanish, german, french, portuguese, and swedish, the test data was augmented with simulated errors to increase the difficulty and realism of the task @xcite . @xcite presents several challenges that current datasets and models do not adequately address.
these challenges include short-text inputs, long-tail entity distributions, emerging entity types, and complex entities that are linguistically difficult to parse. these challenges pose problems for current ner systems, which are primarily trained on news texts with long sentences that discuss multiple entities. to overcome these challenges, the authors build gazetteers that incorporate external knowledge and contextual information, which is represented using transformers such as bert. contextual features from bert and gazetteers are combined through a fusion process, and the resulting features are then fed into a conditional random field (crf) layer. this enables the model to incorporate both external knowledge from gazetteers and contextual information from bert to better handle the challenges. to extend these challenges to multilingual and code-mixed settings, @xcite have introduced two datasets: mlowner, a multilingual ner training dataset for short texts in six languages, and ember, a code-mixed ner dataset covering the same languages as mlowner. these datasets can assist in training models to recognize complex nes and provide a basis for evaluating the models' performance, which is included in multiconer. ## augmentation in order to augment the dataset and fine-tune our ner models, we utilized the wikipedia python library (https://github.com/goldsmith/wikipedia) to generate additional instances for some of the shorter instances in the dataset. to accomplish this, we constructed sets of entities from the existing entities in each language, excluding instances labeled as "o". we then used the wikipedia library to search for these entities, which provided a corresponding paragraph for each entity. to segment these paragraphs into sentences, we leveraged stanza @xcite . for each paragraph, we selected one sentence containing the entity, with the entity positioned in the middle or at the end of the sentence rather than at the beginning. subsequently, we assigned the "o" tag to the other tokens in the sentence and labeled the corresponding fine-grained category for the entity. we followed this process for all languages, with the exception of bn, where no sentence segmenter was available in stanza; our aim was to maintain consistency in approach across all languages. it is important to note that certain entities in wikipedia had multiple descriptions available, but we opted to utilize only one for the sake of simplicity. given the time-consuming nature of searching for each entity in wikipedia, we employed dask @xcite to expedite the search process; with its aid, it took approximately two hours per language to search all entities. the data presented in figure 5 include one primary instance for each language, along with an augmented version of that instance. these datasets are publicly available in this git repository foot_0 . the primary motivation behind this work is to increase the diversity of instances used for training. additionally, when a ne appears in a short sentence or with limited context, generating more instances of that ne with different contexts can help provide a more comprehensive understanding of its meaning and usage. this can improve the model's ability to correctly identify and classify nes in a variety of different contexts. a sketch of this augmentation pipeline is given below.
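a minimal sketch of the described augmentation, assuming the wikipedia and stanza packages; the tag scheme, label names, and helper structure are illustrative, not the team's actual code.

```python
import wikipedia
import stanza

# assumes the english model is downloaded once via stanza.download("en")
nlp = stanza.Pipeline(lang="en", processors="tokenize")

def augment_entity(entity: str, fine_label: str):
    """Fetch a Wikipedia sentence mentioning the entity and turn it into
    a BIO-tagged NER instance (all non-entity tokens get "O")."""
    try:
        summary = wikipedia.summary(entity, sentences=5)
    except wikipedia.exceptions.WikipediaException:
        return None
    for sent in nlp(summary).sentences:
        tokens = [t.text for t in sent.tokens]
        text = " ".join(tokens)
        pos = text.lower().find(entity.lower())
        # keep sentences where the entity sits mid- or end-of-sentence
        if pos > 0:
            tags = ["O"] * len(tokens)
            ent_tokens = entity.split()
            for i in range(len(tokens) - len(ent_tokens) + 1):
                window = [t.lower() for t in tokens[i:i + len(ent_tokens)]]
                if window == [t.lower() for t in ent_tokens]:
                    tags[i] = f"B-{fine_label}"
                    for j in range(i + 1, i + len(ent_tokens)):
                        tags[j] = f"I-{fine_label}"
                    return list(zip(tokens, tags))
    return None

print(augment_entity("Marie Curie", "Scientist"))
```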
## data statistics
out of all the test sets, corrupted data was found in six languages, namely en, es, fr, it, pt, and sv.
table 1: the pre-trained models fine-tuned for each track.
| model name | lang |
| --- | --- |
| bert-base-spanish-wwm-uncased @xcite | es |
| bert-base-german-uncased foot_1 | de |
| roberta-hindi foot_2 | hi |
| chinese-roberta-wwm-ext @xcite | zh |
| bert-base-swedish-cased @xcite | sv |
| bert-base-italian-xxl-uncased @xcite | it |
| bert-large-portuguese-cased (souza et al., 2020) | pt |
| bert-base-french-europeana-cased (schweter, 2020a) | fr |
| banglabert @xcite | bn |
| roberta-large-wechsel-ukrainian foot_3 | uk |
| deberta-v3-large @xcite | en |
| bert-base-parsbert-uncased @xcite | fa |
| xlm-roberta-large @xcite | multi |
## methodology
in recent years, transformer-based models such as bert @xcite have revolutionized the field of nlp, resulting in significant improvements in ner performance. these models are pre-trained on massive amounts of text data, enabling them to capture complex patterns and relationships between words in the text. they generate highly contextualized embeddings for each token in a sentence, allowing them to understand the meaning of words in context. to leverage the power of these models, we fine-tuned a plm for each language on the training data.
hyper-parameters: we used the same hyper-parameters for all of our experiments. we trained all models with the hugging face @xcite trainer for 15 epochs and saved the best model according to the lowest validation loss. we set the batch size, learning rate, and weight decay to 32, 2e-5, and 0.01, respectively.
fine-tuning: during this phase, we utilized transformer-based encoders. the models used for fine-tuning in the evaluation phase are listed in table 1. additionally, we also fine-tuned the roberta-large (liu et al., 2019) and bert-large-uncased (devlin et al., 2018) models during the practice phase, achieving f1 scores of 63.03% and 65.13%, respectively. however, the deberta-v3-large model yielded a higher f1 score of 65%, leading us to choose this model for further analysis.
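for concreteness, a minimal sketch of this fine-tuning recipe with the stated hyper-parameters follows; the checkpoint, label set, and dataset objects are placeholders rather than the exact ones used in the shared task:

```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "microsoft/deberta-v3-large"      # placeholder: the english-track model
labels = ["O", "B-Scientist", "I-Scientist"]   # placeholder subset of the tag set

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(
    checkpoint, num_labels=len(labels))

args = TrainingArguments(
    output_dir="ner-finetune",
    num_train_epochs=15,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,         # keep the checkpoint with the lowest
    metric_for_best_model="eval_loss",   # validation loss, as described above
    greater_is_better=False,
)

# train_ds and dev_ds are assumed to be pre-tokenized, label-aligned datasets
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=dev_ds)
trainer.train()
```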
## results and analysis
in this section, we present the official results as reported by the organizers.
base model: figure 7 presents an analysis of the performance of the base systems, which were trained on the training data without any augmentation.
figure 7: heat map of the base system trained on the main training data; the coarse-grained classes are creative works (cw), location (loc), person (per), group (grp), product (prod), and medical (med).
the results show that recognizing artwork and musicalwork proved to be more challenging within the creative work class. similarly, otherloc emerged as the most difficult entity to detect within the location class. in the person class, the names of scientists and otherpersons were found to be the most challenging entities, while humansettlement was more easily recognizable. moreover, subclasses such as privatecorp and aerospacemanufacturer within the group class were particularly demanding, whereas sportsgrp had the best f1 values. these findings highlight the categories and languages in which ner systems struggle to accurately detect named entities.
figure 8 illustrates the discrepancies between the base systems and augmentation. the data in this figure indicate that increasing the quantity of data in each category can have either a positive or a negative effect on the f1 score, depending on the language and sub-class. the negative impact of augmentation is depicted in black. overall, data augmentation had the most positive impact on sub-classes such as privatecorp, symptoms, and scientists. moreover, in terms of languages, hindi and french exhibited the highest improvements due to data augmentation.
## conclusion
in this work, we utilized pre-trained language models (plms) to build a system for recognizing complex nes. to increase the number of training examples and improve the performance of the system, we applied a simple data augmentation technique. however, we observed that this approach led to mixed results, with improvements in some subclasses but a reverse effect in others. one possible reason for this outcome is that the augmentation technique assigns "o" tags to the rest of the tokens in a sentence, which may lead to some loss of information. furthermore, the augmented data may be more unbalanced than the original data, with some instances being increased more than others. to address this issue, it may be necessary to use more sophisticated augmentation techniques or to balance the data more effectively so that the model can learn from a representative set of examples.
| 26,356
|
6
| 2,023
|
Modelling the Reduplicating Lushootseed Morphology with an FST and LSTM
|
In this paper, we present an FST-based approach for conducting morphological analysis, lemmatization and generation of Lushootseed words. Furthermore, we use the FST to generate training data for an LSTM-based neural model and train this model to do morphological analysis. The neural model reaches 71.9% accuracy on the test data. Furthermore, we discuss reduplication types in the Lushootseed language forms. The approach involves using both attested instances of reduplication and bare stems to which a variety of reduplications can be applied, as it is unclear just how much variation can be attributed to the individual speakers and authors of the source materials. That is, there may be areal factors that can be aligned with certain types of reduplication and their frequencies.
|
https://aclanthology.org/2023.americasnlp-1.6
|
## introduction
a significant proportion of the world's languages face the threat of endangerment to varying degrees. this endangered status poses certain constraints on the extent to which modern nlp research can be conducted with such languages. this is due to the fact that many endangered languages lack extensive textual resources that are readily accessible online. furthermore, even with available resources, there is concern about the quality of the data, as it may be influenced by various factors such as the author's level of fluency, accuracy of spelling, and inconsistencies in character encoding at the most basic level (see @xcite ). reduplication appears in many languages of the world @xcite . while full reduplication is observed as a repeated word form, partial reduplication is associated with extensive variety, both regular and irregular. this paper focuses on a finite-state description of the partial reduplication patterns found in the lushootseed language forms (lut and slh). the most predominant forms of reduplication in lushootseed are the distributive (distr) and the diminutive (dim), which can, in fact, appear in tandem, but there are restrictions delimiting their use (see @xcite @xcite ). in addition to distr and dim, however, we also find a third and slightly less frequent random or out-of-control distributive (oc) (see @xcite ). the base of these three types of reduplication can be found in the initial two to three phonemes of the word root, most often referred to with the notation c₁vc₂, but the authors of this paper will surround the vowel with parentheses to indicate the possibility of its absence, c₁(v)c₂, and thus accommodate the radical cc mentioned in @xcite . the radical consists of simple and compound letters alike, e.g., qʷ, gʷ, λ', all of which add to the issues of facilitating the extensive variation in lushootseed reduplication. first, the concept of compound letters involved in regular reduplication segments is a very important part of the finite-state description for lushootseed. although the 46 phonemes canonize the extensive alphabet, they create their own demands on the description. our facilitation of lushootseed reduplication with a finite-state machine foot_0 is based on the use of a five-placeholder segment concatenated directly before the radical. we number these right-to-left away from the radical, {p5}{p4}{p3}{p2}{p1}, where the odd-numbered placeholders represent consonants and the even-numbered ones vowels. the system is set up so that the placeholders {p3}{p2}{p1} are used with distr, dim and oc reduplication, whereas the more remote placeholders {p5}{p4} are used to deal with distr + dim combinations. albeit, theory sees the distributive losing the third phoneme due to a principle of antigemination (see @xcite , referencing also hess (1967: 7) and @xcite ). we have assumed the absence of geminates and have therefore left them out of the equation. perhaps further studies will require their addition to our finite-state description of the reduplication permeating the lushootseed vocabulary.
## related work
several different methods are currently in use to model the morphology of endangered languages computationally. in this section, we cover some of the existing rule-based, statistical and neural approaches. our method embraces the rule-based tradition because machine-learning based methods rely on a lot of annotated data, which we currently do not have for lushootseed.
in the rule-based research, morphology has mainly been modelled using a finite-state transducer (fst) built with one of several technologies such as hfst @xcite , openfst @xcite or foma @xcite . such an approach has been successful in describing languages of a variety of different morphological groups, such as polysynthetic languages (e.g. plains cree @xcite , east cree @xcite and odawa @xcite ), agglutinative languages (e.g. komi-zyrian @xcite , san mateo huave (tyers and castro, 2023), skolt sami @xcite , sakha @xcite and erzya @xcite ) and fusional languages (e.g. akkadian @xcite and arabic @xcite ). for statistical approaches, @xcite has done research on english morphology with an approach that comprises two interrelated components, morphological rule learning and morphological analysis. the morphological rules are acquired by means of statistical learning from a list of words. in another line of work, @xcite has developed a machine learning technique that utilizes sequence labeling and kernel methods for training, which enables the model to effectively capture the non-linear associations between various aspects of the morphological features found in tamil. with the emergence of unimorph @xcite , which continues to include only partial morphological descriptions of each language, a great deal of neural research has emerged to conduct morphological analysis. the typical models used are lstm-based @xcite and transformer-based (see @xcite ) models.
## materials and methods
the materials used for this paper come from the lushootseed dictionary of @xcite and language learning binders by zalmai zahir and peggy kʷipalq ahvakana (book 1 dᶻixʷ 'first', book 2 dəgʷi 'you', book 3 s.pəłəd 'food', book 4 palpal 'house'), as well as a binder of transcriptions to recordings from the university of washington archives received in 2003 on the muckleshoot reservation. the method involves a mnemonic descriptive approach, implemented for a decidedly deterministic, machine- and human-friendly solution (if there is such a thing). to this end, we adhere to a three-phoneme segment approach to lushootseed description and simply start with the labeling 123. here ‹1› indicates the first consonant of the radical (root), ‹2› the vowel (which seems to be absent/latent in at least a few roots), and ‹3› the second consonant. we then introduce a series of five ordered placeholders to precede the root. the insertion of placeholders is convenient in this finite-state description if they come before the root. although there are numerous segments of regular morphology, inserting a series of five placeholders immediately before the root can be seen as just another step in regular concatenation. here it might be mentioned that theoretic distinctions between inflection and clitics do not come before consideration for orthographic practices (cf. @xcite ). the five placeholders, numbered away from the first three letters of the root, are set so that the odd numbers correlate with the consonants and the even numbers with the vowels. thus, {p3} correlates with kʷ, {p2} with a, and {p1} with t:
{p5}{p4}{p3}{p2}{p1} kʷatač : kʷatač 'climb'
s‹{p5}{p4}{p3}{p2}{p1} kʷatač : skʷatač 'mountain'
s‹{p5}:0 {p4}:0 {p3}:kʷ {p2}:a {p1}:0 kʷatač : skʷakʷətač 'mountains'
s‹{p5}:0 {p4}:0 {p3}:kʷ {p2}:a {p1}:0 kʷ a:0 tač : skʷakʷtač 'hill'
with this as a point of departure, we can then enumerate four predominant tendencies: one total reduplication, one partial to the left, and two partial to the right.
first, total reduplication is 123123, which is extremely regular and typically distributive in meaning. second comes the diminutive, with extensive variation: 1213, 12123, 1i13, 1i123, 1iq13. third, and less frequent in the materials, are 123.
## fst models
the finite-state description of lushootseed involves several layers of experience. it addresses issues involving orthography, morphophonology, concatenation and symmetric tagging for subsequent machine readability. the orthography, which is canonized by the language's reduplication patterns, uses lower-case letters with multiple diacritics, as no precomposed letters are available for nearly half of the alphabet. the concatenative morphology, which, with the exception of the possessive person-marking strategy, is symmetric, involves abbreviated or short-hand forms for some consecutive morphemes. the variation in multiple reduplication patterns appears to be partially monolectic or geographic in nature, but there is definitely also breathing room for variation in where individual derivations are used. in general, both preposed and postposed affixing is present, and, in particular, there is asymmetry in the possessive person-marking strategy. for language-independent comparison, we use flag diacritics in our models, which allows for suprasegmental concatenation and facilitates regular tagging practices for use in downstream language technology, including work with python libraries.
## current state
presently the lexicon is extremely small. it contains 110 verbs and 283 nouns, which might explain the low coverage rate of 70%, i.e., 1822 unrecognized tokens out of a total of 6186 tokens in the test corpus. the two-level model has 31 rules governing reduplication copying patterns in the placeholders and vowel loss or permutation in the root. the vowel system has been complemented by vowels with acute and grave accents, which might be useful in pedagogical use of the language model and in work with language variation across the continuum of the language community. the lexc continuation lexica number 135. these continuation lexica provide coverage for regular nominal and verbal inflection, which utilizes a mutual set of morphology controlled partially with flag diacritics.
## neural extension
no matter how extensive an fst is, it still cannot cover the entire lexicon of a language. for this reason, we also experiment with training neural models to do morphological analysis based on the fst described in this paper. the goal is not to replace the fst we have described in this paper, but to develop a neural "fallback" model that can be used when a word is not covered by the fst. we follow the approach suggested by hämäläinen et al. (2021), using the code that has been made available in uralicnlp @xcite . this approach consists of querying the fst for all the possible morphological forms for a given lemma. for a given input, the fst will thus produce all possible inflections and their morphological readings. we limit our data to nouns only, and we use a list of 214 lushootseed nouns for which we generate all the possible morphological forms. this way, we produce a dataset consisting of around 756,000 inflectional form-morphological reading tuples. this means that we have an average of 3536 inflectional forms for each lemma. we split this data into 70% training, 15% validation and 15% testing. the test data has words that are completely unseen to the model in the training data.
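a minimal sketch of this generation step and the lemma-disjoint split is shown below; it assumes a compiled lushootseed transducer is installed for uralicnlp under a hypothetical language code "lut", and the lemmas and tag strings are illustrative rather than the actual tag inventory:

```python
import random
from uralicNLP import uralicApi

# uralicApi.download("lut") would be required once; "lut" is an assumed code
lemmas = ["stubš", "sqʷəbayʔ"]                        # illustrative lemmas only
tag_combos = ["+N+Dim", "+N+Distr", "+N+Dim+Distr"]   # illustrative tags only

pairs = []
for lemma in lemmas:
    for tags in tag_combos:
        # the fst returns (surface form, weight) pairs for each analysis string
        for form, _w in uralicApi.generate(lemma + tags, "lut"):
            pairs.append((form, lemma + tags))

# split by lemma so that test word forms come from entirely unseen paradigms
random.seed(1)
random.shuffle(lemmas)
n_tr, n_dev = int(0.7 * len(lemmas)), int(0.15 * len(lemmas))
train_l = set(lemmas[:n_tr])
dev_l = set(lemmas[n_tr:n_tr + n_dev])

def split_of(analysis):
    lemma = analysis.split("+")[0]
    return "train" if lemma in train_l else "dev" if lemma in dev_l else "test"

for form, analysis in pairs:
    print(split_of(analysis), form, "->", analysis)
```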
this means that in the testing, the model needs to analyze lemmas and word forms it has not seen before, even as part of a partial paradigm. for the model itself, we use a python library called opennmt @xcite and use it to train an lstm-based recurrent neural network architecture with the default settings of the library. the task is defined as a character-level neural machine translation problem where each word form is split into white-space-separated characters on the source side, and the morphological readings produced by the fst are split into separate morphological tokens on the target side. examples of the training data follow this format. the overall accuracy of the model is 71.9%. this is measured by counting how many full morphological readings the model predicted correctly for each word form in the test corpus. the results were also broken down per morphological tag.
## discussion and conclusions
in order to further test the accuracy of our lushootseed description, more test data and descriptions of regular inflection will be needed. the challenge is to continue with the outline given for an inflectional complex (see @xcite ) and define what can actually be described as regular. more time will be required to model more recent reanalyses of the morphological complexes. this means we may need to establish whether a six-placeholder segment is required to aptly describe lushootseed reduplication and put our description in line with a hypothesis of antigemination. the idea of describing morphological complexes as series of aligned clitics is very interesting (see @xcite ). this will actually provide fuel for future work with syntax, since most of the semantic information is already present in the word roots where the clitics conglomerate.
| 20,624
|
1
| 2,025
|
Are We Paying Attention to Her? Investigating Gender Disambiguation and Attention in Machine Translation
|
While gender bias in modern Neural Machine Translation (NMT) systems has received much attention, the traditional evaluation metrics for these systems do not fully capture the extent to which models integrate contextual gender cues. We propose a novel evaluation metric called Minimal Pair Accuracy (MPA), which measures the reliance of models on gender cues for gender disambiguation. Evaluating a number of NMT models with this metric, we show that they ignore available gender cues in most cases in favour of (statistical) stereotypical gender interpretation. We further show that in anti-stereotypical cases, these models tend to more consistently take male gender cues into account while ignoring the female cues. Finally, we analyze the attention head weights in the encoder component of these models and show that while all models to some extent encode gender information, the male gender cues elicit a more diffused response compared to the more concentrated and specialized responses to female gender cues.
|
https://aclanthology.org/2025.gitt-1.1
|
## introduction
the field of machine translation (mt) has undergone significant technological shifts over the past decades, moving from transparent rule-based systems to increasingly opaque probability-based ones such as statistical and neural mt. furthermore, the complexity and scale of current transformer-based @xcite architectures, which underpin both neural mt (nmt) and large language models (llms), are making it more challenging to trace back model decisions and understand the underlying processes. this growing opacity raises concerns for ai governance, where transparency, fairness and risk mitigation are becoming increasingly important for a responsible deployment of mt technology. at the same time, research on (gender) bias in mt has been on the rise, reflecting more general tendencies in the field of natural language processing (nlp) @xcite @xcite . the increasing awareness has led to concerns related to the flaws, inconsistencies and biases that models inherit, propagate and potentially exacerbate, especially with the increasing integration of nlp tools into people's everyday lives @xcite . in response, ai governance policies are emerging worldwide, such as the european union's ai act (2024), aiming to regulate the development and deployment of ai systems to ensure ethical standards and mitigate potential risks. for mt specifically, the nature of the translation task itself further complicates matters due to cross-linguistic differences in gender representation and expression across languages, where social gender, linguistic gender and diverse cultural contexts intersect.
## bias statement
we define gender bias in mt as the tendency of models to default to learned statistical associations rather than systematically relying on contextual information for gender disambiguation. we focus on cases where gender is unambiguously expressed in the source sentence, typically through pronouns referring to human entities, capturing one subtype of gender bias. ambiguous cases, which lack explicit gender cues, fall outside the scope of this paper. while our framework targets the english-italian (en-it) language pair, it is broadly applicable to any setting where gender must be explicitly marked in the target language. we particularly highlight stereotypical bias, for which models successfully generate feminine translations when the target word (i.e., the profession noun) is already associated with women (e.g., @xmath0), but struggle to override male defaults in anti-stereotypical contexts. this asymmetry suggests that gender disambiguation might be driven by learned priors rather than syntactic dependencies, reinforcing a male-as-norm bias @xcite . such bias can lead to both representational harm, by perpetuating traditional gender roles, and allocational harm, by systematically underrepresenting women in male-dominated professions @xcite . our analysis only considers binary gender due to the constraints of the winomt dataset, which relies on u.s. labor statistics and morphological analysis tools that categorize gender along a binary axis. while we acknowledge that this is a major limitation and gender is not a binary construct, there is no standardized approach to systematically evaluate non-binary gender bias in mt. broader inclusivity challenges persist and underscore the need for future work to develop more inclusive methodologies that better reflect gender as a spectrum.
## related work
research on gender bias in mt has largely focused on: analyzing mt output (e.g. rescigno et al.
(2020); ramesh et al. (2021)); rewriting into gendered outputs (e.g. vanmassenhove et al. (2018); moryossef et al. (2019); habash et al. (2019)) or neutral outputs (e.g. vanmassenhove et al. (2021a); sun et al. (2021)); word-embedding debiasing techniques (e.g. hirasawa and komachi (2019); font and costa-jussà (2019)); domain adaptation (e.g. saunders and byrne (2020)); counterfactual data augmentation (e.g. zmigrod et al. (2019)); and the development of novel benchmarks and evaluation sets (e.g. stanovsky et al. (2019); luisa et al. (2020)). given that several studies @xcite @xcite already offer a more comprehensive overview of the broader discussions and research on (gender) bias in language technology, we specifically dedicate this related work section to the limited body of work focusing on the internal mechanisms underlying gender bias in mt models and on interpretability techniques. mt-specific research on interpretability techniques has largely focused on linguistic competence through probing @xcite , or on analyzing contrastive translations @xcite @xcite @xcite . more recent work investigated how mt systems process intra- and inter-sentential context and whether their context usage aligns with human expectations @xcite @xcite . despite high overall performance, these studies highlight how models often struggle to effectively leverage contextual information, either failing to integrate necessary information or attending to irrelevant tokens when resolving ambiguities @xcite , an interesting finding that raises concerns that gender disambiguation could indeed be driven by biased statistical patterns rather than by reliance on relevant contextual cues. the problem of context integration is not only relevant to model decision-making but also affects how gender bias is evaluated. template-based evaluation frameworks, such as winomt @xcite , provide controlled settings to measure surface-level accuracy metrics, and have been widely used to quantify gender bias across different language pairs and mt systems @xcite @xcite . however, as these primarily rely on the alignment and morphosyntactic analysis of lexically gender-ambiguous words, they do not reveal whether models actively integrate contextual cues when making gender-related decisions. these limitations underscore the need for more nuanced evaluation methods. a promising avenue for investigating how gender cues influence model decisions is the study of context mixing, i.e., the ability of transformer-based models to dynamically incorporate information from the broader context into token representations. this process is largely governed by the attention mechanism, which plays a central role in these models. while attention-based analyses have been criticized for their reliability @xcite , and more advanced interpretability methods have been introduced @xcite @xcite @xcite , attention weights remain a popular choice for analyzing model behavior due to their ability to provide direct insights into token interactions across layers and heads. as a matter of fact, they have been extensively leveraged to track token dependencies, revealing that specific attention heads may specialize in distinct linguistic functions @xcite @xcite @xcite @xcite @xcite . to the best of our knowledge, only the study by @xcite has attempted to control gender through internal mechanisms in an mt setting. they explored this by probing and deactivating specific neurons associated with gender in a long short-term memory (lstm) architecture.
their findings showed that gender-related properties are widely distributed across the network, making it very difficult to effectively control the output.
## experimental setup
in order to examine the extent to which contextual gender cues contribute to the representation of profession nouns for different models, we analyzed how multiple state-of-the-art models (section 4.1) integrate the contextual gender cues provided in the winomt challenge set into the gender disambiguation process (section 4.2).
## evaluating context integration in gender disambiguation
in this section, we first delve into the evaluation of contextual cue integration through our novel metric. next, in section 6, we continue with the analysis of the encoder attention head weights to investigate how gender cues are integrated into the target representations.
## investigating context integration through attention
to gain further insight into how contextual gender information is encoded within transformer models, we further investigate the extent to which gender cues are integrated into the representation of target words. for example, if a model correctly translates both the pro-s and anti-s examples in figure 3 , we expect the representation of the target word librarian to be heavily influenced by the gender cue she/he in the original sentence. more specifically, we are interested in analyzing whether the attention mechanism contributing to the input representation of the target word attends to the gender cue and, if so, whether there are specific attention layers and heads that specialize in encoding gender cues.
## discussion
in this section, we reflect on the key findings from our two-fold analysis, their implications, as well as potential avenues for future research.
## conclusion
in this work, we examined how transformer-based nmt models integrate contextual gender cues and uncovered systematic biases and asymmetries in their processing mechanisms. taken together, our findings reinforce previous calls for greater caution when interpreting benchmark scores for gender accuracy in mt @xcite . surface-level improvements, such as higher gender accuracy, can still obscure deeper biases in how and under which conditions these forms actually appear. more nuanced and comprehensive analyses are needed to determine whether current systems truly leverage gender-specific cues or merely reinforce statistical stereotypes in subtler ways. without a more careful consideration of when, why and how certain patterns emerge, we risk misinterpreting progress and overlooking persistent and more structural biases in mt. ultimately, understanding how gender is encoded in translation models is a crucial component of ensuring fairness, accountability, and transparency in ai systems.
| 38,291
|
2
| 2,024
|
Proc2PDDL: Open-Domain Planning Representations from Texts
|
Planning in a text-based environment continues to be a significant challenge for AI systems. Recent approaches have utilized language models to predict planning domain definitions (e.g., PDDL) but have only been evaluated in closed-domain simulated environments. To address this, we present Proc2PDDL, the first dataset containing open-domain procedural texts paired with expert-annotated PDDL representations. Using this dataset, we evaluate the task of predicting domain actions (parameters, preconditions, and effects). We experiment with various large language models (LLMs) and prompting mechanisms, including a novel instruction inspired by the zone of proximal development (ZPD), which reconstructs the task as incremental basic skills. Our results demonstrate that Proc2PDDL is highly challenging for end-to-end LLMs, with GPT-3.5’s success rate close to 0% and GPT-4o’s 38%. With ZPD instructions, GPT-4o’s success rate increases to 45%, outperforming regular chain-of-thought prompting’s 34%. Our analysis systematically examines both syntactic and semantic errors, providing insights into the strengths and weaknesses of language models in generating domain-specific programs.
|
https://aclanthology.org/2024.nlrse-1.2
|
## introduction
planning is the task of finding a sequence of actions to achieve a goal in a given environment @xcite . in real life, the environment is often described with natural language texts. to enable text-based, automated planning, recent work has used language models (lms) to generate plans @xcite . however, this approach has been found to fall short with regard to both performance and interpretability @xcite . alternatively, another recent line of work has instead used lms to translate the natural language description of environments into the planning domain definition language (pddl) @xcite . this symbolic representation can then be solved by a planner to obtain a plan @xcite @xcite @xcite . despite the success of such neuro-symbolic methods, all the above work has only been evaluated in closed-domain simulated environments such as a household (e.g., alfred @xcite ) or discrete object placement (e.g., blocksworld @xcite ), as shown in table 1. to enable open-domain, text-based planning, we propose proc2pddl, a dataset to evaluate models' ability to generate pddl given procedural texts. proc2pddl consists of 27 pairs of open-domain procedures and pddl representations. each pddl representation includes a domain file df that models the types, predicates, and actions, and a problem file pf that models the entities, initial states, and goal states, as illustrated in figure 1 . because proc2pddl is not bound to any simulation, the pddl representations are manually annotated by experts trained on this task to ensure validity, resulting in 27 domain files and 95 problem files.
figure 2: our formulation of the df action prediction task: given a natural language procedure text and a domain file header, a language model (lm) follows zone of proximal development (zpd) instructions in three sequential skills to predict domain actions, including parameters, preconditions, and effects; during evaluation, the predicted df is compared to a gold reference and used to solve the corresponding pfs.
using this dataset, we study the task of action modeling @xcite , formulated as follows. the input is some relevant natural language text and the header of a df (i.e., types, predicates, and names of actions). based on a zpd instruction, the output is the domain actions in the df (i.e., parameters, preconditions, and effects). during evaluation, the predicted df is 1) compared to a ground-truth df as intrinsic evaluation, and 2) provided to a pddl solver with ground-truth pfs to check the existence and correctness of plans as extrinsic evaluation. our system is delineated in figure 2 . in this formulation, our assumption of the df header is necessary to ensure the consistency of semantics between the df and the pf for evaluation. it is also empirically motivated; for example, a kitchen robot may have access to types like 'ingredients' and predicates like 'diced' via some information extraction system given descriptive texts, but it may still need to predict, for "swinging a knife", the precondition that it is only safe to do so to the 'ingredients' and the effect that they will become 'diced'.
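to make the df format concrete, here is a minimal, hypothetical pddl action for the knife example above, written as a python string; the action name, typing and predicates are illustrative and are not copied from the proc2pddl annotations:

```python
# a hypothetical df action: the parameters, precondition and effect are exactly
# the three components the lm must predict given the df header
swing_knife = """
(:action swing-knife
  :parameters (?k - knife ?i - ingredient)
  :precondition (and (holding ?k))
  :effect (diced ?i))
"""
print(swing_knife)
```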
through our experiment, we show that the task of action modeling in proc2pddl is highly challenging for state-of-the-art lms: gpt-3.5 almost fails completely, gpt-4 can only generate exactly matching dfs 16% of the time and solvable pfs 33% of the time, and gpt-4o demonstrates 18% df accuracy and a 37% pf solving rate. by devising a zpd instruction that prompts lms to modularly generate pddl through an extraction-inference-translation approach, we improve action modeling performance.
table 1: comparison with prior work by the number of domain files (#df) and the datasets covered.
| work | #df | datasets |
| --- | --- | --- |
| ours | 27 | proc2pddl |
| (wong et al., 2023) | 2 | minecraft, alfred |
| (lyu et al., 2023) | 1 | saycan |
| (xie et al., 2023) | 2 | blocksworld, alfred |
| (liu et al., 2023) | 7 | blocksworld, etc. |
| (huang et al., 2023) | 1 | tabletop |
| (huang et al., 2022) | 1 | virtualhome |
| (silver et al., 2022) | 18 | blocksworld, etc. |
| (valmeekam et al., 2022) | 2 | blocksworld, logistics |
in our analysis, the syntactic errors indicate lms' weakness in generating low-resource and domain-specific programming languages @xcite like pddl, while the semantic errors suggest lms' inaccuracies in reasoning about actions and environments.
## task formulation
the task of predicting a planning domain definition in a text-based environment can be seen as translating natural language texts into the pddl symbolic language, which consists of a domain file (df) and one or more problem files (pfs). a df defines all actions in the environment. traditionally, the task of text-based pddl generation involves predicting a pf based on text t, where a successfully generated pf can be solved with the predefined df. in this paper, we address an alternative formulation, action modeling (a), in which the generated df, given text t and the domain header h foot_0 , is capable of producing plans for pfs.
## dataset
we introduce the proc2pddl dataset of 27 different t-df-pfs tuples, drawing procedural texts from wikihow articles on various topics (see appendix a). a class of graduate students at a u.s. university with prior knowledge of pddl are each given a wikihow article and annotate a df and multiple corresponding pfs from the article, each with a gold plan to solve it. on average, there are 13.33 defined actions in a df and 8.07 instantiated actions in a gold plan. in this work, all our data is used for evaluation, as none of our methods involve task-specific model training. sample data from proc2pddl can be found in appendix b.
## methodology
we first introduce a novel prompt design option, zpd, and then discuss the choices of text format (t), which can range from 10 to 2,000 tokens and influences the selection of lms.
## evaluation and analysis
now that a model generates the parameters, preconditions, and effects for each action, we have a complete df. we evaluate it in two ways (figure 2 ). intrinsically, we semantically compare the predicted a with the ground truth provided by proc2pddl and report an action-wide accuracy. equivalence of two action definitions depends neither on the naming of variables nor on the order within conjunctions (detailed in appendix e). extrinsically, to measure the actions' coherence, a bfs-based pddl solver foot_2 attempts to solve the ground-truth pfs with the predicted df, and a success rate is reported. an unsolved pf is caused by (1) no plan being found, (2) the solver running for more than 30 seconds, or (3) the solver returning an error (usually a syntax error in the generated pddl). we report both the intrinsic and extrinsic results.
## conclusion
we present proc2pddl, the first open-domain dataset that juxtaposes natural language and the planning domain definition language. our experiments show that zpd instructions improve lms' performance, while lms still find it challenging to translate the preconditions and effects of actions. we hope our instruction design, evaluations and dataset help future progress towards integrating the best of lms and formal planning.
| 34,786
|
14
| 2,023
|
SAE-NTM: Sentence-Aware Encoder for Neural Topic Modeling
|
Incorporating external knowledge, such as pre-trained language models (PLMs), into neural topic modeling has achieved great success in recent years. However, employing PLMs for topic modeling generally ignores the maximum sequence length of PLMs and the interaction between external knowledge and bag-of-words (BOW). To this end, we propose a sentence-aware encoder for neural topic modeling, which adopts fine-grained sentence embeddings as external knowledge to fully utilize the semantic information of input documents. We introduce sentence-aware attention for document representation, where BOW enables the model to attend to topical sentences that convey topic-related cues. Experiments on three benchmark datasets show that our framework outperforms other state-of-the-art neural topic models in topic coherence. Further, we demonstrate that the proposed approach can yield better latent document-topic features through improvements on document classification.
|
https://aclanthology.org/2023.codi-1.14
|
## introduction
topic models have been widely used to identify human-interpretable topics and learn text representations, which have been applied to various tasks in natural language processing (nlp) such as information retrieval @xcite , summarization @xcite , and semantic similarity detection @xcite . a typical topic model is based on latent dirichlet allocation (lda) @xcite and bayesian inference. however, to avoid the complex and expensive iterative inference of conventional topic models, topic modeling with deep neural networks has become the leading research direction in this field @xcite @xcite . neural topic models (ntms) usually exploit the bow representation as input, disregarding the syntactic and semantic relationships among the words in a document, thus leading to topics of relatively inferior quality. recently, pre-trained language models (plms) @xcite have demonstrated a strong ability to capture sentential coherence, achieving state-of-the-art performance on many natural language processing tasks. therefore, several approaches have been proposed to incorporate external knowledge into topic models to address the limitations of bow. a typical method that takes external knowledge as additional features @xcite concatenates the outputs of plms with bow data. another way @xcite is to distill the knowledge of a teacher plm to generate a smoothed pseudo-document, which guides the training of a student topic model. however, there are still limitations to the above approaches. firstly, document-level sequences are often too long to be modeled, since plms take token-level sequences of bounded length as input. extracting a single document-level semantic embedding from a plm as external knowledge ignores this restriction on sequence length and loses a large amount of semantic information from the input text. secondly, the difference in learning objectives between ntms and plms makes it challenging to incorporate external knowledge. the encoder of ntms is designed to handle the sparse bow data and cannot directly take into account the dense contextual document embeddings from plms. to address these limitations, we build upon the framework of variational autoencoders (vae) @xcite and propose a sentence-aware encoder for incorporating external semantic knowledge into topic models. the proposed approach integrates the advantages of ntms and plms as encoders. specifically, the encoder of the topic model is responsible for processing document-level bow data like most ntms, while the plm is used to encode sentence-level sequences.
figure: architecture of sae-ntm. the sentence-aware encoder deals with the bow data x_i and the sentence sequences {s_i^1, ..., s_i^m} of the i-th document, while variational inference reconstructs the bow data x_i^rec from the document representation d_i.
to summarize, the main contributions of this paper are as follows: (1) we propose a novel framework, sae-ntm (sentence-aware encoder for neural topic modeling), which leverages cross-attention to incorporate external semantic knowledge in a sentence-aware manner. (2) quantitative and qualitative experiments demonstrate that our proposed approach significantly outperforms the existing state-of-the-art topic models in topic coherence. (3) we show that the bow-guided attention yields practical latent document-topic features, achieving better performance on the document classification task.
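a minimal pytorch sketch of the bow-guided cross-attention idea from contribution (1) is given below; the dimensions, projections, and the way the attended context is combined with the bow features are our own simplifying assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class SentenceAwareAttention(nn.Module):
    # the bow vector forms the query; plm sentence embeddings form keys/values
    def __init__(self, vocab_size, sent_dim, hidden_dim):
        super().__init__()
        self.q_proj = nn.Linear(vocab_size, hidden_dim)
        self.k_proj = nn.Linear(sent_dim, hidden_dim)
        self.v_proj = nn.Linear(sent_dim, hidden_dim)

    def forward(self, bow, sent_emb, sent_mask):
        # bow: (b, vocab); sent_emb: (b, m, sent_dim); sent_mask: (b, m) bool
        q = self.q_proj(bow).unsqueeze(1)                      # (b, 1, h)
        k, v = self.k_proj(sent_emb), self.v_proj(sent_emb)    # (b, m, h)
        scores = (q @ k.transpose(1, 2)) / k.size(-1) ** 0.5   # (b, 1, m)
        scores = scores.masked_fill(~sent_mask.unsqueeze(1), float("-inf"))
        ctx = (scores.softmax(-1) @ v).squeeze(1)              # (b, h)
        # concatenate bow features with the attended sentence context
        return torch.cat([self.q_proj(bow), ctx], dim=-1)      # document repr. d
```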
## experiments
in this section, we design empirical experiments to answer the following questions of concern in topic modeling. first, how effectively does sae-ntm perform quantitatively and qualitatively in terms of topic quality? second, how does sae-ntm perform in automated document-topic inference for downstream tasks? further details about the impact of external knowledge on topic modeling can be found in appendix a.
## conclusions
in this paper, we propose sae-ntm, a sentence-aware encoder framework for neural topic modeling that incorporates external knowledge into neural topic models. the proposed method can capture document information by performing attention over sequential sentences in a bag-of-words-guided manner. extensive experiments have shown that our framework can achieve state-of-the-art performance in topic coherence and encode better latent document-topic features. in the future, we would like to explore the possibility of integrating our approach with neural topic models built on other frameworks, such as generative adversarial training @xcite .
| 21,194
|
10
| 2,024
|
Speaker Identification: Opening the Black Box
|
The explainability of deep learning systems has become a central issue in recent years, in European law as well as in forensic science. The BA-LR approach introduces a new modelling paradigm for speaker identification: it automatically surfaces the attributes shared by a group of speakers that underlie the discrimination between them. The resulting score can be decomposed at the attribute level, which significantly increases the explainability of the method. This study proposes to complete the characterization of the attributes obtained by the BA-LR with voice quality parameters. The analysis suggests that several attributes use phonation types to group speakers, encoding humanly perceptible information. This article thus lays the groundwork for the acoustic analysis of the attributes, which will ultimately allow the BA-LR to be used for voice profiling.
|
https://aclanthology.org/2024.jeptalnrecital-jep.10
|
## introduction
automatic speaker recognition consists in recognizing or verifying a person's identity from a sample of their voice. voice comparison falls within this field and determines whether two speech recordings were produced by the same speaker or by two different speakers. state-of-the-art speaker recognition systems are based on deep learning models trained on large speaker databases (@xcite , @xcite ). their performance is excellent @xcite , but they provide no information that could explain their score @xcite . explainability is nevertheless a central issue for speaker verification, for example from a forensic perspective (@xcite , ben amor et al. (2023)) or, more generally, for all so-called "high risk" activities in the ai context @xcite . in response to this limitation, the ba-lr approach was recently proposed @xcite . it represents an audio recording by the presence or absence of voice attributes in it. the attributes come from a closed set determined automatically (bottom-up) using a deep learning approach applied to a database of more than one million recordings. as its score, ba-lr proposes a likelihood ratio (lr) between the probability that a single speaker produced both recordings and the opposite hypothesis. this score is based only on the presence (activation) or absence of the attributes in the two files and on the characteristics of the attributes. this paradigm favours intrinsic explainability because the contribution of each attribute to the decision is known and derives from the attribute's characteristics, learned during training (rarity and extraction reliability).
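since the score is described as decomposable at the attribute level, the scoring scheme can be sketched as follows; the multiplicative form and the notation are our own assumptions, not necessarily the exact BA-LR formulation:

$$\mathrm{LR} = \frac{p(E \mid H_{ss})}{p(E \mid H_{ds})} = \prod_{a \in \mathcal{A}} \mathrm{LR}_a,$$

where $E$ is the pair of recordings, $H_{ss}$ and $H_{ds}$ are the same-speaker and different-speaker hypotheses, $\mathcal{A}$ is the closed set of attributes, and each $\mathrm{LR}_a$ depends only on the presence or absence of attribute $a$ in the two recordings and on its learned characteristics (rarity and extraction reliability).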
characterizing the nature of the information encoded by these attributes, which are discovered by an automatic system, is important within this approach. we conjecture that voice quality is among the parameters taken into account by the system when defining the attributes. indeed, kreiman (@xcite , @xcite ) defines voice quality as the way "speakers project their identity, that is, their physical, psychological, and social characteristics, to the world". voice quality can be decomposed into various acoustic and perceptual correlates @xcite and is linked to linguistic parameters such as nasality or phonation type @xcite . phonation types describe the different possible configurations of the glottis during phonation. they include modal voice, but also creaky voice (the presence of irregular voiced vibrations) and breathy voice (a substantial presence of noise in the signal), as well as tense and lax voice @xcite . phonation types can be linked to variation of various kinds: sex is an important factor, as more breathiness is often found in women's voices owing to the incomplete closure (glottal chink) of their vocal folds @xcite . the language spoken by a speaker (benoist-lucy & pillot-loiseau, 2013) and their membership of a social or geographic community are other influences on phonation type, as in the case of young american women using creaky voice @xcite . this study is organized around two goals. the first is the characterization of voice attributes that are discriminant with respect to the speaker, using voice quality parameters, here phonation types. the second is the development of a coherent methodology for studying attributes discovered by an automatic process. the correlation between the different attributes extracted by the ba-lr and the phonation types is studied, followed by an analysis revealing the acoustic parameters taken into account by the automatic system. the links with other attributes and with speaker sex are also analysed.
## conclusion
the results of this study show that several ba-lr attributes are correlated with voice quality parameters, here the creaky and breathy phonation types (section 3). sex is also discriminant information for three of the eight attributes studied, as is the male/female prototypicality of the voices for one of them (subsection 3.3). the attributes interact with one another, and many parameters must be taken into account in order to understand their activation conditions. these results encourage the annotation of other perceptible characteristics in the recordings, in particular at the level of voice quality, in order to characterize further attributes. using the ba-lr on another, multi-session corpus and comparing the results with ptsvox is the next line of study. the methodology followed in this article has established an interaction between attributes extracted by an automatic system and voice quality parameters. this approach gives the system greater explainability, which is useful in a judicial setting. a better understanding of the attributes from a perceptual angle makes it possible both to assess the proximity between human perception and the neural network behind the ba-lr, and to document the voice quality parameters used by this tool: this makes it usable for speaker profiling tasks, since voice quality can provide physical and cultural information about the speaker under study. this would ultimately enable the automatic extraction of a speaker profile from a simple audio recording.
| 33,565
|
261
| 2,022
|
The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems
|
Conversational agents have come increasingly closer to human competence in open-domain dialogue settings; however, such models can reflect insensitive, hurtful, or entirely incoherent viewpoints that erode a user’s trust in the moral integrity of the system. Moral deviations are difficult to mitigate because moral judgments are not universal, and there may be multiple competing judgments that apply to a situation simultaneously. In this work, we introduce a new resource, not to authoritatively resolve moral ambiguities, but instead to facilitate systematic understanding of the intuitions, values and moral judgments reflected in the utterances of dialogue systems. The Moral Integrity Corpus, MIC, is such a resource, which captures the moral assumptions of 38k prompt-reply pairs, using 99k distinct Rules of Thumb (RoTs). Each RoT reflects a particular moral conviction that can explain why a chatbot’s reply may appear acceptable or problematic. We further organize RoTs with a set of 9 moral and social attributes and benchmark performance for attribute classification. Most importantly, we show that current neural language models can automatically generate new RoTs that reasonably describe previously unseen interactions, but they still struggle with certain scenarios. Our findings suggest that MIC will be a useful resource for understanding language models’ implicit moral assumptions and for flexibly benchmarking the integrity of conversational agents. To download the data, see https://github.com/GT-SALT/mic
|
https://aclanthology.org/2022.acl-long.261
|
## related work
there is a long-standing interest in the moral responsibility of ai @xcite @xcite @xcite . work in human-computer interaction (hci) reveals that, before users feel they can trust a conversational agent, they will often probe it to identify the limitations which bound its abilities, competence @xcite , and apparent integrity @xcite @xcite . it is reasonable to expect adversarial probes and strategically-chosen questions @xcite , which can prompt toxic or immoral behaviors, even in "detoxified" models that were trained on carefully sanitized inputs @xcite . there are a number of promising methods for keeping chatbots safe, including attribute conditioning @xcite , safety classifiers @xcite , controlled language generation @xcite @xcite , and reinforcement learning @xcite @xcite . the moral integrity corpus can help facilitate each of these efforts. specifically, our data can help train safety classifiers, provide alternative responses (via the revised response), fit the "steering" distribution in controlled generation, or train penalty models in a policy gradient rl approach. because our dataset makes moral judgments explicit via interpretable rules of thumb (rots), this resource can guide more flexible solutions that can accommodate different moral viewpoints. our present formalism builds on social-chem-101 @xcite , which has 292k rules of thumb targeting the morality of narrative situations and the specific actions of characters in a story (e.g., rocstories; @xcite ). other recent collections of moral judgments are also based on narrative text, such as moral stories @xcite and ethics @xcite . we, on the other hand, focus on minimal chit-chat-style conversations, with a social chatbot replying to an open-ended prompt. related efforts focus more on classification tasks, like choosing between two moral alternatives @xcite , reflecting value judgments, or parsing stories about conflict and trying to identify the character in each story who is most worthy of blame (scruples; @xcite ). most recently, @xcite combined the social-chem-101, moral stories, ethics, and scruples datasets, together with the social bias inference corpus @xcite , to train a single commonsense moral model known as delphi. delphi is designed to produce universal moral judgments (e.g., "it is bad") concerning hypothetical narrative situations (e.g., killing a bear to save your child). @xcite and others have criticized this approach as overly reductive and misleading, assigning global authority to the prescriptive normative judgments of a single ai. our approach differs in important ways. firstly, our approach carries different ethical assumptions than those of delphi (see also section 7). the moral integrity corpus is a collection of rots designed not to support authoritative moral judgments, but rather to facilitate descriptive explanations of the moral assumptions that already exist implicitly in foundation models. in future work, these explanations may be used to guide chatbot moderation systems that are sensitive to ideological and political difference. secondly, our contributions focus on the dialogue setting, which presents unique challenges (section 6.2) and has previously been overlooked.
## moral annotation framework
the primary goal of this work is to provide a resource that allows researchers to systematically observe the moral assumptions of open-domain dialogue systems. a dialogue trajectory may be long and complex @xcite , thus here we focus on a minimal dialogue unit: a simple tuple with an opinion question as the prompt and the chatbot's response to that prompt.
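for illustration, a single annotated unit can be pictured as follows; the prompt, reply, rot and attribute values here are invented for this sketch and are not drawn from mic:

```python
# one invented prompt-reply pair with an rot annotation and its attribute breakdown
example = {
    "prompt": "do you think it's okay to read a partner's messages?",
    "reply": "sure, if you're curious you should just look.",
    "rot": "it is wrong to violate someone's privacy without consent.",
    "attributes": {                       # hypothetical attribute names/values
        "moral_foundations": ["care-harm", "fairness-cheating"],
        "global_consensus": "most people would agree",
        "violation_severity": "moderate",
    },
}
```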
a dialogue trajectory may be long and complex @xcite , thus here we focus on a minimal dialogue unit: a simple tuple with an opinion question for a prompt, and the chatbot's response to that prompt. in order to model the inferences that humans would make about "right and wrong" in previously unseen conversations, we gather a large and foundationally diverse collection of moral judgments about the chatbot's responses. we use the "rule of thumb" (rot) formalism introduced in @xcite to describe the moral content of a chatbot's response and further categorize rots according to their underlying moral foundations @xcite , their global consensus, and violation severity. in so doing, we extend the social-chem-101 @xcite framework to a conversational setting. ## the moral integrity corpus the moral integrity corpus is designed for benchmarking the integrity of chatbot responses to both natural and adversarial prompts. we train mturk workers to annotate prompt-reply tuples: an open-ended query and an ai-generated response to that query. in the following sections, we detail the data collection process. ## models the moral integrity corpus allows us to build models that automatically describe a chatbot's moral assumptions. if we can generate normative rules and also categorize those rules by severity, consensus, and moral foundations, future studies can combine these skills to build a moral reasoning and moderation system that is sensitive to ideological and political difference. let (q, a, r, b r ) be a single annotation tuple in the mic for prompt q and chatbot reply a, with an rot annotation r, and an attribute breakdown b r . using the question and answer, we fine-tune language models to generate a relevant rot (section 5.1). then we train separate transformer-based classifiers to predict the attributes b r for a given rot r (section 5.2). we use the same 80-10-10 split for train-dev-test in all experiments and ensure that no prompt-reply pair is contained in multiple splits. ## discussion and conclusion this work introduces mic , the moral in-tegrity corpus, which is a large-scale resource for understanding the moral assumptions and bench-marking the normative social commonsense reasoning of conversational agents, particularly in open-domain "chit chat" settings. mic contains 38k chatbot replies to human-authored prompts, and these replies are annotated with a total of 99k rules of thumb (rots) that determine what may be seen as right or wrong about the reply. with 114k total prompt-reply pairs, we have only 15k duplicate rots (or 13%), suggesting that this is a rich and challenging task. we train moral transformers to automatically generate new rots that describe previously unseen human-chatbot interactions, and we find that our best models make judgments that can be nearly indistinguishable from human annotations in terms of quality, fluency, and relevance. however, even the best-performing model still generates irrelevant rots nearly 28% of the time. this suggests that the proposed task is not yet solved and that mic will be a useful resource for training moral conversational agents. in future work, we will use the moral integrity corpus to train penalty models in a policy gradient reinforcement learning approach for demoting immoral generations. other work can also use mic to train safety classifiers and guide controllable language generation systems towards ethical behaviors. these models can then guide a moderation system that is sensitive to ideological and political differences. 
## limitations any collection of moral judgments will reflect the annotators' worldviews. mturk workers generally tend to be less religious, more educated, and more likely to be unemployed than the general population @xcite . we limited our collection to english-speaking workers living in the 21st-century united states, and at this time, these u.s. workers were most likely male, in their early 20s or 30s, and married, with at least one child @xcite . future studies can extend our framework to other cultures and geographic regions. additionally, our human prompts come from reddit, which is skewed towards younger or middle-aged males @xcite . furthermore, we recognize that even regionally-localized judgments may shift with context over time, and a potentially shifting target demands adaptable moral agents. despite this limitation, it is clear that plausible moral judgments are bounded by the data available in the conversation, and we argue that, with respect to moral foundations theory, our data is representative. if we consider the marijuana example from section 3.1, we see an appeal to care/harm regarding substances, a judgment on liberty or free personal choice, and appeals to authority or civil law. although the relative weights assigned to each consideration may shift, we would not expect time to drastically change the elemental factors or available data involved in reasoning about the decision to smoke.
| 13,062
|
449
| 2,023
|
End-to-End Single-Channel Speaker-Turn Aware Conversational Speech Translation
|
Conventional speech-to-text translation (ST) systems are trained on single-speaker utterances, and they may not generalize to real-life scenarios where the audio contains conversations by multiple speakers. In this paper, we tackle single-channel multi-speaker conversational ST with an end-to-end and multi-task training model, named Speaker-Turn Aware Conversational Speech Translation, that combines automatic speech recognition, speech translation and speaker turn detection using special tokens in a serialized labeling format. We run experiments on the Fisher-CALLHOME corpus, which we adapted by merging the two single-speaker channels into one multi-speaker channel, thus representing the more realistic and challenging scenario with multi-speaker turns and cross-talk. Experimental results across single- and multi-speaker conditions, and against conventional ST systems, show that our model outperforms the reference systems on the multi-speaker condition, while attaining comparable performance on the single-speaker condition. We release scripts for data processing and model training.
|
https://aclanthology.org/2023.emnlp-main.449
|
## introduction speech translation (st) has seen wide adoption in commercial products and the research community @xcite due to its effectiveness in bridging language barriers. st aims to translate audio in a source language into text in a target language. this problem was tackled by a cascaded approach that pipelines automatic speech recognition (asr) and machine translation (mt) over the last few decades @xcite @xcite . however, end-to-end speech translation (e2e-st) systems @xcite have emerged as an alternative. [figure 1: a two-speaker multi-turn conversational segment. previous work focuses on separated channels without considering cross-talks and speaker-turns (top). stac-st targets a more challenging scenario where multiple speakers converse with occasional cross-talks due to merged channels (bottom).] despite significant recent advances in e2e-st @xcite , most st systems to date have focused on translating isolated speech utterances from monologue speech @xcite , read speech @xcite or prompted speech @xcite . being trained on single-turn utterances, these systems may lack the ability to handle real-life scenarios in which multiple speakers converse, and sometimes overlap, in the same audio channel @xcite . in this work, we tackle the more challenging task of multi-speaker conversational st. we refer to it as multi-turn & multi-speaker (mt-ms), as opposed to single-turn, which most st systems implicitly assume. this is illustrated in figure 1 , where a "conversation" between two speakers recorded with separate channels (top) becomes more difficult to translate if the channels are merged (bottom), due to the introduction of speaker-turns and cross-talks. in particular, st with cross-talks and speaker-turns is difficult because speech content from different sentences is mixed up or switched. while mt-ms speech has been studied in asr @xcite , to the best of our knowledge, this is the first paper that investigates it in end-to-end st. we tackle mt-ms st with an approach we named speaker-turn aware conversational speech translation (stac-st). stac-st is a multi-task training framework that combines asr, st and speaker-turn detection using special tokens in a serialized labeling format. it is inspired by a recent speech foundation model, whisper @xcite , which jointly trains asr, x-to-english st, voice activity detection, and language identification with 680k hours of speech data using labeling-based multitask learning. our contributions are as follows: ## speaker-turn aware conversational speech translation (stac-st) this section describes our end-to-end multi-task learning model for multi-turn multi-speaker conversational st. ## experimental setup this section introduces the datasets and metrics we used for evaluation, as well as architecture and training details of stac-st. ## conclusions in this work, we present stac-st, an end-to-end system designed for single-channel multi-turn & multi-speaker speech translation that uses a multi-task training framework to leverage both asr and st datasets. we demonstrate that stac-st generalizes to both standard pre-segmented st benchmarks and multi-turn conversational st, the latter being a more challenging scenario. stac-st is also shown to learn the task of speaker change detection, which helps multi-speaker st and asr. we investigate different aspects of stac-st, including the impact of model and data size, automatic segmentation for long-form conversational st, and zero-shot multi-turn & multi-speaker st without specific training data.
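one plausible shape for the serialized labeling format is sketched below; the special-token names and the exact serialization order are assumptions, since the paper only states that asr, st, and speaker-turn detection are combined via special tokens.

```python
from typing import List, Tuple

# Hypothetical special tokens; STAC-ST's actual token inventory may differ.
TURN = "<turn>"   # marks a speaker change
ASR = "<asr>"     # task tag: transcribe
ST = "<st>"       # task tag: translate

def serialize(utterances: List[Tuple[str, str, str]], task: str) -> str:
    """Build a serialized label string from (speaker, transcript, translation)
    tuples ordered by start time, inserting a turn token at speaker changes."""
    tag = ASR if task == "asr" else ST
    parts, prev_speaker = [tag], None
    for speaker, transcript, translation in utterances:
        if prev_speaker is not None and speaker != prev_speaker:
            parts.append(TURN)
        parts.append(transcript if task == "asr" else translation)
        prev_speaker = speaker
    return " ".join(parts)

segment = [
    ("A", "hola como estas", "hi how are you"),
    ("B", "bien y tu", "fine and you"),
    ("A", "muy bien", "very well"),
]
print(serialize(segment, "st"))
# <st> hi how are you <turn> fine and you <turn> very well
```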
overall, this work sheds light on future work towards more robust conversational st systems that can handle speaker-turns and cross-talks. ## limitations 1. our primary test sets, fisher and callhome, cover only one translation direction (@xmath0). the only other public conversational st dataset we are aware of is mslt @xcite , but it only contains independent utterances, which is far from representing a realistic mt-ms use case. we call for more publicly available long-form conversational st data under a friendly license. 2. due to the same limitation of publicly available datasets, we only explore conversations between two speakers. 3. we segment the test sets based on human annotations. despite being the best choice for the mt-ms data in our study (§5.3.1), this is not a realistic scenario for testing. we leave improving segmentation on noisy long-form conversational audio as future work. 4. we segment long-form audio files into pieces of up to 30s following @xcite , but we do not use the preceding segments as context. we focus on improving the translation quality of conversations through speaker-turn and cross-talk detection, yet using the context information could also help. in addition, within each mt-ms segment, the inter-utterance context could have already been leveraged @xcite . we leave analysis of the inter- and intra-segment context as future work. 5. we only test the transformer architecture, as we focus on solving a challenging mt-ms st task with multi-task learning, which is orthogonal to the architecture choice. we leave exploring other architecture options, such as conformer @xcite , hyperconformer @xcite or conmer @xcite , as future work.
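for limitation 4, a fixed-length chunking of long-form audio could look like the sketch below; this is a simplified stand-in, as the paper follows prior work and human annotations rather than naive fixed windows.

```python
import numpy as np

def chunk_audio(samples: np.ndarray, sample_rate: int, max_seconds: float = 30.0):
    """Split a mono waveform into consecutive pieces of at most max_seconds,
    a simplified stand-in for segmenting long-form conversational audio."""
    step = int(max_seconds * sample_rate)
    return [samples[i : i + step] for i in range(0, len(samples), step)]

audio = np.zeros(16_000 * 95)             # 95 s of silence at 16 kHz as a toy input
pieces = chunk_audio(audio, 16_000)
print([len(p) / 16_000 for p in pieces])  # [30.0, 30.0, 30.0, 5.0]
```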
| 22,225
|
18
| 2,024
|
Benchmarking Low-Resource Machine Translation Systems
|
Assessing the performance of machine translation systems is of critical value, especially to languages with lower resource availability. Due to the large evaluation effort required by the translation task, studies often compare new systems against single systems or commercial solutions. Consequently, determining the best-performing system for specific languages is often unclear. This work benchmarks publicly available translation systems across 4 datasets and 26 languages, including low-resource languages. We consider both effectiveness and efficiency in our evaluation. Our results are made public through BENG, a FAIR benchmarking platform for Natural Language Generation tasks.
|
https://aclanthology.org/2024.loresmt-1.18
|
## introduction the machine translation (mt) task is increasingly relevant in today's connected world as accessibility enables knowledge transfer. hence, mt systems are recognized as prime tools in the natural language processing (nlp) domain @xcite . in recent years, neural machine translation (nmt) @xcite has led the field as it achieves state-of-the-art performance for many language pairs @xcite . however, nmt systems can become computationally demanding, and the abundance of new systems also complicates cross-system comparison. as a result, newly-released systems often compare their performance against single systems @xcite . furthermore, recent system analyses also focus on assessing the capability of commercial translation solutions @xcite . to the best of our knowledge, no work exclusively considers open-source translation systems. this leads to a lack of clarity when determining the best-performing system and when identifying shortcomings among existing translation systems, an especially critical task for low-resource languages (lrls). while the translation task is vital to progress in general, it remains largely unfeasible for the 7,000+ languages in the world. from these, only close to 2,500 are represented in the nlp field, with 88% considered to be low-resource. lrls have a minimal resource availability that causes them to be largely untouched by the benefits of language technology @xcite . with our work, we aim to contribute to a more complete picture of the current state of the art of machine translation with a focus on lrls. we compare four open-source nmt systems (libretranslate foot_1 , opus mt @xcite , nllb @xcite , and mbart50 @xcite ) on four parallel machine-translation benchmark datasets (opus100 @xcite , europarl @xcite , iwslt2017 @xcite , and flores-200 @xcite ). our evaluation comprises data from 26 different languages. our results suggest that using languages with lower resource availability does not necessarily translate to lower system performance. however, we did observe more substantial variations in the systems' performance for these languages. our analysis also showed that libretranslate had the highest token throughput among the evaluated systems. some systems showed proficiency in certain languages, while others performed better according to a certain dataset. our experiments are shared via beng @xcite , an open-source benchmarking platform that improves the accessibility of experiment results according to the fair data principles @xcite . ## preliminaries and related work machine translation (mt) is the process of translating from a source language into a target language autonomously, i.e., without human intervention @xcite . this can be achieved through different approaches. @xcite divide mt techniques into rule- and corpus-based approaches. corpus-based approaches can be further divided into example-based, statistical, and, more recently, neural approaches. in this work, we evaluate approaches of the latter category with a focus on low-resource languages. we describe both further within this section, along with relevant mt tools and platforms. ## conclusion we compared four open-source nmt systems on high- and low-resource languages regarding their effectiveness and efficiency, filling a gap in the literature that focused on the evaluation of single systems or the comparison of commercial solutions. our experiments show that open-source systems can perform well on lrls, showcasing the nlp community's efforts in bridging the gap.
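a minimal harness for the two evaluation axes used here, effectiveness and efficiency, might look as follows; sacrebleu is assumed as the quality metric and whitespace tokens as the throughput unit, which may differ from the paper's exact setup.

```python
import time
import sacrebleu

def benchmark(translate, sources, references):
    """Measure effectiveness (corpus BLEU) and efficiency (tokens/second)
    for a callable `translate` mapping a list of sources to hypotheses."""
    start = time.perf_counter()
    hypotheses = translate(sources)
    elapsed = time.perf_counter() - start
    bleu = sacrebleu.corpus_bleu(hypotheses, [references]).score
    n_tokens = sum(len(h.split()) for h in hypotheses)
    return bleu, n_tokens / elapsed

# Toy stand-in for a real system such as OPUS MT or NLLB.
identity = lambda xs: xs
print(benchmark(identity, ["hello world"], ["hello world"]))
```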
however, the performance of the systems in these languages remains variable. assessing the impact of the domain and genre of the training datasets on the translation quality remains a question for future work. despite the existence of numerous evaluation frameworks for mt, we used beng to share the evaluation data via a common space and hope that it boosts comparability across systems and datasets. the influence of language families and writing systems on the translation consistency of these systems requires further investigation.
| 33,797
|
98
| 2,021
|
Learning to Answer Psychological Questionnaire for Personality Detection
|
Existing text-based personality detection research mostly relies on data-driven approaches to implicitly capture personality cues in online posts, lacking the guidance of psychological knowledge. The psychological questionnaire, which contains a series of dedicated questions highly related to personality traits, plays a critical role in self-report personality assessment. We argue that the posts created by a user contain critical contents that could help answer the questions in a questionnaire, resulting in an assessment of his personality by linking the texts and the questionnaire. To this end, we propose a new model named Psychological Questionnaire enhanced Network (PQ-Net) to guide personality detection by tracking critical information in texts with a questionnaire. Specifically, PQ-Net contains two streams: a context stream to encode each piece of text into a contextual text representation, and a questionnaire stream to capture relevant information in the contextual text representation to generate potential answer representations for a questionnaire. The potential answer representations are used to enhance the contextual text representation and to benefit personality prediction. Experimental results on two datasets demonstrate the superiority of PQ-Net in capturing useful cues from the posts for personality detection.
|
https://aclanthology.org/2021.findings-emnlp.98
|
## introduction as a psychological conception, personality aims to explain human behaviors in terms of a few stable and measurable individual characteristics @xcite . the study of personality is fundamental to psychology, and personality detection @xcite has benefited many applications such as dialogue systems @xcite , recommendation systems @xcite , and suicide risk assessment @xcite . canonical approaches to personality testing are self-report questionnaires. recent years have witnessed an increasing interest in automatically identifying one's personality traits based on her/his social media posts @xcite @xcite . to encode the input posts and obtain their context representations, most of these methods employ deep learning models such as lstms @xcite , cnns @xcite and pre-trained language models (ptms) @xcite . they generally rely on the models to capture potential personality cues implicitly from the texts in a data-driven manner, without any guidance of psychological domain knowledge. as a result, the performance of these models is largely limited by the availability of training data and the learning capability of the models. we observe from real data that the posts created by a user contain some critical contents that could help answer the questions in a questionnaire. as the example shows in figure 1 , there is a set of posts from a user and a question "are you usually a good mixer with groups of people or rather quiet and reserved?" from an mbti @xcite questionnaire. the question is also associated with two choices, "quiet and reserved." and "a good mixer.", which are intended to investigate whether the user's personality trait is introversive (the former) or extroversive (the latter). from the posts, we can see that the contents "always been very reserved", "need time alone" and "better to be single" strongly indicate that the user's personality trait is introversive. therefore, we argue that it is possible to utilize the questionnaire, which contains questions that are highly related to personality traits, to guide a model to capture critical information in the posts for personality detection. for this purpose, we propose a new model named psychological questionnaire enhanced network (pq-net) for text-based personality detection. specifically, pq-net consists of two streams: a context stream and a questionnaire stream. for the context stream, a ptm-based encoder is employed to encode each post and create its contextual representation. for the questionnaire stream, it first encodes each question by a question encoder and each candidate answer by a choice encoder, and then employs a cross-attention mechanism with supervision to enable the model to learn a potential answer representation for each question by choosing the correct answer based on the post representations. we then concatenate the post representations and the potential answer representations to predict the user's personality traits. under the guidance of the questionnaire, our pq-net is able to capture personality-related cues from the posts in an explicit manner rather than learning them implicitly. extensive experiments on the kaggle and pandora datasets show that pq-net consistently outperforms existing competitors with superior performance. further analyses also demonstrate that the questionnaire and the two-stream structure all play a crucial role in pq-net, and that the user representations enhanced by pq-net are more inductive and distinguishable in comparison to the baselines.
lastly, we show that the cues obtained by pq-net are more interpretable for personality detection. the contributions of this paper are threefold: ## experiments in this section, we first introduce the details of the personality benchmarks, questionnaire and baseline models adopted in our study, and then report and discuss our experimental results. ## related work in recent years, numerous efforts have been devoted to automatically detecting one's personality from his/her online texts @xcite @xcite . the early works rely on hand-crafted features @xcite @xcite (amirhosseini and kazemian, 2020), which include various psycholinguistic features extracted by liwc and statistical features extracted by bag-of-words models @xcite . nevertheless, feature engineering-based methods are limited in their capability of extracting many useful implicit features @xcite . meanwhile, deep neural networks have been applied to personality detection by implicitly extracting features from the texts @xcite . for example, @xcite and @xcite applied lstms to encode each post with the glove embeddings @xcite , while others employed bert to encode each post. moreover, hierarchical structures were also applied to merge the texts into a user representation. for example, @xcite first encoded each post via a gated recurrent unit (gru) @xcite with word attention, and then passed the encodings to a second gru with post attention to aggregate the posts. @xcite designed an inception @xcite based attrcnn module to encode each post and then applied a convolutional neural network (cnn) @xcite to capture interactions between posts. despite numerous successes, deep neural-network solutions operate in a data-driven fashion and lack the guidance of psychological domain knowledge. ## conclusion in this paper, we proposed a psychological questionnaire enhanced network (pq-net) for personality detection. pq-net aims to track personality-related cues from online posts in an explicit manner by considering the connections between the posts and a psychological questionnaire. specifically, pq-net comprises a context stream and a questionnaire stream. the former encodes each post to obtain its contextual representation, and the latter learns to capture critical information in the posts to produce a potential answer representation for each question in the questionnaire. finally, the potential answer representations are used to enhance the contextual post representations to predict the personality traits. experimental results on two benchmarks show that pq-net outperforms the baselines significantly. besides, further studies and analyses demonstrate that the representations enhanced by pq-net are more inductive and distinguishable, providing interpretability for the personality detection process. our experiments are conducted on publicly available data and have not illegally detected any user privacy information. we realize that the evaluation results in this paper may have certain ethical risks and thus require that they should be used in a strictly controlled manner, subject to the approval of the institutional review board. anyone who uses our work to secretly infer people's personality characteristics is strictly prohibited.
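to make the two-stream design concrete, here is a minimal sketch of the questionnaire stream's cross-attention, where question representations attend over post representations to form potential answer representations; the dimensions, pooling, and questionnaire size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class QuestionnaireStream(nn.Module):
    """Simplified sketch of the questionnaire stream: each encoded question
    attends over the user's encoded posts to build a potential answer
    representation. Dimensions are illustrative, not the paper's."""
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, questions: torch.Tensor, posts: torch.Tensor) -> torch.Tensor:
        # questions: (batch, n_questions, dim); posts: (batch, n_posts, dim)
        answers, _ = self.cross_attn(query=questions, key=posts, value=posts)
        return answers  # one potential answer representation per question

posts = torch.randn(2, 50, 768)       # 50 encoded posts per user (toy)
questions = torch.randn(2, 36, 768)   # assumed questionnaire of 36 questions
answers = QuestionnaireStream()(questions, posts)

# Concatenate pooled post and answer representations for trait prediction.
user_repr = torch.cat([posts.mean(dim=1), answers.mean(dim=1)], dim=-1)
print(user_repr.shape)  # torch.Size([2, 1536])
```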
| 9,648
|
23
| 2,024
|
Optimizing LLM Based Retrieval Augmented Generation Pipelines in the Financial Domain
|
Retrieval Augmented Generation (RAG) is a prominent approach in real-world applications for grounding large language model (LLM) generations in up-to-date and domain-specific knowledge. However, there is a lack of systematic investigation of the impact of each component (retrieval quality, prompts, generation models) on the generation quality of a RAG pipeline in real-world scenarios. In this study, we benchmark 6 LLMs in 15 retrieval scenarios, exploring 9 prompts over 2 real-world financial domain datasets. We thoroughly discuss the impact of each component of the RAG pipeline on answer generation quality and formulate specific recommendations for the design of RAG systems.
|
https://aclanthology.org/2024.naacl-industry.23
|
## introduction recent years have seen tremendous improvement in the ability of large language models (llms) such as @xcite and llama-2 @xcite to address users' questions/queries in diverse domains (medical questions, math problems, code assistants, etc.). despite llms acquiring immense parametric world knowledge during pre-training, when adapting to real-world applications, their lack of customized domain-specific knowledge or knowledge of recent events @xcite frequently results in outdated responses or baseless responses not grounded in the user's domain of interest, also termed hallucinations @xcite @xcite . hallucinations contribute to a lack of trust with users, and this unreliability is one of the biggest hindrances in the responsible deployment of llm-based systems for critical business applications in the financial domain. retrieval augmented generation (rag) is the current go-to approach to connect llms to live/updated information sources. existing works @xcite @xcite show rag can reduce hallucinations and improve answer quality, without the need for highly expensive and sometimes brittle domain-specific fine-tuning. given a user query, a typical rag system (figure 1 ) employs a retriever system to fetch a list of documents likely relevant to the query from an information source (retrieval). the documents are then fed into the context of the llm, with the user's query / conversation history, and specific instructions / prompts on how to generate a response "grounded" in the retrieved information @xcite . while there is a growing number of proposals @xcite to improve rag systems (see the survey from @xcite ), very few studies @xcite systematically investigate the impact of each component (retriever, prompts, models) on answer generation quality and the interactions among these various components. the goal of this paper is to evaluate the efficacy and limits of rag pipelines for question answering (q&a) systems in the highly specialized financial domain. in this study, we benchmark llms' answer generation quality and explore the following aspects: (i) comparing different generative llms as answer generation models against each other and baseline (purely extractive) models; (ii) examining how various llms handle differences in the quality of information retrieval; (iii) exploring the impact of varying prompts on the answer quality of rag pipelines. in line with our objectives, we curated two datasets from the banking sector featuring real user queries. these datasets were used to design test scenarios that mimic the retrieval of information at varying levels of quality. additionally, we crafted prompts with distinct characteristics (e.g., level of detail in instructions, requirements for citations). [figure 1: overview of the experiment framework: simulated retrievers at different quality levels (gold-document presence rate: retrieve only (default) vs. + gold docs (oracle); display order: retrieved order, shuffled order, gold doc first, gold doc last; number of retrieved docs: top 3, 5, 20), prompting variants (simple, verbose, citation, quoting, ...), and answer generation with gpt-4, gpt-35-turbo, llama-2-13b, and llama-2-13b-chat.] ## experiment framework this section introduces the design of the evaluation framework (see figure 1 ). to summarize, we ran 1620 experiments to assess 6 llms in 15 retrieval conditions using 9 prompts over 2 datasets for 2 performance aspects.
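the simulated retrieval conditions can be reproduced with a small harness like the one below; this is an illustrative reconstruction of the conditions named in figure 1 (gold-document presence, display order, number of documents), not the authors' code.

```python
import random
from typing import List

def build_context(gold: str, distractors: List[str], k: int,
                  include_gold: bool, gold_position: str = "first",
                  seed: int = 0) -> List[str]:
    """Assemble a retrieval context of k documents under a simulated
    condition: with or without the gold document, placed first, last,
    or shuffled among the distractors."""
    rng = random.Random(seed)
    docs = distractors[: k - 1] if include_gold else distractors[:k]
    if include_gold:
        if gold_position == "first":
            docs = [gold] + docs
        elif gold_position == "last":
            docs = docs + [gold]
        else:  # "shuffled"
            docs = docs + [gold]
            rng.shuffle(docs)
    return docs

ctx = build_context("gold passage", [f"distractor {i}" for i in range(10)],
                    k=5, include_gold=True, gold_position="last")
print(ctx)
```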
## conclusion we conducted a thorough investigation of the influence of several components of a rag pipeline on the overall generation quality. based on our findings, retrieval optimization is an important part of rag pipeline design, even with high-quality llms like gpt-4. firstly, we recommend prioritizing retrieval recall while tuning retrieval systems, as llms exhibit pseudo-helpfulness when the relevant gold document(s) are not retrieved. additionally, improving the precision of retrieval, either by using re-rankers or fine-tuned retrievers, will likely improve performance, as they improve the gold document(s) rank in the retrieved list. we also find rag systems to be sensitive to the presence of distractors in the context. we find a big delta between vendor llms (openai) and smaller-scale open-source alternatives in our experiments, with respect to sensitivity to prompting, overall quality, and instruction following on domain-specific use cases. finally, for smaller-sized llama-2 models we recommend simple instructions, as they often fail to follow longer or more complicated instructions.
| 34,554
|
4
| 2,025
|
Beyond Paraphrasing: Analyzing Summarization Abstractiveness and Reasoning
|
While there have been many studies analyzing the ability of LLMs to solve problems through reasoning, their application of reasoning in summarization remains largely unexamined. This study explores whether reasoning is essential to summarization by investigating three questions: (1) Do humans frequently use reasoning to generate new summary content? (2) Do summarization models exhibit the same reasoning patterns as humans? (3) Should summarization models integrate more complex reasoning abilities? Our findings reveal that while human summaries often contain reasoning-based information, system-generated summaries rarely contain this same information. This suggests that models struggle to effectively apply reasoning, even when it could improve summary quality. We advocate for the development of models that incorporate deeper reasoning and abstractiveness, and we release our annotated data to support future research.
|
https://aclanthology.org/2025.newsum-main.4
|
## introduction in recent decades, the amount of textual information available has grown exponentially, creating a pressing need for automatic systems that can process this information and derive meaningful conclusions from it. recent advances in large language models (llms) have shown remarkable progress in handling tasks that appear to require reasoning, namely deriving conclusions not explicitly stated in the text. for instance, llms have demonstrated strong performance in question answering tasks that involve background knowledge and inference @xcite . yet, despite these advances, the role of reasoning in generic summarization remains largely underexplored. a key question arises: can and should automatic summaries incorporate new conclusions that go beyond the information explicitly present in the source? traditionally, research in automatic summarization has focused on information selection and paraphrasing @xcite @xcite . one widely used quality measure for human-like summaries has been abstractiveness, the degree to which a summary uses its "own words" rather than copying source text. with the emergence of llms, summarization systems have achieved substantial gains not only in content selection but also in producing highly fluent and abstractive outputs comparable to human-written summaries @xcite . these advances invite a deeper investigation into the next frontier: can automatic summaries perform reasoning, deriving conclusions like humans do? in principle, the ability to reason during summarization could enhance content focus and informativeness, enabling the generation of summaries that emphasize the most salient insights rather than merely restating information. to investigate this, we outline several research questions: (1) do humans rely on reasoning to create summary content, and if so, how often? (2) do models employ the same reasoning as humans, or do they bypass it? (3) should we aim to incorporate reasoning abilities in summarization modeling? to address these questions, we began with a manual annotation of human-generated summaries. we identified three common operations that humans perform when rewriting selected text for summaries (defined clearly in section 3): paraphrasing, generalization, and drawing conclusions. the latter two operations, generalization and conclusion, change the semantic meaning from the source to the summary and are considered to require reasoning. we manually classified matching text spans between summaries and source texts according to these levels of abstractiveness and found that approximately 25% of human summary spans involve such reasoning. however, our manual evaluation of system-generated summaries revealed a different pattern. despite high overall evaluation scores with respect to reference summaries, these systems predominantly matched the reference with paraphrased text spans. crucially, reference spans that require reasoning to extract important information were underrepresented in system summaries. in other words, while these models seem to display reasoning abilities in other tasks, they tend to avoid using them correctly in summarization, where output is often deemed acceptable without it, despite the fact that this omission can reduce the quality of the summary. this analysis highlights the importance of reasoning in summarization and calls for the development of new models that better integrate reasoning and different levels of abstraction. to facilitate further research, we are releasing the manual annotations and data.
## related work ## abstractiveness levels there are a few ways in which source information can be utilized in the process of summarization. the simplest approach is to copy the information directly and reword or restructure it to fit the rest of the summary. generalizing certain parts of the source material can help compress information even further by reducing the amount of detail and the level of specificity. sometimes, it is necessary to add entirely new information drawn from the source through reasonable conclusions, to avoid relying on reader inference. we use these observations regarding the various uses of source information to define abstractiveness levels. first, we define a span-level matching between the information in the summary and its corresponding evidence from the source. having these matching pairs allows us to analyze the level of abstractiveness performed in the summary. following @xcite , the spans are standalone facts that are usually formed into a proposition, where the source span entails the summary span. we also required tight matching, where any source token that adds information the summary is not based on is omitted. given these pairs, the abstractiveness levels are defined as follows: paraphrase. bi-directional entailment between the summary span and the document span; that is, both sides share the same information. generalization. the summary and document spans are event-coreferred foot_1 . the summary span does not explicitly mention specific details but instead uses broader terms that encompass those details. as a result, while the source span entails the summary span, the summary span does not fully entail the source span. conclusion. the summary span adds new information that is not mentioned explicitly in the source but is derived from it. accordingly, while the source span entails the summary span, the summary does not entail the source in full, and they are not event-coreferred. ## annotation process in order to understand how often different levels of abstractiveness appear in human-written summaries, we annotate reference summaries from the news and review domains. this annotation was performed manually by an expert annotator. ## system-generated summaries we observed a substantial presence of generalization and conclusion actions in human-written reference summaries, raising the question of whether automatic summarization systems are capable of generating similar types of information. however, due to the lack of available alignment data between system-generated summaries and their corresponding source documents, data required for our annotation process, we were unable to apply the same fine-grained annotation procedure to system summaries as we did for human references. instead, we analyzed how well system-generated summaries align with reference summaries across different abstractiveness levels. our findings reveal that system outputs tend to match paraphrase-based reference spans far more frequently than generalization- or conclusion-based spans. in other words, the high similarity scores that system summaries achieve relative to reference summaries primarily stem from their ability to produce effective paraphrases, rather than from generating novel conclusions or inferred content akin to those written by humans. to examine this phenomenon, we conducted an abstractiveness-aware manual evaluation inspired by the pyramid method @xcite . we selected 10 topics from multinews, 10 from fewsum, and 6 from duc 2004.
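the three level definitions above reduce to a small decision rule over entailment and event-coreference judgments; the sketch below assumes such predicates are available (e.g., backed by an off-the-shelf nli model and an event-coreference model), which the paper does not prescribe.

```python
def classify_level(source_span: str, summary_span: str,
                   entails, event_coreferred) -> str:
    """Apply the decision rules above. `entails(premise, hypothesis)` and
    `event_coreferred(a, b)` are assumed boolean predicates; neither
    implementation is specified by the paper."""
    assert entails(source_span, summary_span), "spans must be aligned"
    if entails(summary_span, source_span):
        return "paraphrase"        # bi-directional entailment
    if event_coreferred(source_span, summary_span):
        return "generalization"    # same event, details abstracted away
    return "conclusion"            # new information derived from the source

# Toy predicates for demonstration only.
same = lambda a, b: a == b
print(classify_level("the dog barked", "the dog barked", same, same))
# paraphrase
```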
building on these matches, we computed a recall score for each abstractiveness level, reflecting the proportion of reference spans of a given type that were successfully reproduced by the system. these findings suggest that current summarization models excel at reproducing paraphrased content but struggle to incorporate reasoning-based or conclusion-oriented information that matches the reference summary, particularly when such reasoning is not explicitly required by the input. it is important to note that, in a few cases, models did select the same source information as the reference summary did, but because they employed a different level of abstraction, their output was too distant from the reference to be considered a match. ## conclusion in this work, we analyzed different abstraction levels in summarization, and found that while humans use reasoning to derive information that improves the focus and clarity of summaries, models are still lagging behind. we release our data and annotation to facilitate research in this direction and the development of summarization models that incorporate better reasoning abilities.
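the per-level recall described here can be computed as below, given reference spans labeled with their abstractiveness level and the subset matched by the system; the data structures are assumptions for illustration.

```python
from collections import Counter
from typing import Dict, List, Tuple

def recall_by_level(reference_spans: List[Tuple[str, str]],
                    matched_spans: List[Tuple[str, str]]) -> Dict[str, float]:
    """For each abstractiveness level, compute the fraction of reference
    spans of that level reproduced by the system. Each span is a
    (level, text) pair; matched_spans is a subset of reference_spans."""
    totals = Counter(level for level, _ in reference_spans)
    hits = Counter(level for level, _ in matched_spans)
    return {level: hits[level] / totals[level] for level in totals}

refs = [("paraphrase", "s1"), ("paraphrase", "s2"),
        ("generalization", "s3"), ("conclusion", "s4")]
matched = [("paraphrase", "s1"), ("paraphrase", "s2"), ("conclusion", "s4")]
print(recall_by_level(refs, matched))
# {'paraphrase': 1.0, 'generalization': 0.0, 'conclusion': 1.0}
```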
| 40,018
|
702
| 2,023
|
Multijugate Dual Learning for Low-Resource Task-Oriented Dialogue System
|
Dialogue data in real scenarios tend to be sparsely available, leaving data-starved end-to-end dialogue systems inadequately trained. We discover that data utilization efficiency in low-resource scenarios can be enhanced by mining the alignment information between the uncertain utterance and the deterministic dialogue state. Therefore, we innovatively implement dual learning in task-oriented dialogues to exploit the correlation of heterogeneous data. In addition, the one-to-one duality is converted into a multijugate duality to reduce the influence of spurious correlations in dual training and improve generalization. Without introducing additional parameters, our method can be implemented in arbitrary networks. Extensive empirical analyses demonstrate that our proposed method improves the effectiveness of end-to-end task-oriented dialogue systems under multiple benchmarks and obtains state-of-the-art results in low-resource scenarios.
|
https://aclanthology.org/2023.findings-acl.702
|
## introduction with the emergence of dialogue data @xcite and the evolution of pre-trained language models @xcite , end-to-end task-oriented dialogue (tod) systems @xcite @xcite gradually replaced the previous modular cascading dialogue systems @xcite . the end-to-end tod system adopts a uniform training objective, preventing the error propagation problem of pipelined dialogue systems @xcite . nonetheless, the end-to-end paradigm requires more training data to perform well @xcite . meanwhile, tod data is enormously expensive to annotate @xcite , as it simultaneously requires annotations for dialogue state tracking, dialogue action prediction, and response generation. it is also expensive to annotate large amounts of complicated dialogue data. ## conclusion we propose a novel multijugate dual learning for task-oriented dialogues in low-resource scenarios. exploiting the duality between deterministic dialogue states and uncertain utterances enables the entity alignment information in heterogeneous data to be fully utilized. meanwhile, paraphrase-enhanced multijugate dual learning alleviates the spurious correlation of shallow pattern statistics. experiments on several tod datasets show that the proposed method achieves state-of-the-art results in both end-to-end response generation and dialogue state tracking in low-resource scenarios.
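a toy sketch of the dual objective: the same model is trained in both directions between an utterance and its dialogue state, and paraphrased utterances turn the one-to-one duality into a multijugate one. the backbone, task prefixes, and state format are assumptions, not the authors' implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # assumed backbone
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Toy paraphrase set for one utterance, plus its deterministic dialogue state.
utterances = ["i need a cheap hotel in the north",
              "find me an inexpensive hotel up north"]
state = "hotel pricerange = cheap ; area = north"

loss = torch.tensor(0.0)
for utt in utterances:
    # Primal direction: utterance -> dialogue state.
    x = tokenizer("track state: " + utt, return_tensors="pt")
    y = tokenizer(state, return_tensors="pt").input_ids
    loss = loss + model(**x, labels=y).loss
    # Dual direction: dialogue state -> utterance.
    x = tokenizer("generate utterance: " + state, return_tensors="pt")
    y = tokenizer(utt, return_tensors="pt").input_ids
    loss = loss + model(**x, labels=y).loss

loss.backward()  # one multijugate dual-learning step (illustrative only)
```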
| 23,891
|