Datasets:
| id (string, 1–4 chars) | year (int64, 2.01k–2.03k) | title (string, 12–519 chars) | abstract (string, 7–12.7k chars) | pdf_url (string, 36–61 chars) | content (string, 7–46.5k chars) | __index_level_0__ (int64, 0–41.4k) |
|---|---|---|---|---|---|---|
41
| 2,021
|
Automatic Detection and Classification of Mental Illnesses from General Social Media Texts
|
Mental health is getting more and more attention recently, depression being a very common illness nowadays, but also other disorders like anxiety, obsessive-compulsive disorders, feeding disorders, autism, or attention-deficit/hyperactivity disorders. The huge amount of data from social media and the recent advances of deep learning models provide valuable means for automatically detecting mental disorders from plain text. In this article, we experiment with state-of-the-art methods on the SMHD mental health conditions dataset from Reddit (Cohan et al., 2018). Our contribution is threefold: using a dataset consisting of more illnesses than most studies, focusing on general text rather than mental health support groups, and classifying by posts rather than individuals or groups. For the automatic classification of the diseases, we employ three deep learning models: BERT, RoBERTa and XLNet. We double the baseline established by Cohan et al. (2018) on just a sample of their dataset, and we improve the results obtained by Jiang et al. (2020) on post-level classification. The accuracy obtained by the eating disorder classifier is the highest, due to the prevalent presence of discussions related to calories, diets, recipes, etc., whereas depression had the lowest F1 score, probably because depression is more difficult to identify in linguistic acts.
|
https://aclanthology.org/2021.ranlp-1.41
|
## introduction an analysis performed by @xcite estimates that approximately 10% of the world's population is living with a mental illness. the global burden of disease @xcite states that depression is a very common illness and there are more than 264 million people affected by it. at its worst, the illness can lead to suicide, and it is the second highest cause of death for people aged 15 to 29. between 76% and 85% of the potentially diagnosed people do not benefit from any treatment for their illness due to living in impoverished areas and not having access to mental care. it is difficult to discuss digital solutions in the context of isolated areas with low data availability and limited access to professional help. social stigma is another obstacle, present regardless of age, gender or race, which makes early intervention difficult. persons facing difficulties often avoid discussing their issues for various reasons. however, researchers working with machine learning algorithms can draw plenty of expertise from the unstructured data roaming the world wide web. the advent of social media platforms brings an influx of large quantities of various types of unstructured textual data. the continuous advancements made in the field of machine learning make it possible to analyse such volumes of data efficiently. experiments in this interdisciplinary domain have yielded useful input for mental health practitioners, sociolinguists, computer scientists and other researchers in the field. @xcite perform one of the most influential quantitative studies, which reveals the way patterns of parts of speech, as labelled by the liwc founders, correlate with types of personalities and types of mental illnesses. the classes and the psychological dimensions mapped together served as a start for many projects, including the prediction of dark triad personality traits by @xcite and the risk of self-harm by @xcite . research in the area is conducted mainly on texts from mental health support groups, on just a few illnesses and some groups of individuals. our main research questions for this article are whether, and to what extent, it is possible to detect and classify mental illnesses from general texts. ## related work nlp researchers have shown an increased interest in the area at the intersection of machine learning and psychiatry in recent years. social media is an indispensable resource for research. yet, the particularities of the online setting raise a range of challenges. as no standards have been established for using social data, practitioners from many fields have pointed to the dangers of using such data without a clear framework. @xcite address the issue of "biases, methodological pitfalls, and ethical boundaries", discussing the problems often left unaddressed by researchers working with this kind of data. @xcite analyse not only the ethical dilemma revolving around this type of study, but also its feasibility and the integration of the social component into the compound of a socio-technical system. when it comes to detecting mental illnesses from social media data, we have many examples at hand, which often look at data coming from reddit communities that serve as support groups for people struggling with one illness or another. most articles look at a single illness in comparison to a control group, e.g. schizophrenia (@xcite; @xcite). our goal is to detect a wide range of mental illnesses using deep learning techniques, which seem like the best candidates for this task.
@xcite employ deep learning methods similar to ours, but we concentrate on obtaining better results by training the models on individual posts rather than posts grouped by users, which might not work as expected. for example, if a user has produced few contributions or has a fresh account, they would probably have few posts available. on the other hand, some users are of the observing type and rarely contribute to discussions. one aspect worth mentioning is the nature of the data used in many classification tasks. texts containing explicit content and linguistic cues pertaining to the properties of a certain illness are often used. jiang et al. (2020) and thorstad et al. (2019) perform automatic classification of texts by their authors' mental illnesses, with good results, on texts that specifically discussed these conditions on dedicated forums. nevertheless, these classifications are of little help in finding at-risk populations when looking at general text, which does not include mental illness topics. among the few researchers who report using datasets containing general discussions coming from people who self-reported their diagnosis in one of the support communities are @xcite and @xcite . the results are favorable and leave room for improvement. we believe it is important to experiment further for a better understanding of the ways in which mental illnesses can be detected in earlier stages and how even general discussions contain traces of how mental illnesses manifest themselves in language. in addition, this is a direction worthy of exploration because the persons asking for guidance represent a very small and idiosyncratic part of the population battling mental illnesses, thus early mental illness detection from general text might be of real help. ## data we used the smhd dataset introduced by @xcite . this dataset contains non-explicit texts: a large-scale resource for exploring online language usage for multiple mental health conditions. they test some classification algorithms, but no deep learning models. they also employed liwc categories for classification. these categories include standard linguistic dimensions (pronouns, articles, present tense, future tense); psychological processes (positive emotions, negative emotions, anger, anxiety); and personal concerns (work, achievements). the smhd dataset contains texts extracted from reddit's general discussion communities, grouped by users and illnesses. individuals diagnosed with a mental illness were detected by searching for self-reports in the dedicated support groups. the dataset features multiple illnesses, which are present in the psychiatric taxonomy dsm-5 (american psychiatric association, 2013). as stated by the authors of the dataset, "six conditions are top-level dsm-5 disorders: schizophrenia spectrum disorders (schizophrenia), bipolar disorders (bipolar), depressive disorders (depression), anxiety disorders (anxiety), obsessive-compulsive disorders (ocd) and feeding and eating disorders (eating). the three other conditions are one rank lower: post-traumatic stress disorder (ptsd) is classified under trauma-and stress-related disorders, and autism spectrum disorders (autism) and attention-deficit/hyperactivity disorder (adhd) under neurodevelopmental disorders". the opposing group of users is the control one, whose members are selected based on having no posts in the support groups and at least 50 posts on reddit. the complete dataset contains 20,406 diagnosed users and 335,952 control users.
the texts do not contain any terms related to mental health, neither for the diagnosed groups nor for the control ones. our experiments use just a selection of each group of illnesses to speed up the computation process. the models are not user-centered and will learn from each individual post. selecting data based on a fixed number of users was not suitable for our tasks due to the imbalance at the user level when it comes to the number of comments and posts available. therefore, we randomly selected 50,000 posts for each group of users. the numbers shown in tables 1 and 2 might reflect certain particularities about an illness and about how the diagnosed users communicate in the online environment. this variation depends also on how the users engage, whether they create posts or comment on somebody else's, and on the format adopted by each community: if pictures are posted often, the comments are on the shorter side; if storytelling is the center of the community, people engage with the purpose of telling their opinion or a similar story, hence the lengthier texts. the authors of the dataset conducted a linguistic analysis based on liwc categories. several differences were observed between the diagnosed groups and the control users. @xcite and @xcite underline that pronounced usage of first-person singular with most conditions is consistent with the theory that illness drives one towards self-focus. an interesting finding underlining the bias of the dataset towards the predominantly male demographic is the female references that point to discussions about relationships and love-related issues within the bipolar, depression and anxiety groups. reddit does not impose a very strict post limit, hence we have diverse lengths. however, the deep learning models we used impose a limit for training. the smhd dataset has already undergone preprocessing, but we needed additional cleaning. we remove any posts shorter than 4 tokens. very short texts are often noise, like thankful comments or very short approval phrases, which would confuse the model and do not carry significant meaning. the next section will look at another data-related problem, namely the ethics and biases of working with social media data. ## ethics and biases reddit represents a social media application whose users are part of communities and engage in discussions. each social media network represents a cluster of people who are defined by certain characteristics. the hootsuite yearly report @xcite shows that more than 60% of reddit users are males aged 18 to 34. accordingly, studies show that there is a tendency in males to display less emotionally charged input due to the social stigma in the offline world. @xcite find that men often avoid seeking professional help or talking about their problems. concealing their emotional state in real life is a strategy to avoid prejudice and is not something specific to the female population. ireland and mehl's (2014) research conducted in the psychology area shows that manifestations of negative emotions are muted across many settings and situations. alternatively, @xcite and @xcite demonstrate that people tend to discuss personal things in anonymous spaces and share unpopular opinions. in this situation, reddit represents a good source of data for a population that is underrepresented in clinical studies. @xcite prove that some platforms might be more attractive for a demographic than others.
behavioral biases imply that users of a platform display a particular behavior, observable in how they interact with each other or what type of content they create. one such bias is the way in which users seek and share information. @xcite discovered that users diagnosed with one illness behave differently in this respect from the others. nevertheless, we cannot claim that this is representative for all the individuals diagnosed with a mental illness. there are certain biases plaguing the studies based on social media, which should be at least mentioned for awareness. here, we consider the population bias a positive fact, which enables studies targeting young adult and adult males. however, this bias does not affect our dataset much, because the data collected comes from neutral communities where a variety of topics is discussed. ## discriminative features we run a naïve bayes classifier in order to find the most informative features for each category in our dataset. we used the classifier implemented in the scikit-learn library by @xcite to get the top n most informative words by score. our experiment includes the 9 illnesses as labels, plus the control group. the top n words can be seen in the corresponding table. ## classification methods identifying significant differences between our groups was the main drive for training classifiers. we trained 3 different models based on the transformers architecture to see how each performs binary classification between a diagnosed group and a control one. the first model we used is bertforsequenceclassification by @xcite . in order to set up this model, we experimented with different hyperparameters, loss functions, batch sizes and numbers of epochs. the authors of bert recommend a set of default training specifications, but our machine needed smaller batch sizes to be able to train the model, so we used a batch size of 3. we established a learning rate of 1e-5 for the adamw optimizer implemented by @xcite . we trained the model for 3 epochs only, because we noticed overfitting starting with the 4th epoch. the second method we used is xlnet, which is another method for pre-training language representations, introduced by @xcite . xlnet was meant to overcome the limitations imposed by bert with its autoregressive model, and does so by outperforming it on 20 tasks as shown by @xcite . for this method, we have a differently formatted input and there is no limit for the length of the input texts. however, the input arrays need to be of the same size. this is addressed by padding the inputs that do not meet the size of the longest sequence. padding means simply adding 0s until the length is met. for this classifier we had to limit the length of sequences to 126 due to computational resources. the optimum batch size was 8. the optimizer we used was adamw with the same hyperparameters as for bert. we trained this model for 4 epochs. with a training set of approximately 100,000 texts, we get 50,000 training steps. the last model we used, roberta implemented by @xcite , is facebook ai's training method and it promises to improve on bert. the researchers involved in implementing roberta show that bert was undertrained and there is still a long way to go in terms of design choices and the way in which the improvements are reported. we did not use our full dataset due to its large size and the consequently long training times. finetuning roberta implies loading the weights of the pretrained model, in our case, the robertaforsequenceclassification model.
we use a sequence length of 256 and a batch size of 8. the optimizer used here is adamw with a learning rate of 2e-5. ## results we obtained the results using 50,000 posts for each group. the compound of 100,000 posts for each binary classifier was split into 80,000 for training and 20,000 for testing. we trained our models with different hyperparameters until we reached the optimum ones. we used a naïve bayes classifier to discover the most important features for each group of users. our results add to the group of articles showing good prospects for this field. an encouraging finding is the sufficiency of focusing on general text rather than mental health support groups, and of classification by posts rather than individuals or groups. another takeaway is the sufficiency of post-level classification and an avenue to improve this approach in future work by paying attention to contextual cues such as time, events, entailment of posts or any other possible triggers that might help the earlier detection of a mental illness. further experimentation with different setups and more diverse data is also required. this would benefit our research and increase the possibility of future integration of automated tools, which could assist clinicians in the earlier detection of mental health issues.
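A minimal sketch of the binary post-level fine-tuning described above, using the Hugging Face transformers API. The hyperparameters (batch size 3, learning rate 1e-5, 3 epochs, AdamW) follow the paper; the example posts and labels are placeholders, not SMHD data.

```python
# Minimal sketch of the BERT-based binary post classifier described above.
# Hyperparameters follow the paper (batch size 3, lr 1e-5, 3 epochs);
# `posts` and `labels` are hypothetical placeholders for the sampled SMHD posts.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

posts = ["example diagnosed-group post", "example control-group post"]  # placeholder data
labels = torch.tensor([1, 0])

enc = tokenizer(posts, padding=True, truncation=True, max_length=256, return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels),
                    batch_size=3, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # AdamW optimizer, as in the paper
model.train()
for epoch in range(3):  # stopped at 3 epochs to avoid the overfitting noted above
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optimizer.step()
```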
| 11,477
|
534
| 2,023
|
Beyond Candidates: Adaptive Dialogue Agent Utilizing Persona and Knowledge
|
To build ultimate dialogue agents, previous studies suggest models that ground both persona and knowledge. However, applying such dialogue systems directly to everyday conversation is still limited, because the systems require complete sentence-formed persona and knowledge candidate sets from the given dataset. In contrast to the dialogue setting in the dataset, humans utilize semantic concepts in their minds rather than a set of pre-defined candidate sentences. Following this manner of human dialogue, we suggest an adaptive dialogue system that is applicable to situations where complete sentence-formed candidates are not given. Our model generates consistent and relevant persona descriptions and identifies relevant knowledge for engaging and knowledgeable responses, even with fragmentary information. We show that our model outperforms previous baselines that utilize persona and knowledge candidate sentences, and we conduct a human evaluation of the machine-generated responses. In addition, we conduct ablation studies to demonstrate the effectiveness of each component of our model. Furthermore, we apply our model to other dialogue datasets that only ground knowledge or persona to showcase its adaptability. Our code is available at https://github.com/dlawjddn803/BeCand.
|
https://aclanthology.org/2023.findings-emnlp.534
|
## introduction in usual conversations, humans utilize the semantic concepts in their minds concerning the dialogue topic and the preferences of the interlocutor. with these semantic-level concepts, humans communicate with each other by aggregating the concepts to convey knowledgeable and empathetic responses @xcite . this implies that people converse by adaptively reorganizing and retrieving additional information with their semantic concepts, encompassing knowledge and persona, not by relying on pre-defined sources @xcite @xcite . it seems that @xcite and @xcite adhere to this human-like approach to conversation by referring to persona and knowledge. however, these models neglect humans' semantic concept reconstruction and retrieval capability by requiring pre-defined candidate sets to ground, as in figure 1 @xcite . as knowledge and persona candidates for the agents are not given in usual conversation, the dependency on the candidates eventually limits their applicability to candidate-free situations, as depicted in figure 1 (a). to build dialogue agents adaptive to the candidate-agnostic situation, two branches of studies have been conducted. in knowledge-grounded conversation, knowledgeable agents employ non-parametric memory-based retrieval to overcome candidate-agnostic situations @xcite . similarly, persona-aware dialogue agents consider out-of-persona situations by extending persona sentences from a few persona concepts @xcite @xcite . even though both streams of research focus on the candidate-agnostic conversational situation, they only leverage a single source for grounding, rather than utilizing both persona and knowledge simultaneously. in this paper, we propose a dialogue agent utilizing persona and knowledge that is adaptive to the candidate-free situation. to this end, our method consists of 1) a knowledge retriever, 2) a concept-based persona generator, 3) a dialogue-persona aligner, and 4) a response generator. when the knowledge concept is given, the knowledge retriever finds the relevant knowledge from the knowledge base. our concept-based persona generator then produces complete sentences from fragmentary persona concepts. the generated persona descriptions are then validated by the persona aligner regarding both consistency and relevancy. the validated persona descriptions are used as the input of the response generator. experimental results show that our candidate-free model outperforms other baselines. also, we show through ablation studies that the concept-based persona generator and persona aligner boost the performance of the dialogue agents. we conduct a human evaluation of our model's responses, and the result implies that our method is effective in building a persona-knowledge dialogue agent without candidate sentences. moreover, we demonstrate that our method is capable of utilizing other dialogue datasets grounding a single source, such as personachat @xcite or wizard-of-wikipedia (wow) @xcite , showing the adaptiveness of our proposed model. in qualitative results, it is shown that the generated responses are comparable to the ground truth answers without the given candidates. ## method we propose adaptive dialogue agents that generate responses without persona and knowledge candidates. to this end, we assume that only the knowledge and persona concepts are given to the agent for knowledgeable and engaging responses.
first, 1) the knowledge retriever retrieves the relevant paragraphs given the knowledge concept, and 2) the concept-based persona generator produces persona descriptions from the given short persona concepts. then, 3) the persona aligner decides whether the generated persona descriptions are relevant to the dialogue history and whether the sentences are consistent with it. afterward, 4) the response generator provides knowledgeable and engaging responses based on the predicted knowledge paragraphs and persona descriptions. ## conclusions in this paper, we introduced an adaptive dialogue agent utilizing persona and knowledge without the candidates given by the dataset. due to the absence of knowledge candidates, the knowledge retriever retrieves the relevant paragraphs from the knowledge base using the knowledge concept. also, the concept-based persona generator outputs persona descriptions from the fragmentary persona concepts via a retrieve-and-generate architecture. the generated persona descriptions are then validated through a persona aligner regarding relevancy and consistency. from experiments, we showed that our method is effective even though only the persona concept and knowledge concept are given with the dialogue. we also presented ablation studies on each component of our model. moreover, we conducted a human evaluation to show the improved quality of our models' responses, which is also reflected in the qualitative results. to show its applicability and adaptiveness, we reported the experimental results of our method on the focus, wow, and personachat datasets.
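To recap the four-stage pipeline in schematic form, here is a minimal sketch; all class and function names, and the trivial matching logic, are hypothetical stand-ins, since the source only specifies the components and the order in which they interact.

```python
# Hypothetical skeleton of the four-stage pipeline described above.
# Component names, signatures, and matching logic are illustrative,
# not the authors' actual implementation.
from typing import List

def knowledge_retriever(knowledge_concept: str, knowledge_base: List[str]) -> List[str]:
    """1) Retrieve paragraphs relevant to the knowledge concept (placeholder: substring match)."""
    return [p for p in knowledge_base if knowledge_concept.lower() in p.lower()]

def persona_generator(persona_concepts: List[str]) -> List[str]:
    """2) Expand fragmentary persona concepts into full descriptions (placeholder templates)."""
    return [f"I like {concept}." for concept in persona_concepts]

def persona_aligner(descriptions: List[str], dialogue_history: List[str]) -> List[str]:
    """3) Keep only descriptions judged relevant to and consistent with the history."""
    history = " ".join(dialogue_history).lower()
    return [d for d in descriptions if any(tok in history for tok in d.lower().split())]

def response_generator(history: List[str], knowledge: List[str], personas: List[str]) -> str:
    """4) Generate the response conditioned on history, knowledge, and validated personas."""
    return f"(response grounded in {len(knowledge)} paragraphs and {len(personas)} persona sentences)"

# Usage: the stages run in sequence for each turn.
kb = ["The Louvre is a museum in Paris.", "Mount Everest is the highest mountain."]
history = ["Have you ever visited a museum in Paris?"]
knowledge = knowledge_retriever("museum", kb)
personas = persona_aligner(persona_generator(["museums", "hiking"]), history)
print(response_generator(history, knowledge, personas))
```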
| 24,623
|
218
| 2,022
|
You can’t pick your neighbors, or can you? When and How to Rely on Retrieval in the kNN-LM
|
Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs. One such approach, the kNN-LM, interpolates any existing LM’s predictions with the output of a k-nearest neighbors model and requires no additional training. In this paper, we explore the importance of lexical and semantic matching in the context of items retrieved by the kNN-LM. We find two trends: (1) the presence of large overlapping n-grams between the datastore and evaluation set plays an important role in strong performance, even when the datastore is derived from the training data; and (2) the kNN-LM is most beneficial when retrieved items have high semantic similarity with the query. Based on our analysis, we define a new formulation of the kNN-LM that uses retrieval quality to assign the interpolation coefficient. We empirically measure the effectiveness of our approach on two English language modeling datasets, Wikitext-103 and PG-19. Our re-formulation of the kNN-LM is beneficial in both cases, and leads to nearly 4% improvement in perplexity on the Wikitext-103 test set.
|
https://aclanthology.org/2022.findings-emnlp.218
|
## introduction recently, a new class of language models (lms) that are augmented with retrieval capabilities has led to substantial improvements over standard neural lms @xcite @xcite @xcite . furthermore, lms with retrieval warrant investigation as they provide benefits for many tasks @xcite . these approaches generally involve a backbone neural lm that interacts with a retrieval component of varying complexity to find relevant documents. in this work, we analyze and improve a specific and simple type of retrieval-enhanced language model, the knn-lm originally proposed by @xcite . the knn-lm is non-parametric: it works by retrieving instances from an external datastore at each decoding timestep, and it improves language model performance without requiring additional training. in essence, the knn-lm interpolates a base lm's predicted probability distribution of the next word with a distribution formed by retrieving vectors similar to the current hidden state. the knn-lm includes two tunable hyperparameters: the number of items to retrieve (k) and an interpolation coefficient (λ). the method's effectiveness depends crucially on the source and size of the retrieval datastore: it is most effective when using a very large datastore with orders of magnitude more tokens than seen in the training corpus, but @xcite also observe improvements with smaller datastores. modern neural models have massive capacity to memorize their training data @xcite . nonetheless, simply using an lm's training corpus as the source for the datastore works well for the knn-lm, as test perplexity on the wikitext-103 dataset decreases substantially from 18.65 to 16.12. however, it remains unclear how and why the knn-lm achieves these improvements. which types of tokens and contexts does it improve most on? in an effort to answer this question, and to motivate new, more effective methods to enhance lms with retrieval, we analyze the knn-lm's behavior with respect to parts of speech, semantic similarity between context and retrievals, and lexical overlap. among other findings, our analysis reveals that the knn-lm is helpful beyond factual knowledge (i.e. proper nouns), and improves perplexity across many word types, so it would be difficult to extend the knn-lm using syntactic information alone. on the other hand, we find the performance of the knn-lm highly correlates with lexical similarity between the context and retrieved items, although this is somewhat domain-specific and does not fully explain its strong performance. semantic similarity is nearly as accurate a predictor of knn-lm performance as lexical similarity, making it a strong candidate to extend the knn-lm. based on our analysis, we devise a simple scheme to extend the knn-lm following the intuition that when retrieval quality is high (measured by semantic similarity), the model should rely more heavily on the knn-based prediction. since retrieval in the knn-lm is latent, we use semantic similarity as a proxy to measure retrieval relevance. concretely, our method is an adaptive version of the knn-lm that assigns the interpolation coefficient according to retrieval quality (see figure 1). while it introduces new hyperparameters, we show that the additional hyperparameter tuning comes at negligible cost.
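For reference, a minimal sketch of the fixed-coefficient kNN-LM interpolation just described; the toy datastore layout, distance-based weighting, and all variable names are illustrative assumptions rather than the original implementation.

```python
# Minimal sketch of standard kNN-LM interpolation: the base LM's next-word
# distribution is mixed with a distribution over the values (next-word ids)
# of the k nearest datastore entries. Datastore contents are placeholders.
import numpy as np

def knn_lm_probs(p_lm, query, datastore_keys, datastore_values, vocab_size, k=4, lam=0.25):
    # Distances from the query hidden state to every stored key.
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    nearest = np.argsort(dists)[:k]

    # Turn negative distances into weights over the retrieved next words.
    weights = np.exp(-dists[nearest])
    weights /= weights.sum()

    p_knn = np.zeros(vocab_size)
    for idx, w in zip(nearest, weights):
        p_knn[datastore_values[idx]] += w  # values are next-word token ids

    # Fixed-coefficient interpolation, as in the original kNN-LM.
    return lam * p_knn + (1.0 - lam) * p_lm
```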
importantly, our empirical results demonstrate that our newly introduced re-formulation of the knn-lm is beneficial for both encyclopedic text and book data, and leads to an improvement of nearly 4% perplexity over the vanilla knn-lm, measured on the english language modeling wikitext-103 test set. broadly, we hope our insights and methods help facilitate future development of retrieval-augmented lms. ## language modeling with knn-lm the knn-lm improves over a base language model by explicitly memorizing the lm's training data. it stores exact sentences from the training data in its datastore, which can be accessed during language model inference to produce a k-nearest neighbor next-word distribution that is interpolated with the base model's prediction. interpolation is preferred for similar reasons as approximate matrix factorization in collaborative filtering: the universe of text patterns is sparse, and lossless compression of the training data alone is not sufficient to model new patterns. in this section, we explain the specifics of the knn-lm's inner workings in order to guide our analysis. ## a new formulation for knn-lm in the previous section, we analysed when the knn-lm is most helpful. we use this information to design a new formulation of the knn-lm that can exploit this behavior. the original knn-lm uses the same interpolation coefficient (λ) for every example, which may not be desirable. as our analysis reveals, we can predict when the knn-lm is most beneficial, which naturally leads us to a new formulation with an adaptive coefficient, p(w | c) = λ(c) · p_knn(w | c) + (1 − λ(c)) · p_lm(w | c), where λ(c) is assigned according to the retrieval quality for context c. ## experiments and results to measure the importance of retrieval quality in the knn-lm, we evaluate our approach ( §4) on two english language modeling datasets. the first is the wikitext-103 corpus @xcite used by @xcite . the second is pg-19 @xcite , which we include because it consists of books and is thematically distinct from the encyclopedic documents in wikitext-103. ## discussion in previous sections we used observations of the knn-lm to motivate our new approach that adapts the interpolation coefficient to retrieval quality. here we analyze results with our new method to see how they compare with baselines and deepen our understanding of retrieval-enhanced language modeling. ## can we adapt to lexical similarity? the original knn-lm has similar performance when its results are stratified by either semantic or lexical similarity ( §3.1), but in our new formulation we adapt the coefficient only according to semantic similarity. what if we use lexical similarity instead? we explore this possible alternative and report the results for wikitext-103. in general, we find that both semantic and lexical similarity yield similar results when used to bucket queries. for the best setting, when @xmath0, the learned vectors work better, reflecting recent findings that dense vectors outperform sparse representations for various retrieval-related tasks @xcite . hence, throughout this paper we adapt the coefficient using semantic similarity and @xmath1. interestingly, for lower values of k the bag-of-words representation has an edge over semantic similarity. perhaps this suggests lexical similarity is more precise, and if retrieving many items is costly, then adapting the coefficient according to lexical similarity might be particularly helpful. ## related work we extend the knn-lm by adapting the interpolation coefficient to retrieval quality (measured by semantic similarity). adaptret @xcite models the interpolation coefficient as a function of the query.
this is convenient, since one can skip retrieval if the coefficient is below a threshold, although it requires training a separate adaptor network. crucially, their coefficient predictions are based solely on query features, and do not take into account whether retrieval is successful. our approach incorporates the quality of retrieval, and improves language modeling results. it is simple and effective, and only needs lightweight hyperparameter tuning without any additional training. retomaton @xcite provides an alternative means to bypass retrieval. they build a graph over the datastore, and at each time step they either retrieve like the original knn-lm or re-use the previously retrieved neighbors to traverse the graph. this is more efficient than adaptret, providing better results at lower cost. both adaptret and retomaton are designed with efficiency in mind. they rely on approximate distances using product quantization and perform about as well as the exact-distance version of the knn-lm. we improve upon the knn-lm by about 4% perplexity. there are many recent works that use retrieval components for language tasks besides language modeling, such as question answering @xcite @xcite , dialogue generation @xcite , conversational search @xcite , semantic parsing @xcite , data augmentation @xcite , and machine translation @xcite @xcite . there are alternatives to the knn-lm that incorporate document structure @xcite , but their experimental setup is not comparable with ours. in our baselines we only consider models matching the original knn-lm backbone, although alternative architectures show promise for retrieval-enhanced language modeling @xcite @xcite . scaling the datastore @xcite or the model size @xcite has been shown to effectively improve language modeling. alternatively, text generation may be improved through more advanced ranking @xcite or decoding @xcite algorithms. researchers have explored fundamental extensions to knn that are agnostic to language data. @xcite spatially partition the datastore, adapting the value of k for each region. keeping k fixed, @xcite instead adapt the shape of the neighborhood based on local information. ## conclusion in this paper, we have proposed a novel and effective re-formulation of the knn-lm. our approach adapts the interpolation coefficient to the quality of retrieved documents, measured by semantic similarity. we motivate our approach through extensive analysis, which also provides insights on the types of tokens and contexts the knn-lm is most helpful for. importantly, we empirically demonstrate the effectiveness of our approach through experiments on two domains, wikitext-103 (encyclopedic text) and pg-19 (book data), and outperform the original knn-lm by 4% test perplexity on the wikitext-103 language modeling corpus.
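A sketch of the adaptive variant motivated above: instead of a fixed λ, the interpolation coefficient is looked up from the retrieval quality of the current query. The bucketing thresholds and per-bucket coefficients below are invented for illustration; the paper tunes these on validation data.

```python
# Sketch of the adaptive coefficient: bucket queries by the semantic similarity
# of their retrieved neighbors and use a per-bucket lambda. Thresholds and
# coefficients are illustrative placeholders, not the tuned values.
import numpy as np

SIM_THRESHOLDS = [0.3, 0.6, 0.8]          # bucket boundaries (hypothetical)
BUCKET_LAMBDAS = [0.05, 0.2, 0.4, 0.6]    # one lambda per bucket (hypothetical)

def adaptive_lambda(query_vec, neighbor_vecs):
    # Retrieval-quality proxy: mean cosine similarity between the query's
    # hidden state and its retrieved neighbors.
    q = query_vec / np.linalg.norm(query_vec)
    n = neighbor_vecs / np.linalg.norm(neighbor_vecs, axis=1, keepdims=True)
    sim = float((n @ q).mean())
    bucket = int(np.searchsorted(SIM_THRESHOLDS, sim))
    return BUCKET_LAMBDAS[bucket]

def adaptive_knn_lm_probs(p_lm, p_knn, query_vec, neighbor_vecs):
    lam = adaptive_lambda(query_vec, neighbor_vecs)
    return lam * p_knn + (1.0 - lam) * p_lm
```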
| 16,675
|
161
| 2,020
|
HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training
|
We present HERO, a novel framework for large-scale video+language omni-representation learning. HERO encodes multimodal inputs in a hierarchical structure, where local context of a video frame is captured by a Cross-modal Transformer via multimodal fusion, and global video context is captured by a Temporal Transformer. In addition to standard Masked Language Modeling (MLM) and Masked Frame Modeling (MFM) objectives, we design two new pre-training tasks: (i) Video-Subtitle Matching (VSM), where the model predicts both global and local temporal alignment; and (ii) Frame Order Modeling (FOM), where the model predicts the right order of shuffled video frames. HERO is jointly trained on HowTo100M and large-scale TV datasets to gain deep understanding of complex social dynamics with multi-character interactions. Comprehensive experiments demonstrate that HERO achieves new state of the art on multiple benchmarks over Text-based Video/Video-moment Retrieval, Video Question Answering (QA), Video-and-language Inference and Video Captioning tasks across different domains. We also introduce two new challenging benchmarks How2QA and How2R for Video QA and Retrieval, collected from diverse video content over multimodalities.
|
https://aclanthology.org/2020.emnlp-main.161
|
## introduction inspired by bert @xcite , large-scale multimodal pre-training has prevailed in the realm of vision-and-language research @xcite @xcite . there are many early players in the area, including vilbert @xcite , lxmert @xcite , uniter @xcite , vl-bert @xcite and unicoder-vl @xcite . however, most large-scale pre-trained models are tailored for static images, not dynamic videos. videobert @xcite is the first to apply bert to learn joint embeddings for video-text pairs. but since only discrete tokens are used to represent video frames, rich video frame features are not fully utilized. to remedy this, cbt @xcite proposes to use a contrastive loss, but mainly for video representation learning alone, with text input only considered as side information. univilm @xcite takes a step further and considers both understanding and generation tasks. several constraints inherently limit the success of existing models. (i) most model designs are direct adaptations of bert, taking a simple concatenation of subtitle sentences and visual frames as input, while losing the temporal alignment between video and text modalities. (ii) pre-training tasks are directly borrowed from image+text pre-training methods, without exploiting the sequential nature of videos. (iii) compared to the diverse image domains investigated in existing work, video datasets used in current models are restricted to cooking or narrated instructional videos @xcite , excluding video sources that contain dynamic scenes and complex social interactions. to tackle these challenges, we present a new video-and-language large-scale pre-training framework: hero (hierarchical encoder for omni-representation learning). as illustrated in figure 1, hero takes as input a sequence of video clip frames and their accompanying subtitle sentences. instead of adopting a flat bert-like encoder, hero encodes multimodal inputs in a hierarchical fashion, with (i) a cross-modal transformer to fuse a subtitle sentence and its accompanying local video frames, followed by (ii) a temporal transformer to obtain a sequentially contextualized embedding for each video frame, using all the surrounding frames as global context. the proposed hierarchical model first absorbs visual and textual local context at the frame level, which is then transferred to a global video-level temporal context. experiments show that this novel model design achieves better performance than a flat bert-like architecture. four pre-training tasks are designed for hero: (i) masked language modeling (mlm); (ii) masked frame modeling (mfm); (iii) video-subtitle matching (vsm); and (iv) frame order modeling (fom). compared to prior work, the key novelty is vsm and fom, which encourage explicit temporal alignment between modalities as well as full-scale exploitation of the sequential nature of video input. in vsm, the model considers not only global alignment (predicting whether a subtitle matches the input video clip), but also local temporal alignment (retrieving the moment where the subtitle should be localized in the video clip). in fom, we randomly select and shuffle a subset of video frames, and the model is trained to restore their original order. extensive ablation studies demonstrate that both vsm and fom play a critical role in video+language pre-training.
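The FOM objective just described is easy to make concrete: shuffle a random subset of frame positions and train the model to recover each shuffled frame's original position. A minimal sketch of the target construction, where the 15% shuffle ratio is an assumed hyperparameter, not the published value:

```python
# Sketch of Frame Order Modeling (FOM) target construction: a random subset of
# frame positions is shuffled, and the model must predict each shuffled frame's
# original position. The shuffle ratio is an assumption for illustration.
import random

def make_fom_targets(num_frames: int, shuffle_ratio: float = 0.15):
    positions = list(range(num_frames))
    k = max(2, int(num_frames * shuffle_ratio))
    chosen = random.sample(positions, k)   # slots whose order is destroyed
    permuted = chosen[:]
    random.shuffle(permuted)

    new_order = positions[:]
    for src, dst in zip(chosen, permuted):
        new_order[dst] = src               # frame src is shown at slot dst

    # Classification target at each shuffled slot: the frame's original index.
    targets = {dst: src for src, dst in zip(chosen, permuted)}
    return new_order, targets

order, targets = make_fom_targets(10)
print(order, targets)  # reordered frame indices and per-slot original positions
```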
to empower the model with richer knowledge beyond the instructional videos used in prior work, we jointly train hero on both howto100m (narrated instructional videos) @xcite and a large-scale tv dataset (containing tv episodes spanning different genres) @xcite @xcite . compared to the factual descriptions in howto100m, the tv dataset contains more complex plots that require comprehensive interpretation of human emotions, social dynamics and causal relations of events, making it a valuable supplement to howto100m and a closer approximation to real-life scenarios. existing pre-trained models are evaluated on the youcook2 @xcite and msr-vtt @xcite datasets. youcook2 focuses on cooking videos only, and the captions in msr-vtt are very simple. to evaluate our model on more challenging benchmarks, we collect two new datasets on video-moment retrieval and question answering, how2r and how2qa. in addition, we evaluate hero on popular retrieval and qa tasks such as tvr @xcite and tvqa @xcite , where hero outperforms existing models by a large margin. we further demonstrate the generalizability of our model by adapting it to (i) diverse downstream tasks: video-and-language inference and video captioning tasks, achieving new state of the art on the violin @xcite and tvc @xcite benchmarks; (ii) different video types: single-channel videos (video-only) and multi-channel videos (video + subtitle), reporting superior performance over the existing state of the art on didemo (anne hendricks et al., 2017a) and msr-vtt. our main contributions are summarized as follows. (i) we present hero, a hierarchical transformer-based model for video+language representation learning. (ii) we propose the new pre-training tasks vsm and fom, which complement the mlm and mfm objectives by better capturing temporal alignment between modalities in both global and local contexts. (iii) different from previous work that mainly relies on howto100m, we include additional video datasets for pre-training, encouraging the model to learn from richer and more diverse visual content. (iv) we collect two new datasets based on howto100m for video-moment retrieval/qa, and will release the new benchmarks to foster future study. hero achieves new state of the art across all the evaluated tasks. ## related work since the birth of bert @xcite , there has been continuing advancement in language model pre-training, such as xlnet @xcite , roberta @xcite , albert @xcite , unilm @xcite , and t5 @xcite , which epitomizes the superb power of large-scale pre-training. clustered around bert, there is parallel growing interest in model compression @xcite and extension to generation tasks @xcite . branching out from language processing to multimodal domains, subsequent studies also emerge in the vision+language space. prominent work includes vilbert @xcite , lxmert @xcite , vl-bert @xcite , unicoder-vl @xcite , b2t2 @xcite , uniter @xcite and villa @xcite . a detailed review can be found in appendix a.7. in contrast to the boom in the image+text area, pre-training for video+language is still in its infancy. so far, videobert @xcite , cbt @xcite , mil-nce @xcite , act-bert @xcite and univilm @xcite are the only existing works exploring this space, covering downstream tasks from text-based video retrieval @xcite and video question answering @xcite to video captioning @xcite .
in this paper, we aim to propel video+language omni-representation learning in four dimensions: (i) better model architecture design; (ii) better pre-training task design; (iii) diversification of training corpora; and (iv) new high-quality benchmarks for downstream evaluation. ## hierarchical video+language encoder in this section, we explain the proposed hero architecture and the four pre-training tasks in detail. ## experiments in this section, we describe comprehensive experiments on downstream tasks and provide ablation studies for an in-depth analysis of different pre-training settings. to validate the effectiveness of hero, we evaluate on a wide variety of downstream tasks, including text-based video/video-moment retrieval, video question answering, video-and-language inference, and video captioning. we consider 6 existing benchmarks: tvr @xcite , tvqa @xcite , violin @xcite , tvc @xcite , didemo (anne hendricks et al., 2017a), and msr-vtt @xcite . detailed descriptions and evaluation metrics for each task can be found in appendix a.6. ## conclusion in this paper, we present a hierarchical encoder for video+language omni-representation pre-training. our hero model presents a hierarchical architecture, consisting of a cross-modal transformer and a temporal transformer for multi-modal fusion. novel pre-training tasks are proposed to capture temporal alignment both locally and globally. pre-trained on two large-scale video datasets, hero exceeds state of the art by a significant margin when transferred to multiple video-and-language tasks. two new datasets on text-based video-moment retrieval and video qa are introduced to serve as additional benchmarks for downstream evaluation. we consider extension of our model to other video-and-language tasks as future work, as well as developing more well-designed pre-training tasks. pre-training greatly lifts hero performance on violin by approximately +2.9%. however, hero without pre-training presents worse performance than the sota baseline. unlike multistream, which leverages fine-grained region-level features, our results are reported on global frame-level features. therefore, it may be difficult for hero to capture the inconsistency between hypothesis and video content. for example, changes of hypotheses about region-level attributes (color, shape, etc.) may result in different conclusions. extending hero for region-level video representations could be an interesting future direction. hero is also extensible to generation tasks: multi-modal video captioning. our results on tvc show that hero with pre-training surpasses mmt by a large margin. although pre-training is only applied to the encoder, it significantly improves hero performance on tvc across all metrics. when no pre-training is applied, hero is slightly inferior to the sota baseline. our hypothesis is that tvc has short video context (with videos of 9 seconds on average) but our model is designed for long video representation learning (tvr/tvqa, with videos of 76 seconds on average). how to design pre-training tasks for mmt on tvc, or including decoder pre-training for hero, are left for future work.
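A structural sketch of the hierarchy described above, in PyTorch: a cross-modal transformer fuses each subtitle sentence with its local frames, and a temporal transformer then contextualizes the resulting frame embeddings across the whole clip. Layer counts, dimensions, and the way frame slots are selected are assumptions for illustration, not the published configuration.

```python
# Structural sketch of HERO's hierarchy: local cross-modal fusion per
# (subtitle, frames) pair, then global temporal contextualization over the
# clip. Dimensions and layer counts are illustrative assumptions.
import torch
import torch.nn as nn

class HierarchicalEncoderSketch(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        fusion_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        temporal_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.cross_modal = nn.TransformerEncoder(fusion_layer, num_layers=2)
        self.temporal = nn.TransformerEncoder(temporal_layer, num_layers=2)

    def forward(self, frame_feats, subtitle_feats):
        # frame_feats: (segments, frames_per_sub, dim); subtitle_feats: (segments, tokens, dim)
        fused_frames = []
        for frames, tokens in zip(frame_feats, subtitle_feats):
            # Local context: concatenate one subtitle's tokens with its frames.
            local = self.cross_modal(torch.cat([tokens, frames], dim=0).unsqueeze(0))
            fused_frames.append(local[0, tokens.size(0):])  # keep only frame slots
        # Global context: temporal transformer over all frames in temporal order.
        clip = torch.cat(fused_frames, dim=0).unsqueeze(0)
        return self.temporal(clip)

enc = HierarchicalEncoderSketch()
frames = torch.randn(4, 6, 768)   # 4 subtitle-aligned segments, 6 frames each
subs = torch.randn(4, 10, 768)    # 10 subtitle tokens per segment
print(enc(frames, subs).shape)    # torch.Size([1, 24, 768])
```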
| 3,882
|
939
| 2,023
|
Prompting with Pseudo-Code Instructions
|
Prompting with natural language instructions has recently emerged as a popular method of harnessing the capabilities of large language models (LLM). Given the inherent ambiguity present in natural language, it is intuitive to consider the possible advantages of prompting with less ambiguous prompt styles, like pseudo-code. In this paper, we explore whether prompting via pseudo-code instructions helps improve the performance of pre-trained language models. We manually create a dataset of pseudo-code prompts for 132 different tasks spanning classification, QA, and generative language tasks, sourced from the Super-NaturalInstructions dataset. Using these prompts along with their counterparts in natural language, we study their performance on two LLM families: BLOOM and CodeGen. Our experiments show that using pseudo-code instructions leads to better results, with an average increase (absolute) of 7-16 points in F1 scores for classification tasks and an improvement (relative) of 12-38% in aggregate ROUGE-L scores across all tasks. We include detailed ablation studies which indicate that code comments, docstrings, and the structural clues encoded in pseudo-code all contribute towards the improvement in performance. To the best of our knowledge, our work is the first to demonstrate how pseudo-code prompts can be helpful in improving the performance of pre-trained LMs.
|
https://aclanthology.org/2023.emnlp-main.939
|
## introduction prompting with natural language instructions has recently emerged as a popular method of harnessing the capabilities of large language models. in addition, models are often fine-tuned using instructions on a large collection of datasets to help improve the ability of lms to follow instructions and their performance on unseen tasks @xcite . listing 1 shows an example pseudo-code instruction for the task from @xcite ; a successful model is expected to use the provided pseudo-code instructions and output responses to a pool of evaluation instances.

1 def generate_sentiment(sentence: str) -> str:
2     """for the given sentence, the task is to
3     predict the sentiment. for positive
4     sentiment return "positive" else return
5     "negative"."""

however, natural language instructions can be ambiguous and under-specified, and therefore have multiple interpretations; including detailed instructions may not always be beneficial, as it can add to the complexity of reasoning for models. this has led to the growing body of work around 'prompt-engineering', where specialized prompting strategies are developed for different domains and task types @xcite @xcite @xcite . in addition, inference-time prompting strategies that specifically aid multi-step reasoning have also been found to be helpful, e.g., the inclusion of chain-of-thought reasoning in few-shot settings results in improved performance over standard prompts @xcite , and the infamous "let's think step-by-step" prompt boosts 0-shot performance @xcite . ## related work finetuning large language models on instruction datasets can enhance their performance and even their ability to generalize to unseen tasks @xcite . many aspects of instruction finetuning, such as the number of tasks, model size, and finetuning on chain-of-thought data, have been found to be useful @xcite . consequently, significant efforts have been invested in manually creating instruction datasets, as well as using existing generative models to train and evaluate language models @xcite @xcite . the instructions available in instruction tuning datasets are mostly in natural language, but have been applied to both natural language tasks and programming tasks. however, alternatives to natural language instructions, such as programming language code, pseudo-code, symbols (maccartney and manning, 2007), etc., have not been thoroughly explored even for programming tasks. compared to natural language, code or pseudo-code has less ambiguity due to its inherent nature of using functions or steps that contribute towards accomplishing a task. this makes them a natural choice for specifying instructions. recently, a few works (marvinai; @xcite) have explored code and pseudo-code as inputs. unlike contemporaneous work by @xcite , we find that pseudo-code instructions indeed provide better performance over nl instructions on a wide variety of tasks. ## dataset the super-naturalinstructions dataset @xcite comprises 1,616 diverse nlp tasks, and each task contains the task instruction, positive/negative examples, and instances. we sampled a mixture of 132 tasks that did not require multilingual capabilities and re-wrote the instructions for this subset using python constructs. note that we borrow python constructs only to express our prompts in pseudo-code; our prompts do not result in executable python code. further, we do not include any additional steps/instructions that were not present in the original natural language instructions.
all task instructions follow the schema as described in listing 1. the schema consists of the following elements. function prototype: this defines the prototype of the main pseudo-code function. the function names are descriptive and summarize the task to be performed. they also include all variables passed as input, along with their data types and the return type. we follow the pep 8 style guidelines for writing the pseudo-code and use strongly typed prototypes. we avoid declaring global variables whenever possible and pass them as arguments to a method. to the extent possible, we also avoid the use of classes and enumerations. line number 1 in listing 1 provides an example function prototype for a sentiment classification task. ## evaluation in order to study if instruction specification via pseudo-code results in improved performance over baseline nl (english) instructions, we choose to experiment with the bloom @xcite and codegen @xcite models. our choice of models is motivated by the fact that these models have not been instruction-fine-tuned on the natural instructions dataset. in addition, they have both been trained on code and natural language data. the bloom models are trained on the roots corpus @xcite , consisting of 46 natural and 13 programming languages. on the other hand, the codegen models are trained on the pile corpus @xcite and google's publicly available bigquery and bigpython datasets @xcite . the bloom models have been trained on a mixture of natural language and code simultaneously. as for the codegen models we utilize, they were initially trained on natural language and subsequently received additional training on code. ## conclusion and future work in this paper we presented our work on prompting with pseudo-code instructions. we created a collection of pseudo-code instructions comprising 132 nlp tasks from the super-naturalinstructions dataset @xcite . we evaluated the performance of two families of models, codegen and bloom, at different model sizes, and found that prompting all models with pseudo-code instructions results in significant gains as compared to prompting with nl instructions. our work opens up multiple directions of future work. it is interesting to observe that not only do pseudo-code instructions help when used with code models, they also work better on models designed for natural language tasks. in addition, the fact that the code models used in our experiments perform better than nl models, even when prompted with natural language instructions, suggests that it could be useful to explore instruction tuning of code models instead of pure nl models for nl applications. based on the findings of this paper, it may also be useful to consider the effects of instruction fine-tuning with pseudo-code instructions as opposed to nl instructions. another aspect worth studying is how traditional chain-of-thought may compare with pseudo-code prompts: how would reasoning enabled by pseudo-code instructions compare with chain-of-thought reasoning, with and without fine-tuning? further, pseudo-code instructions may not only be used as direct inputs to a model, but they could also be used to create intermediate responses that a model needs to generate prior to returning a response.
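To make the schema concrete, here is a sketch of how a natural-language instruction and its pseudo-code counterpart might be assembled into prompts for the same instance. The task wording and helper names are invented for illustration; only the schema (typed function prototype plus docstring, following PEP 8) comes from the paper.

```python
# Sketch: the same task rendered as an NL prompt and a pseudo-code prompt.
# Task wording and helper names are illustrative, not taken from the dataset.
NL_INSTRUCTION = (
    "For the given sentence, predict the sentiment. "
    'Return "positive" for positive sentiment, otherwise return "negative".'
)

PSEUDO_CODE_INSTRUCTION = '''def generate_sentiment(sentence: str) -> str:
    """For the given sentence, the task is to predict the sentiment.
    For positive sentiment return "positive" else return "negative"."""
'''

def build_nl_prompt(instance: str) -> str:
    return f"{NL_INSTRUCTION}\n\nInput: {instance}\nOutput:"

def build_pseudo_code_prompt(instance: str) -> str:
    # The model is asked to complete the call; the code is never executed.
    return f'{PSEUDO_CODE_INSTRUCTION}\n>>> generate_sentiment("{instance}")\n'

print(build_pseudo_code_prompt("the movie was a delight"))
```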
| 22,714
|
17
| 2,024
|
Can Rule-Based Insights Enhance LLMs for Radiology Report Classification? Introducing the RadPrompt Methodology.
|
Developing imaging models capable of detecting pathologies from chest X-rays can be cost and time-prohibitive for large datasets as it requires supervision to attain state-of-the-art performance. Instead, labels extracted from radiology reports may serve as distant supervision since these are routinely generated as part of clinical practice. Despite their widespread use, current rule-based methods for label extraction rely on extensive rule sets that are limited in their robustness to syntactic variability. To alleviate these limitations, we introduce RadPert, a rule-based system that integrates an uncertainty-aware information schema with a streamlined set of rules, enhancing performance. Additionally, we have developed RadPrompt, a multi-turn prompting strategy that leverages RadPert to bolster the zero-shot predictive capabilities of large language models, achieving a statistically significant improvement in weighted average F1 score over GPT-4 Turbo. Most notably, RadPrompt surpasses both its underlying models, showcasing the synergistic potential of LLMs with rule-based models. We have evaluated our methods on two English Corpora: the MIMIC-CXR gold-standard test set and a gold-standard dataset collected from the Cambridge University Hospitals.
|
https://aclanthology.org/2024.bionlp-1.17
|
## introduction supervised deep learning for medical imaging classification has accomplished significant milestones. in the chest x-ray (cxr) domain, such models have exhibited predictive capabilities on par with expert physicians @xcite and are being utilized in collaborative clinical settings. annotating medical images, however, is expensive and arduous: it requires a committee of expert radiologists to resolve the inherently high degree of annotator variance and subjectivity @xcite . this issue is particularly problematic considering the global shortage of radiologists @xcite @xcite . instead, we often have access to a form of distant supervision: the radiology report. radiology reports are semi-structured free-text interpretations of an x-ray image and are generated as a routine part of clinical practice to communicate findings. in the past, rule-based models @xcite have been used to extract structured labels from radiology reports in various imaging datasets, including chestx-ray14 @xcite , chexpert @xcite , mimic-cxr @xcite and brax @xcite . however, those rule-based methods are often based on elementary techniques and, thus, exhibit limited robustness to syntactic variation. naturally, supervised deep learning models offer superior performance through their robustness to syntactic variability @xcite . in contrast, large language models (llms) represent a significant improvement over rule-based models in an unsupervised setting and have achieved impressive performance in the field of radiology @xcite @xcite . in this paper, we present radpert, a rule-based model built on the radgraph knowledge graph @xcite . radpert leverages entity-level uncertainty labels from radgraph, reducing the need for a comprehensive rule set and enhancing its resilience to syntactic variations. we have evaluated radpert internally on mimic-cxr and externally on a dataset collected from the cambridge university hospitals (cuh). radpert surpasses chexpert, the former rule-based state-of-the-art (sota), by achieving a statistically significant improvement in weighted average f1 score. furthermore, we explore the collaborative potential of llms with rule-based models through radprompt. radprompt is a multi-turn prompting strategy that employs radpert as an implicit means of encoding medical knowledge (figure 1). in fact, radprompt, based on gpt-4 turbo, manages to outperform both its underlying models in a zero-shot setting. ## related work numerous natural language processing methods have been developed to derive structured predictions from radiology reports @xcite @xcite @xcite . many of those approaches are designed for the multitask classification of radiology reports, written in english, into labels representing prevalent pathologies from cxrs. each such label can exhibit one of four output classes: null, positive, negative and uncertain. chexpert @xcite , the rule-based sota, follows an approach based on regular expression matching and the universal dependency graph (udg) of a radiology report. due to the rudimentary regular expression matching, however, chexpert is sensitive to syntactic variation. thus, multiple over-generalized rules are used in an attempt to alleviate these shortcomings. furthermore, the udg is a type of information extraction that does not explicitly identify negation and uncertainty. therefore, its ability to detect uncertainty in complex phrases is hampered despite the extensive rule set.
extensions of chexpert have been developed for brazilian portuguese @xcite and german @xcite . chexbert @xcite is a semi-supervised model pretrained on automatically extracted labels from the chexpert model, fine-tuned on manually annotated reports, and evaluated on 687 mimic-cxr gold-standard test set reports. however, the published model weights of chexbert differ from the original model. this discrepancy complicates comparisons on the mimic-cxr dataset as the published model is fine-tuned on unspecified mimic-cxr manually annotated reports, which can potentially overlap with the mimic-cxr gold-standard test set. recent work has also explored the adoption of llms for radiology report classification. @xcite examine the zero- and few-shot capabilities of llms. however, they mainly treat the task as a binary classification for each pathology. namely, for multitask classification, they only report the few-shot results on an unpublished institutional dataset. chex-gpt @xcite utilizes zero-shot gpt-4 labels as distant supervision to fine-tune a bert-based model. nonetheless, they also simplify the task into binary classification. alternative approaches to the classification of chest x-rays (cxrs) explore moving away from the distantly supervised paradigm of training unimodal vision models on classifying structured labels extracted from radiology reports. in lieu of structured prediction, vision-language (vl) models are trained to align the embedding representations of cxrs with the representations of the corresponding radiology reports via self-supervised contrastive learning objectives @xcite @xcite @xcite . this alignment task is transformed into cxr classification through the cosine similarity of cxr embeddings to the embeddings of textual prompts representing the existence or absence of pathologies. however, vision models trained with the structured prediction paradigm outperform vl models such as chexzero @xcite , even when the latter utilizes an expert-annotated validation set for selecting optimal classification thresholds. in this paper, we will focus on improving the unsupervised sota for the multitask classification of radiology reports. ## limitations while this study demonstrates promising improvements in radiology report classification using the radprompt methodology, several limitations must be considered. radpert and radprompt are exclusively developed and tested for the english language. the study also centers around a list of pathologies typical of chest x-rays. as such, the extension of our methodologies to other languages, types of medical imaging, and additional pathologies was not verified. furthermore, previous studies have highlighted discrepancies between labels from radiology report annotations and those from the corresponding imaging study annotations @xcite . the sources of such inconsistencies include incomplete radiology report impressions, hierarchical relationships within labels, and the undeniable uncertainty of the task. in future work, we aim to study this effect within the cuh test set. due to ethical considerations, we are currently unable to perform inference for the cuh test set through third-party apis. thus, we have not evaluated radprompt externally for sota llms. we expect to overcome this limitation after the planned release of the cuh dataset. additionally, we cannot estimate the computational cost and carbon footprint for gpt-4-based radprompt due to a lack of specific metrics.
in the appendix, we provide carbon footprint estimates for the llama-2-based radprompt, whose footprint is significantly higher than that of radpert and chexpert. nonetheless, radpert delivers performance comparable to gpt-4 while operating on a commercial cpu with minimal carbon emissions, underscoring its benefits in resource-limited environments. finally, there is an inherent degree of ambiguity in classifying radiology reports, especially as it pertains to the uncertainty labels. we aim to extend current datasets with labels from multiple annotators. ## conclusions this paper introduced radpert, a rule-based system enhanced by the radgraph information schema, demonstrating significant improvements in the classification of radiology reports. by leveraging entity-level uncertainty labels, radpert reduces reliance on comprehensive rule sets. our evaluations show that radpert surpasses chexpert, the previous rule-based sota, by achieving an 8.0% (95% ci: 5.5%, 10.8%) increase in f1 score, with confidence intervals strongly supporting this improvement. further extending the application of radpert, we developed radprompt, a multi-turn prompting strategy that utilizes insights from radpert to enhance the zero-shot prediction capabilities of large language models. radprompt demonstrated a 2.1% (95% ci: 0.3%, 4.1%) improvement in f1 score over gpt-4 turbo, indicating its potential to refine predictions in clinical settings. these results highlight the growing synergy between structured rule-based systems and large language models, offering a promising direction for future research in biomedical natural language processing. as we continue to refine these tools, future work will focus on expanding the existing datasets and addressing the discrepancies between gold-standard image labels and those extracted from radiology reports.
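to make the multi-turn idea concrete, here is a hypothetical sketch of how rule-based labels could be injected into a second prompting turn in the spirit of radprompt; the prompt wording, function name, and label format are our assumptions, not the authors' actual prompts.

```python
def build_radprompt_turns(report: str, radpert_labels: dict) -> list:
    """Build a two-turn chat in the spirit of RadPrompt: turn 1 asks the LLM
    for its own reading of the report; turn 2 injects the rule-based
    (RadPert-style) labels and asks the model to reconcile them."""
    turn1 = {
        "role": "user",
        "content": (f"Radiology report:\n{report}\n\n"
                    "For each pathology, answer positive, negative, "
                    "uncertain, or not mentioned."),
    }
    hints = ", ".join(f"{p}: {s}" for p, s in radpert_labels.items())
    turn2 = {
        "role": "user",
        "content": (f"A rule-based extractor labeled this report as: {hints}. "
                    "Reconsider your previous answer and give final labels."),
    }
    return [turn1, turn2]

msgs = build_radprompt_turns(
    "No focal consolidation. Possible small effusion.",
    {"consolidation": "negative", "effusion": "uncertain"},
)
```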
| 28,491
|
70
| 2,023
|
Multi-Modal Knowledge Graph Transformer Framework for Multi-Modal Entity Alignment
|
Multi-Modal Entity Alignment (MMEA) is a critical task that aims to identify equivalent entity pairs across multi-modal knowledge graphs (MMKGs). However, this task faces challenges due to the presence of different types of information, including neighboring entities, multi-modal attributes, and entity types. Directly incorporating the above information (e.g., by concatenation or attention) can lead to an unaligned information space. To address these challenges, we propose a novel MMEA transformer, called Meaformer, that hierarchically introduces neighbor features, multi-modal attributes, and entity types to enhance the alignment task. Taking advantage of the transformer’s ability to better integrate multiple sources of information, we design a hierarchical modifiable self-attention block in a transformer encoder to preserve the unique semantics of different information. Furthermore, we design two entity-type prefix injection methods to reintegrate entity-type information using type prefixes, which help to restrict the global information of entities not present in the MMKGs.
|
https://aclanthology.org/2023.findings-emnlp.70
|
## introduction multi-modal entity alignment (mmea) is a challenging task that aims to identify equivalent entity pairs across multiple knowledge graphs that feature different modalities of attributes, such as text and images. to accomplish this task, sophisticated models are required to effectively leverage information from different modalities and accurately align entities. this task is essential for various applications, such as cross-lingual information retrieval, question answering @xcite , and recommendation systems @xcite . mmea @xcite @xcite is challenging due to the heterogeneity of mmkgs (e.g., different neighbors, multi-modal attributes, distinct types), which makes it difficult to learn rich knowledge representations. previous approaches such as poe @xcite concatenated all modality features to create composite entity representations but failed to capture interactions among heterogeneous modalities. more recent works @xcite designed multi-modal fusion modules to better integrate attributes and entities, but still did not fully exploit the potential interactions among modalities. these methods also ignored inter-modality dependencies between entity pairs, which could lead to incorrect alignment. generally speaking, although mmkgs offer rich attributes and neighboring entities that could be useful for multi-modal entity alignment, current methods have limitations in (i) ignoring the differentiation and personalization of the aggregation of heterogeneous neighbors and modalities, leading to the misalignment of cross-modal semantics, and (ii) lacking the use of entity heterogeneity, resulting in non-discriminative representations of entities with different meanings/types. therefore, the major challenge of the mmea task is how to perform differentiated and personalized aggregation of heterogeneous information from the neighbors, modalities, and types. although such information is beneficial to entity alignment, directly fusing it will lead to misalignment of the information space, as illustrated in figure 1 . firstly, notable disparities between different modalities make direct alignment a challenging task. for example, both the visual attribute of entity ruby in mmkg1 and the neighbor information of the entity ruby in mmkg2 contain similar semantics of programming, but data heterogeneity may impede effective utilization of this information. secondly, complex relationships between entities require a thorough understanding and modeling of contextual information and semantic associations. entities such as ruby, perl, and larry wall possess unique attributes, and their inter-relationships are non-trivial, necessitating accurate modeling based on contextual information and semantic associations. furthermore, the existence of multiple meanings for entities further exacerbates the challenge of distinguishing between two entities, such as in the case of ruby, which has different meanings in mmkg1 and mmkg3, where it may be categorized as a jewelry entity or a programming language entity, respectively. to overcome the aforementioned challenges, we propose a novel multi-modal entity alignment transformer named moalign. our framework hierarchically introduces neighbor features, multi-modal attributes, and entity types to enhance the alignment task. we leverage the transformer architecture, which is known for its ability to process heterogeneous data, to handle this complex task.
moreover, to enable targeted learning on different modalities, we design a hierarchical modifiable self-attention block in the transformer encoder, which builds associations of task-related intra-modal features through the layered introduction. additionally, we introduce positional encoding to model entity representation from both structure and semantics simultaneously. furthermore, we integrate entity-type information using an entity-type prefix, which helps to restrict the global information of entities that are not present in the multi-modal knowledge graphs. this prefix enables better filtering out of unsuitable candidates and further enriches entity representations. to comprehensively evaluate the effectiveness of our proposed approach, we design training objectives for both entity and context evaluation. our extensive experiments on benchmark datasets demonstrate that our approach outperforms strong competitors and achieves excellent entity alignment performance. our contributions can be summarized as follows. multi-modal entity alignment task. multi-modal entity alignment @xcite @xcite aims to determine if two entities from different multi-modal knowledge graphs refer to the same real-world entity. this involves calculating the similarity between pairs of entities, known as alignment seeds. the goal is to learn entity representations from two multi-modal knowledge graphs. ## framework this section introduces our proposed framework moalign. as shown in figure 2 , we introduce positional encoding to simultaneously model entity representation from both modality and structure. to hierarchically introduce neighbor and multi-modal attributes, we design a hierarchical modifiable self-attention block. this block builds associations of task-related intra-modal features through the layered introduction. furthermore, for integrating entity-type information, we design a prefix-injected self-attention mechanism, which helps to restrict the global information of entities not present in the mmkgs. additionally, moalign designs training objectives for both entity and context evaluation to comprehensively assess effectiveness. ## conclusion this paper proposes a novel mmea framework. it incorporates cross-modal alignment knowledge using a two-stage transformer encoder to better capture complex inter-modality dependencies and semantic relationships. it includes an mmkg transformer encoder that uses self-attention mechanisms to establish associations between intra-modal features relevant to the task. our experiments show that our approach outperforms competitors.
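a minimal sketch of what a prefix-injected self-attention layer could look like, assuming learned entity-type prefix vectors prepended to the keys and values; the parameterization below is invented for illustration and is not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn

class PrefixInjectedSelfAttention(nn.Module):
    """Illustrative sketch: entity-type prefix vectors are prepended to the
    key/value sequences so every token can attend to type information, while
    the queries (and thus the output length) stay token-only."""
    def __init__(self, dim: int, n_heads: int, n_prefix: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # learned type-prefix vectors (hypothetical parameterization)
        self.prefix_k = nn.Parameter(torch.randn(n_prefix, dim))
        self.prefix_v = nn.Parameter(torch.randn(n_prefix, dim))

    def forward(self, x):                      # x: (batch, seq, dim)
        b = x.size(0)
        pk = self.prefix_k.unsqueeze(0).expand(b, -1, -1)
        pv = self.prefix_v.unsqueeze(0).expand(b, -1, -1)
        k = torch.cat([pk, x], dim=1)          # prepend prefixes to keys
        v = torch.cat([pv, x], dim=1)          # ... and to values
        out, _ = self.attn(x, k, v)            # queries remain the tokens
        return out

layer = PrefixInjectedSelfAttention(dim=64, n_heads=4, n_prefix=8)
y = layer(torch.randn(2, 10, 64))              # -> shape (2, 10, 64)
```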
| 24,158
|
21
| 2,023
|
Triple-Hybrid Energy-based Model Makes Better Calibrated Natural Language Understanding Models
|
Though pre-trained language models achieve notable success in many applications, they are often criticized for over-confident predictions. Specifically, in-distribution (ID) miscalibration and out-of-distribution (OOD) detection are the main concerns. Recently, some works based on energy-based models (EBM) have shown great improvements on both ID calibration and OOD detection for images. However, this is rarely explored in natural language understanding tasks due to the non-differentiability of text data, which makes EBM training more difficult. In this paper, we first propose a triple-hybrid EBM which combines the benefits of a classifier, a conditional generative model and a marginal generative model altogether. Furthermore, we leverage contrastive learning to approximately train the proposed model, which circumvents the non-differentiability issue of text data. Extensive experiments have been done on GLUE and six other multiclass datasets in various domains. Our model outperforms previous methods in terms of ID calibration and OOD detection by a large margin while maintaining competitive accuracy.
|
https://aclanthology.org/2023.eacl-main.21
|
## introduction since many industrial applications involve safety-critical domains such as healthcare @xcite @xcite @xcite , anticipating credit card defaults @xcite and self-driving @xcite , it's essential for machine learning systems to provide not only accurate but also well-calibrated predictions @xcite , which can help to decide whether they can be trusted. however, models achieving high accuracy usually suffer from overconfidence and miscalibration @xcite @xcite . this motivates an interesting and important area that attempts to achieve a better trade-off between accuracy and calibration. in addition to id calibration, it's even more important for machine learning models to produce high uncertainty when ood data is observed, rather than to produce wrong yet wildly confident predictions. related works. to overcome the problem of miscalibration, numerous methods have been proposed. the natural way is post-hoc calibration, which transforms the output of the original network into calibrated confidence scores while maintaining the network's accuracy @xcite @xcite . the second way to mitigate miscalibration is to add regularization during training, such as label smoothing @xcite and mixup @xcite . @xcite and @xcite further show that the aforementioned methods can be applied to improve the calibration of pre-trained language models on nlu tasks. the third way is to design a specific loss function to minimize the discrepancy between accuracy and confidence. for example, @xcite lately propose id and ood regularizers to leverage the relationship between accuracy and uncertainty, obtaining a significant improvement over previous methods in id calibration and ood detection. energy-based models. in another line of work, joint ebm (jem; @xcite) has shown great improvements on id calibration and ood detection for images without an explicit calibration correction mechanism. the core idea is to reinterpret a joint distribution p_θ(x, y) from a neural classifier p_θ(y|x) in the perspective of ebms and jointly optimize the marginal distribution p_θ(x) and the neural classifier p_θ(y|x). @xcite further investigate the ood detection performance with different training approaches for p_θ(x) such as stochastic gradient langevin dynamics (sgld; @xcite), sliced score matching (ssm; @xcite) and variational entropy regularized approximate maximum likelihood (vera; @xcite). besides, @xcite propose an implicit generative model based on ebms (igebm) and apply sgld to optimize p_θ(x|y). it achieves significantly better ood detection than other generative models. however, as shown by @xcite , the accuracy of igebm drops dramatically to 49.1% on cifar10 while standard finetuning can achieve 95.8% accuracy. this result indicates that different log-likelihood factorizations lead to great gaps in accuracy, id calibration and ood detection. moreover, training methods such as sgld, ssm and vera need to calculate gradients with respect to the inputs; the non-differentiability of text data limits the application of these methods to both calibration and ood detection for nlu tasks. recently, @xcite propose a joint training of the classifier p_θ(y|x) and the marginal distribution p_θ(x) based on residual ebm @xcite for nlu tasks. different from jem, their model is more flexible by designing various energy functions for the marginal distribution without any restriction on the joint distribution p_θ(x, y).
to estimate the parameters of the marginal distribution p_θ(x), they propose to apply noise contrastive estimation (nce; @xcite) to train the energy model by discriminating the real data from fake data generated by a noise distribution. to make the noise distribution as close as possible to the data distribution, they finetune a task-specific gpt-2 @xcite . though this achieves improvements on id calibration, finetuning gpt-2 is often resource-intensive compared to previous methods @xcite . moreover, the quality and quantity of fake samples generated by the noise distribution have great impacts on nce training @xcite . ## triple-hybrid energy-based model motivation. many works @xcite @xcite have shown that ebms could significantly reduce the expected calibration error and improve out-of-distribution detection for image classification. specifically, the jem proposed in @xcite factorizes the joint distribution log p_θ(x, y) into log p_θ(x) + log p_θ(y|x), where log p_θ(y|x) maintains the classification performance and log p_θ(x) is the generative term which contributes to better calibration and out-of-distribution detection. on the contrary, the igebm proposed in @xcite factorizes the joint distribution log p_θ(x, y) into log p_θ(y) + log p_θ(x|y) for implicit generation and surprisingly achieves better ood performance. however, the lack of p_θ(y|x) leads to terrible classification performance: it is shown in @xcite that the classification accuracy drops dramatically to 49.1% on the cifar10 dataset, while jem reaches 92.9%. on the other hand, liu and abbeel (2020) proposed a hybrid discriminative-generative energy-based model (hdge) for both classification and generation. the loss function consists of a discriminative conditional log-likelihood log p_θ(y|x) and a generative conditional log-likelihood log p_θ(x|y). compared to igebm, it includes log p_θ(y|x) and thus achieves better classification performance. compared to jem, it includes the conditional generative model rather than the marginal generative model. in other words, jem aims to reduce the energy of data from the population p_θ(x), while hdge aims at reducing the energy of compatible pairs (x, y). this motivates us to combine the benefits of both the conditional and the marginal generative model for better calibration and ood detection. ## experiments in this section, we conduct thorough experiments to investigate the empirical performance of our proposed methods. we first introduce the criteria for id calibration and ood detection. ## conclusion in our work, we propose a triple-hybrid ebm which combines a classifier, a conditional generative model and a marginal generative model into a unified framework called them. to train ebms effectively and efficiently, we leverage contrastive learning to approximate the log-likelihood of ebms with negligible computational resources. extensive experiments demonstrate that our model outperforms state-of-the-art methods in terms of id calibration and ood detection with competitive accuracy. we further apply contrastive learning to jem and igebm without considering the generation ability to obtain jem(cl) and igebm(cl) respectively. compared to jem(cl) and hdge(cl), our model is more robust to the hyper-parameters of contrastive learning, including the temperature and the size of the memory bank, in terms of id calibration and ood detection.
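to make the three factorizations concrete, the sketch below assembles a triple-hybrid objective from classifier logits, using batch-level contrastive normalization in place of intractable partition functions; the weighting and exact normalization are assumptions for illustration, not the paper's precise objective.

```python
import torch
import torch.nn.functional as F

def triple_hybrid_loss(logits, labels, alpha=1.0, beta=1.0):
    """Illustrative combination of the three terms discussed above.
    `logits` holds f_theta(x)[y] for a batch; following the JEM/HDGE view,
    logits double as negative energies, and the generative terms are
    approximated contrastively within the batch."""
    # discriminative term: log p(y|x), the usual cross-entropy
    ce = F.cross_entropy(logits, labels)
    # conditional generative term: log p(x|y), each x competing with the
    # other batch items of the same class column (HDGE-style)
    log_pxy = F.log_softmax(logits, dim=0)          # normalize over batch
    cond_gen = -log_pxy[torch.arange(len(labels)), labels].mean()
    # marginal generative term: log p(x) proportional to logsumexp_y f(x)[y]
    # (JEM-style), again normalized contrastively over the batch
    log_px = torch.logsumexp(logits, dim=1)
    marg_gen = -F.log_softmax(log_px, dim=0).mean()
    return ce + alpha * cond_gen + beta * marg_gen

loss = triple_hybrid_loss(torch.randn(8, 3), torch.randint(0, 3, (8,)))
```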
| 21,405
|
39
| 2,023
|
UMUTeam and SINAI at SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis using Multilingual Large Language Models and Data Augmentation
|
This work presents the participation of the UMUTeam and the SINAI research groups in the SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis. The goal of this task is to predict the intimacy of a set of tweets in 10 languages: English, Spanish, Italian, Portuguese, French, Chinese, Hindi, Arabic, Dutch and Korean, of which the last 4 are not in the training data. Our approach to address this task is based on data augmentation and the use of three multilingual Large Language Models (multilingual BERT, XLM and mDeBERTA) by ensemble learning. Our team ranked 30th out of 45 participants. Our best results were achieved with two unseen languages: Korean (16th) and Hindi (19th).
|
https://aclanthology.org/2023.semeval-1.39
|
## introduction in natural language processing (nlp), intimacy can be described as how people communicate their perception and willingness to share personal data and emotions with their audience @xcite . the semeval 2023 task 9, entitled multilingual tweet intimacy analysis (mtia) @xcite , consists of a regression task in which the participants should rate, on a scale from 1 to 5, the intimacy of short documents written in 10 languages: english, spanish, italian, portuguese, french, chinese, hindi, arabic, dutch and korean. this task was co-organized by the university of michigan and snap inc. there are two main challenges concerning this task. on the one hand, the training dataset provided to the participants does not cover all the evaluated languages, but only six of them: english, spanish, italian, portuguese, french, and chinese. however, the evaluation is conducted in those six languages plus hindi, arabic, dutch and korean. on the other hand, participants were only allowed to submit a single run, which made the task more challenging. our strategy to solve the mtia challenge consists of an ensemble composed of three multilingual large language models (llm): multilingual bert @xcite , xlm @xcite , and mdeberta @xcite . besides, we use data augmentation, incorporating into the training the dataset suggested by the organizers and provided in the work of @xcite , with more than two thousand english questions from reddit and other sources, annotated with intimacy scores in the range [-1, 1]. our participation achieved modest results in the task, reaching the 30th position in the leaderboard, with a pearson's r of 0.53. the best result is achieved by lazybob, with a pearson's r of 0.62. as commented above, as the participants were only allowed to submit a single run, the analysis of our proposal is mainly based on a custom validation split. additional resources concerning our participation can be found at https://github.com/nlp-umuteam/semeval-2023-mtia . ## background the organisers of the task provided the participants with the novel mint dataset @xcite , whose original training split consists of 9491 tweets rated with an intimacy score. the tweets were compiled between 2018 and 2022. to obtain tweets in different languages, the authors combined language filters on twitter with language detection models such as fasttext @xcite . next, the authors created clusters of tweets for each language and several annotators rated the tweets on a scale from 1 (not intimate at all) to 5 (very intimate). as can be observed in the histogram plotted in figure 1 , most of the samples are rated with low scores. regarding the six languages involved during training, these are almost balanced, with 1596 documents written in portuguese and chinese, 1592 in spanish, 1588 in french, 1587 in english and 1532 in italian. an example from the dataset is the spanish text "necesito paz mental" (in english: "i need peace of mind"), rated with an intimacy score of 2.8. in figure 2 the rounded label distribution is shown. the majority of labels are between 2 and 3, with fewer instances of labels near 0 or 5. the participants of the task were encouraged to use the dataset provided in pei and jurgens (2020), which contains english sentences with an intimacy score between -1 and 1. ## system overview our pipeline for solving the mtia 2023 shared task is depicted in figure 3 . in a nutshell, it can be described as follows.
first, we clean and preprocess the mtia dataset and keep a small portion of the training split to create a custom validation split. second, we perform a data augmentation stage, applying google translate to the dataset of @xcite . third, we evaluate three multilingual llms and one model based on linguistic features. fourth, we build an ensemble learning model that averages the predictions of the three llms to send our final predictions to the organizers of the task. concerning the data cleaning stage, we strip hyperlinks, hashtags, mentions and white space characters. regarding the dataset splitter step, we reserve 20% of the tweets from the training split for custom validation purposes. next, we enlarge the training dataset by incorporating the dataset provided in @xcite . this dataset contains sentences written in english. we use google translate to translate these sentences to spanish, italian, portuguese, french, hindi, arabic, dutch and korean. this way, we could incorporate 21573 new sentences into the training. as this dataset is rated on a scale from -1 to 1, we translate the ratings to a scale from 1 to 5, maintaining the ratio. besides, it is worth noting that none of these new instances are used for custom validation. ## experimental setup during the evaluation phase, apart from the multilingual llms, we evaluate the usage of linguistic features from umutextstats @xcite . the linguistic features from umutextstats have been evaluated in several nlp tasks, such as author profiling @xcite , satire identification (garcía-díaz and valencia-garcía, 2022), and hate-speech detection @xcite . umutextstats is designed for the spanish language, but it has a subset of language-independent features. these features are stylometric features and features related to named entity recognition (ner) and part-of-speech (pos) tagging. to extract the pos and ner features, umutextstats relies on stanza @xcite . however, not all the languages involved in the mtia shared task have stanza models, so the linguistic features were not useful for some of the languages involved in this shared task. accordingly, we decided not to include the linguistic features (lf) in the final submission. however, we use the linguistic features to make an analysis of the spanish split of the dataset, and we observe a correlation of misspelled words with intimacy, followed by morphological features related to proper and common nouns and first-, second-, and third-person personal pronouns. we also identify a correlation with stylometric clues concerning the length of the tweets and with the usage of hyperboles, characteristic of figurative language. these results are depicted in figure 4 . next, the regression neural network architecture is described. for each llm we conduct a hyperparameter optimization stage consisting of training 10 models per llm, evaluating different parameters, including the learning rate, the number of training epochs, the warm-up steps and the weight decay. the results of the best model for each llm are reported in the corresponding table. finally, we conduct another hyperparameter optimization stage using keras. we follow this step to be consistent with the lfs and the llms. the results of this experiment are also reported in tabular form. for each feature set, we evaluate 55 models, changing the neural network architecture (its number of neurons and hidden layers), the dropout, the batch size and the activation function.
we can observe that the best models for the lf and for mbert are complex neural networks with 5 and 8 hidden layers respectively. the lf neural network has a brick shape (all layers have the same number of neurons) but mbert has a diamond shape (the inner layers have many more neurons). all models benefit from a strong dropout mechanism and most of them also benefit from large batch sizes. ## conclusion despite the fact that our results are limited, we are very pleased with our participation. first, because this is the first time we participated in a shared task concerning intimacy. second, because the mtia shared task was challenging, as we could only send one result and as there are four unseen languages during testing. our proposal based on ensemble learning over three multilingual llms reached position 30th in the official leaderboard out of a total of 45 participants. our best results are achieved with two unseen languages: korean (16th) and hindi (19th). after the evaluation of our results, we consider that there are several ways in which we could have improved our results. first, we should have conducted an in-depth analysis of the dataset. however, this was not easy for us because we are not fluent speakers of many of these languages, so we may miss important aspects related to the context. second, it is possible that the data augmentation process was not beneficial for the performance of our model, as the translations could be less accurate in some languages, or it is possible that cultural and background differences are not well represented in the dataset. however, we consider that we could have translated all sentences into a common language (spanish or english, for instance) and could have included topic-related features in our model. we will explore this path in future multilingual shared tasks. third, our models could be biased towards our custom validation split. in this sense, we will incorporate a nested cross-validation evaluation into our pipeline. fourth, our ablation analysis is limited, as we only consider the data augmentation step. however, we need to conduct more experiments in order to gain understanding of other modules such as the preprocessing module.
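as a side note on the augmentation stage described above, one plausible reading of "maintaining the ratio" is a linear rescaling from [-1, 1] to [1, 5]; the sketch below shows that mapping (our assumption, as the paper does not spell out the formula).

```python
def rescale_intimacy(score, old_min=-1.0, old_max=1.0,
                     new_min=1.0, new_max=5.0):
    """Linearly map a rating from [old_min, old_max] to [new_min, new_max]."""
    ratio = (score - old_min) / (old_max - old_min)
    return new_min + ratio * (new_max - new_min)

# endpoints and midpoint behave as expected:
assert rescale_intimacy(-1.0) == 1.0
assert rescale_intimacy(0.0) == 3.0
assert rescale_intimacy(1.0) == 5.0
```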
| 26,317
|
28
| 2,024
|
SeaLLMs - Large Language Models for Southeast Asia
|
Despite the remarkable achievements of large language models (LLMs) in various tasks, there remains a linguistic bias that favors high-resource languages, such as English, often at the expense of low-resource and regional languages. To address this imbalance, we introduce SeaLLMs, an innovative series of language models that specifically focuses on Southeast Asian (SEA) languages. SeaLLMs are built upon popular English-centric models through continued pre-training with an extended vocabulary, specialized instruction and alignment tuning to better capture the intricacies of regional languages. This allows them to respect and reflect local cultural norms, customs, stylistic preferences, and legal considerations. Our comprehensive evaluation demonstrates that SeaLLM models exhibit superior performance across a wide spectrum of linguistic tasks and assistant-style instruction-following capabilities relative to comparable open-source models. Moreover, they outperform ChatGPT-3.5 in non-Latin languages, such as Thai, Khmer, Lao, and Burmese, by large margins while remaining lightweight and cost-effective to operate.
|
https://aclanthology.org/2024.acl-demos.28
|
## introduction the advent of large language models (llms) has radically transformed the field of natural language processing, demonstrating remarkable abilities in text generation, comprehension, and decision-making tasks @xcite @xcite @xcite @xcite . while the proficiencies of these models are extraordinary, the majority of existing llms embody a linguistic hierarchy overwhelmingly dominated by english @xcite @xcite . this dominance undermines the multilingual capability of such models, with particularly prejudicial outcomes for lower-resource and regional languages, where data scarcity and tokenization challenges lead to disproportionately poor model performance. this linguistic disparity not only impedes access to state-of-the-art ai technologies for non-english-speaking populations but also risks cultural homogenization and the loss of linguistic diversity. while hyperpolyglot models exist @xcite @xcite , they may pay a high cost for high-resource language performance while lacking multilingual instruction-following abilities. recognizing the urgent need to democratize ai and empower linguistically diverse regions, we introduce seallms, a suite of specialized language models optimized for southeast asian languages. these languages, while rich and diverse, often lack the extensive dataset support available for more widely spoken languages, resulting in a stark performance gap in existing llm applications. as a long-term continuous effort, as of this writing, seallms come in three versions @xcite . seallm-13b-v1, which was pre-trained from llama-2-13b, eclipses the performance of most available open-source llms in a comprehensive array of tasks including world knowledge assessments, language comprehension, and generative capabilities in sea languages. for english and similar languages, seallms not only preserve, but also demonstrate enhanced performance in tasks that were part of the original llama training set. when evaluated on multilingual instruction-following tasks with gpt-4 as a judge @xcite , seallm-13b-v1 outperforms chatgpt-3.5 by large margins in less-represented languages such as khmer, lao or burmese. meanwhile, seallm-7b-v2, which was pre-trained from mistral-7b @xcite , demonstrates better performance in math and commonsense reasoning than comparable baselines, surpassing chatgpt-3.5 in reasoning for common sea languages, while being much smaller in size. later, seallm-7b-v2.5, which was further pre-trained from gemma-7b @xcite , shows significant improvements in sea languages over seallm-7b-v2. figure 2 illustrates the four-stage training process of seallms. in the first stage, detailed in section 2.3, we conduct continuous pre-training from the foundational models @xcite with an extended vocabulary tailored for sea languages. next, we fine-tune the model in a novel hybrid paradigm with a mixture of multilingual pre-training data and english-dominant instruction fine-tuning data (section 3.2). the following stage subsequently fine-tunes the model on a balanced and custom-built multilingual sft dataset. finally, we conduct self-preferencing alignment optimization using the seallm model itself, without relying on human annotators or more powerful llms (openai, 2023b). ## conclusion in conclusion, our research presents a substantial advance in the development of equitable and culturally aware ai with the creation of seallms, a specialized suite of language models attuned to the linguistic and cultural landscapes of southeast asia.
through rigorous pre-training enhancements and culturally tailored fine-tuning processes, seallms have demonstrated exceptional proficiency in language understanding and generation tasks, challenging the performance of dominant players such as chatgpt-3.5, particularly in sea languages. the models' attunement to local norms and legal stipulations, validated by human evaluations, establishes seallms as not only a technical breakthrough but a socially responsive innovation, poised to democratize access to high-quality ai language tools across linguistically diverse regions. this work lays a foundation for further research into language models that respect and uphold the rich tapestry of human languages and cultures, ultimately driving the ai community towards a more inclusive future.
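as a generic illustration of the vocabulary-extension step in stage one, the sketch below extends a tokenizer and resizes the embedding matrix with hugging face transformers; the base checkpoint and token list are placeholders, and the authors' actual procedure may differ.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

base = "meta-llama/Llama-2-13b-hf"          # placeholder base checkpoint
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# hypothetical SEA-language subwords, e.g. mined by training a sentencepiece
# model on Thai/Khmer/Lao/Burmese text and keeping tokens unknown to the base
new_tokens = ["ประเทศไทย", "ភាសាខ្មែរ", "ພາສາລາວ"]
num_added = tok.add_tokens(new_tokens)

# grow the input/output embeddings so the new ids have rows to train
model.resize_token_embeddings(len(tok))
print(f"added {num_added} tokens; vocab is now {len(tok)}")
```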
| 28,143
|
7
| 2,022
|
USST’s System for AutoSimTrans 2022
|
This paper describes our submitted text-to-text simultaneous translation (ST) system, which won second place in the Chinese→English streaming translation task of AutoSimTrans 2022. Our baseline system is a BPE-based Transformer model trained with the PaddlePaddle framework. In our experiments, we employ data synthesis and ensemble approaches to enhance the base model. In order to bridge the gap between the general domain and the spoken domain, we select in-domain data from the general corpus and mix it with the spoken corpus for mixed fine-tuning. Finally, we adopt a fixed wait-k policy to transfer our full-sentence translation model to a simultaneous translation model. Experiments on the development data show that our system outperforms the baseline system.
|
https://aclanthology.org/2022.autosimtrans-1.7
|
## introduction simultaneous translation @xcite consists of generating a translation before the source speaker finishes speaking. it is widely used in many real-time scenarios such as international conferences, business negotiations and legal proceedings. the challenge of simultaneous machine translation is to find a read-write policy that balances translation quality and latency. the translation quality will decline if the machine translation system reads insufficient source information. when reading wider source context, latency will increase. recent read-write policies can be divided into two categories: fixed policies such as wait-k @xcite and wait-if* @xcite , and adaptive policies such as mocha @xcite , milk @xcite and mu @xcite . fixed policies are simple to implement, but they neglect contextual information, which might result in quality reduction. adaptive policies are more flexible; they can learn from data to achieve better quality/latency trade-offs, but are accordingly difficult to train. in our system, we train a transformer @xcite with a deep encoder @xcite as the baseline for obtaining rich source representations; besides, we initialize the model with the method mentioned in deepnet @xcite in order to stabilize the training of the deeper model. at the pre-training stage, we first pretrain our model on a large general corpus, then we utilize data synthesis methods such as self-training and back-translation to improve model quality. during the fine-tuning phase, we first apply fine-tuning on a small spoken corpus. for better domain adaptation, we adopt mixed fine-tuning @xcite , which trains on a mixed dataset that includes a subsampled general corpus and an upsampled spoken corpus. thirdly, we propose a method called "in-domain mixed fine-tuning", which further improves the bleu score over mixed fine-tuning. specifically, inspired by in-domain data filtering @xcite , we mix the upsampled spoken data with in-domain data selected from the general corpus rather than randomly subsampled data. in the final stage, we employ the wait-k policy to convert the full-sentence translation model into a prefix-to-prefix architecture that predicts target words with only the source sentence's prefixes. after waiting for k-1 source subwords, the system reads a source subword and then predicts a target subword alternately @xmath0. an example of wait-1 is shown in figure 1 . the contributions of this paper are as follows: ## data we participate in the chinese-english streaming transcription track, where each sentence is broken into lines whose length is incremented by one word until the sentence is completed. similar to @xcite , we preprocess the data accordingly. ## experiments our system is implemented with the paddlepaddle framework, and our experiments are carried out on ai studio with 4 nvidia v100 gpus, each of which has 32 gb of memory. (we also benchmarked our code against fairseq; see appendix a) ## conclusion in this paper we describe our chinese-to-english simultaneous translation system, which uses a deep transformer to improve translation quality and adopts the wait-k policy @xcite to reduce latency. besides, for better domain adaptation, we combined mixed fine-tuning @xcite with in-domain data filtering @xcite and proposed a new domain adaptation method called "in-domain mixed fine-tuning", which is empirically more effective than fine-tuning and mixed fine-tuning.
in our future work, we plan to validate the effectiveness of our proposed in-domain mixed fine-tuning on more datasets, while investigating some novel domain adaptation methods. we also plan to research dynamic read-write policies in order to better balance quality and latency for simultaneous translation tasks.
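a minimal sketch of the fixed wait-k read/write schedule adopted above, in our own illustrative formulation; the exact per-subword scheduling in the submitted system may differ.

```python
def wait_k_schedule(k, src_len, tgt_len):
    """Fixed wait-k policy: after reading the first k source tokens,
    alternate WRITE/READ until the source is exhausted, then write the
    remaining target tokens."""
    actions = ["READ"] * min(k, src_len)
    read, written = len(actions), 0
    while written < tgt_len:
        actions.append("WRITE")
        written += 1
        if read < src_len and written < tgt_len:
            actions.append("READ")
            read += 1
    return actions

print(wait_k_schedule(k=1, src_len=3, tgt_len=3))
# -> ['READ', 'WRITE', 'READ', 'WRITE', 'READ', 'WRITE']  (wait-1)
```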
| 13,653
|
37
| 2,024
|
Listen Again and Choose the Right Answer: A New Paradigm for Automatic Speech Recognition with Large Language Models
|
Recent advances in large language models (LLMs) have promoted generative error correction (GER) for automatic speech recognition (ASR), which aims to predict the ground-truth transcription from the decoded N-best hypotheses. Thanks to the strong language generation ability of LLMs and the rich information in the N-best list, GER shows great effectiveness in enhancing ASR results. However, it still suffers from two limitations: 1) LLMs are unaware of the source speech during GER, which may lead to results that are grammatically correct but violate the source speech content; 2) N-best hypotheses usually only vary in a few tokens, making it redundant to send all of them for GER, which could confuse the LLM about which tokens to focus on and thus lead to increased miscorrection. In this paper, we propose ClozeGER, a new paradigm for ASR generative error correction. First, we introduce a multimodal LLM (i.e., SpeechGPT) to receive source speech as extra input to improve the fidelity of the correction output. Then, we reformat GER as a cloze test with logits calibration to remove the input information redundancy and simplify GER with clear instructions. Experiments show that ClozeGER achieves a new breakthrough over vanilla GER on 9 popular ASR datasets.
|
https://aclanthology.org/2024.findings-acl.37
|
## introduction recent advances in large language models (llms) have attracted a surge of research interest thanks to their remarkable language generation and reasoning abilities @xcite @xcite , which achieve a wide range of success on natural language processing (nlp) tasks @xcite @xcite . powered by llms, latest work @xcite proposes a generative error correction @xcite (ger) benchmark for automatic speech recognition (asr), and they release a hyporadise dataset that contains over 332k pairs of decoded n-best hypotheses and ground-truth transcriptions in various asr domains. (caption of figure 1 @xcite : left: violating the source speech; the llm removes the word "think" in the first two hypotheses as it rarely appears at the beginning of a sentence followed by a subject according to grammar, but this actually happens in the source speech. right: information redundancy in the n-best hypotheses input; there is only one difference between the n-best candidates, making it redundant to send all of them for ger, which confuses the llm about which tokens to focus on for correction.) ger has shown great effectiveness in learning the mapping from hypotheses to transcription by parameter-efficient llm finetuning @xcite , which significantly enhances the asr result and outperforms typical lm rescoring methods @xcite . however, the ger paradigm is also observed to suffer from two limitations. first, llms are unaware of the source speech during the ger process, which could lead to results that do not match the source speech content. for example, as shown in fig. 1 (left), the source speech reads the word "think" at the beginning, followed by "he", which is correctly recognized by the 1-best hypothesis. then during the ger process, the llm removes the word "think", as this structure of verb plus noun at the beginning of a sentence is not rigorous according to grammar. however, this is not expected as it violates the source speech content. second, we observe that n-best hypotheses usually only vary in a few tokens. for example, as shown in fig. 1 (right), all the tokens in the candidates are the same except "enjoys"/"enjoy"/"joins". in this case, it would be information-redundant to leverage all of the hypotheses for predicting the ground-truth transcription, which could confuse the llms about which tokens to focus on for correction and thus lead to sub-optimal ger performance. motivated by the above observations, we propose clozeger, a new paradigm for asr generative error correction. first, we introduce a popular multimodal llm, speechgpt @xcite , to receive source speech as an extra input to the ger paradigm. with the powerful cross-modal ability of speechgpt, we can now constrain ger to comply with the source speech while correcting the errors in decoded hypotheses. then, in order to remove the input information redundancy, we reformat it as a cloze test (i.e., a special multiple-choice question) with logits calibration @xcite , where the identical parts across n-best hypotheses are set as the context and the varying parts are set as blanks (each with several options provided). with such clear instructions for error correction, it would be easier for llms to perform context reasoning and choose the right answer for each blank rather than predicting the entire sentence from redundant n-best inputs. finally, we add a simple post-processing stage to correct the errors in the cloze context (i.e., identical parts across the n-best list) to further improve the correction result. our contributions are summarized as follows: ## related work large language models.
there has recently been a surge of research interest in transformer-based llms, such as chatgpt (openai, 2022), gpt-4 (openai, 2023) and llama @xcite . benefiting from their huge model size and abundant training data, llms can well understand the linguistic structures and semantic meanings behind textual data, which leads to remarkable performance on a wide range of nlp tasks @xcite @xcite . more recently, researchers have started to explore the potential of llms on multimodal tasks by incorporating other modalities into llms @xcite @xcite @xcite . among them, speechgpt @xcite is one of the most popular multimodal llms that represent speech and text using a unified tokenizer, which enables us to add source speech into the original n-best hypotheses input of the ger paradigm. ## methodology in this section, we present our proposed clozeger paradigm in detail. we first introduce the preliminary knowledge of ger in §3.1, and then we describe how we introduce source speech into the ger paradigm with a multimodal llm ( §3.2). finally, we present the new task format of clozeger in §3.3. ## conclusion in this paper, we propose clozeger, a new paradigm for asr generative error correction. first, we introduce a multimodal llm (i.e., speechgpt) to receive source speech as extra input to improve the fidelity of the correction output. then, we reformat ger as a cloze test with logits calibration to remove the input information redundancy and simplify ger with clear instructions. experimental evidence shows that clozeger achieves a new breakthrough over vanilla ger on 9 popular asr datasets. further analysis verifies the effectiveness of different modules in our framework.
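to illustrate the cloze reformatting, the toy sketch below turns an n-best list into shared context plus blanks with options; it assumes the hypotheses tokenize to equal length (real n-best lists would need alignment first), and the example sentence is invented around the "enjoys"/"enjoy"/"joins" variation mentioned above.

```python
def build_cloze(hypotheses):
    """Turn N-best hypotheses into a cloze test: token positions shared by
    all hypotheses become context; divergent positions become blanks, each
    paired with its list of candidate fillers."""
    toks = [h.split() for h in hypotheses]
    assert len({len(t) for t in toks}) == 1, "align hypotheses before use"
    template, options = [], []
    for column in zip(*toks):
        cands = list(dict.fromkeys(column))    # unique, order-preserving
        if len(cands) == 1:
            template.append(cands[0])          # identical -> context
        else:
            options.append(cands)              # divergent -> blank options
            template.append(f"[BLANK{len(options)}]")
    return " ".join(template), options

text, opts = build_cloze([
    "he enjoys playing football",
    "he enjoy playing football",
    "he joins playing football",
])
# text -> "he [BLANK1] playing football"
# opts -> [['enjoys', 'enjoy', 'joins']]
```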
| 31,370
|
46
| 2,020
|
Understanding Linguistic Accommodation in Code-Switched Human-Machine Dialogues
|
Code-switching is a ubiquitous phenomenon in multilingual communities. Natural language technologies that wish to communicate like humans must therefore adaptively incorporate code-switching techniques when they are deployed in multilingual settings. To this end, we propose a Hindi-English human-machine dialogue system that elicits code-switching conversations in a controlled setting. It uses different code-switching agent strategies to understand how users respond and accommodate to the agent’s language choice. Through this system, we collect and release a new dataset CommonDost, comprising 439 human-machine multilingual conversations. We adapt pre-defined metrics to discover linguistic accommodation from users to agents. Finally, we compare these dialogues with Spanish-English dialogues collected in a similar setting, and analyze the impact of linguistic and socio-cultural factors on code-switching patterns across the two language pairs.
|
https://aclanthology.org/2020.conll-1.46
|
## introduction when interlocutors share more than one language, they nearly inevitably engage in code-switching (cs): shifting from one language to another @xcite @xcite . since most people in the world today are multilingual @xcite , cs is a ubiquitous phenomenon in multilingual communities. it goes beyond simple lexical borrowing to blending of languages at syntactic, grammatical and morphological levels @xcite . code-switching has been studied in linguistics and sociolinguistics for decades @xcite @xcite . (figure 1 shows an example conversation, with turn labels ins, alt, alt, alt: "kya tumhare paas koi dost hai who like to eat mangoes?" [do you have any friend who likes to eat mangoes?]; "nahi. mere kisi friend ko aam pasand nahi" [no. none of my friends likes mangoes]; "mere paas bhi 3 dost hai who like to eat apple" [i also have 3 friends who like to eat apples]. caption: we present a bilingual dialogue system for human-machine conversations in hindi-english (red: hindi, blue: english). we discover that humans positively adopt the agent's code-switching style (alt and ins) and the language choice for keywords (highlighted in bold).) advances in dialogue research @xcite @xcite have enabled conversational ai technologies for human-machine interactions, like alexa and siri. although these technologies are pervasive, they still have limited abilities to accommodate to the user, and they do not account for the ubiquity of multilingual communication. due to the lack of code-switching abilities in existing language technologies, there has been limited work in studying linguistic accommodation in written cs dialogues. with the ultimate goal to enable adaptive code-switching dialogue agents, in this paper we study user accommodation, i.e., entrainment @xcite , in cs human-machine dialogues. our exploratory analysis of user accommodation will facilitate better development of dialogue agents which can eventually accommodate to users in return. to this end, we adopt the collaborative dialogue framework of @xcite , which converses with spanish-english (spanglish) bilinguals. to facilitate a more general analysis, we extend this framework to hindi-english (hinglish), a language pair which is typologically distinct from spanglish and is spoken by millions of people. we begin by providing background on code-switching ( §2) and linguistic accommodation ( §3). we then introduce our generalized bilingual dialogue system ( §4). in §5, we describe our experimental setup for hinglish data collection and discuss the data statistics. we later provide our exploratory analysis of language accommodation and other socio-linguistic factors affecting the cs patterns in the user utterances ( §6). a case study comparing code-switching distributions across hinglish and spanglish is presented in §7. finally, we discuss directions for future work in §8. this paper's contributions include: (1) the development of a bilingual collaborative dialogue system easily generalizable to a new cs language pair, (2) a new dataset, commondost, comprising 439 hindi-english human-machine conversations, (3) adaptation of accommodation metrics and a corresponding analysis of accommodation of language style and choice in cs dialogues, and (4) an exploratory study of linguistic and socio-cultural factors on users' cs patterns across spanglish and hinglish. ## code-switching strategies given that cs is used in very nuanced ways, researchers have been studying how people code-switch, examining the switch-points of languages syntactically @xcite , prosodically @xcite , lexically (kootstra, 2012), pragmatically @xcite , and so forth. many works have attempted to model code-switching text and speech from a statistical perspective @xcite .
recent works and benchmarks such as linguistic code-switching evaluation (lince) @xcite and gluecos @xcite have provided a unified platform to evaluate cs data for various nlp tasks across various language pairs. our work is in line with these recent efforts to provide nlp capabilities to users with diverse linguistic backgrounds. we extend the human-machine cs dialogue system by @xcite to a new language pair of hindi-english. in order to better understand the style and usage of languages in a code-switched utterance, we cluster and characterize these utterances by a set of predefined cs strategies. previous works have mainly identified two commonly used code-switching (cs) strategies: insertional and alternational, and these strategy distinctions are important in implementations of cs technology @xcite . the insertional cs strategy involves one language being the matrix language (matl) with the other serving as the embedded language (embl). words/phrases from the embl are inserted in the sentence while maintaining the grammar and structure of the matl @xcite . on the other hand, the alternational cs strategy involves alternating between separate independent clauses of the languages, switching from one matl to another. in our work, we focus on the hindi-english language pair and experiment with 4 cs strategies. cs is also observed more often in informal and casual settings than formal ones @xcite . we test this hypothesis by inducing informality in the agent's strategies. although recent works @xcite have introduced neural methods to induce informality, we deploy a simple way to moderate formality by adding discourse markers (e.g., "so", "well") at the beginning and ending of sentences. these markers are independent of context and syntax @xcite , and are often associated with informality @xcite . thus, we define four more agent strategies by infusing informality (+ informality) into each of the previously described 4 cs strategies. ## measuring accommodation in dialogue communication accommodation theory posits that people adjust their behaviors or speech styles to their conversational partners' @xcite . linguistic accommodation has been proven to reduce interpersonal distance @xcite and is correlated with dialogue success and engagement @xcite . although well-studied in monolingual dialogues @xcite , it is relatively new in the cs setting. @xcite found the rate of code-switching to be accommodated in human-human spanish-english dialogues. the choice of language when code-switching can also be adapted in dialogues @xcite . @xcite further discover that the part-of-speech of a cs utterance may impact the following language choice. our work adds to this field by studying accommodation of language choice for lexical classes. in terms of quantifying accommodation, we adapt a metric from @xcite to measure accommodation (we refer to it as global accommodation). global accommodation extends the score proposed in @xcite by aggregating a speaker's word usage across an entire dialogue and biasing it relative to other non-partners in the corpus. for two partners a and b, we denote the corresponding usage score by e_a,b (@xmath0). denoting the set of non-partners for the speaker a by n_a, we define a ratio (@xmath1) for all non-partners. the global score for the speaker a is the average of this ratio over all the non-partners. the final global score for the dataset is the average of the scores over all the speakers in the dataset.
in the context of human-machine conversations, we choose the set of non-partners for an agent to be the set of humans that did not interact with this agent. since this metric is defined primarily for lexical accommodation, we redefine different styles as a lexical class to adapt it for measuring stylistic accommodation. @xcite presented another interesting metric which measures accommodation locally across turns within a single dialogue. ## bilingual dialogue system our bilingual human-machine dialogue system mainly serves two important purposes: (1) collection of cs data and (2) experimentation with new agent strategies. previous work @xcite developed a rule-based cs dialogue system restricted to a fixed set of prompts. @xcite proposed a more flexible bilingual system for english-spanish as an extension of a monolingual goal-oriented collaborative dialogue framework @xcite , originally designed for the mutualfriends task. this task provides the two conversational partners a and b individually with a knowledge base (kb) of friends, out of which there is exactly one friend common to both kbs. each friend in the kb has several attributes such as hobby, location of work, etc. the goal of the task is to collaboratively find this mutual friend through text conversations between the two partners, which can be human or machine. the modifications made by @xcite for extending this monolingual system to support bilingual spanish-english dialogues were mainly in three components: (1) bilingual readability: making the instructions and kb available to the users in spanish as well as english, (2) bilingual response generation: procuring parallel spanish sentences using a machine translation (mt) system and applying rule-based transformations for generating code-switched spanglish, (3) bilingual response understanding: translating code-switched spanish-english to monolingual english (using an mt system) and passing it to the pre-existing response understanding system for english. ahn et al. (2020)'s modified spanish-english dialogue system cannot be directly applied across other language pairs for three key reasons: (1) the dialogue system relies on a robust cs mt system, which is more readily available for resource-rich languages like spanish and english. such systems might not be accessible for languages like tagalog and swahili. (2) the linguistic rule-based adaptations for generation are simple in the case of spanish-english as they are typologically closer. on the contrary, linguistically diverse pairs like telugu-english might need further adaptations due to differences in word order and morphology. (3) spanish and english are written using the same script. many other language pairs within which cs is pervasive, like hindi-english, are written in different scripts, and are typically romanized in the cs setting. the lack of normalization and robust transliteration models poses challenges to multiple system components for such pairs. in our work, we build a more generalized dialogue system to tackle the challenges stated above. one highlight of this modified system is its simplicity, which helps in adapting to new language pairs easily. we briefly discuss these challenges and our enhancements to various components for our hindi-english dialogue system below. language bias in kb: due to social and cultural priors, certain domains and topics in the kb might not be equally represented in both languages.
in order to avoid biasing the language usage in the dialogue and promote code-switching, it is necessary to carefully choose equilingual domains. in the case of hinglish, we replace the domain of college majors, which is highly anglicized with respect to hindi, with favourite fruit, which is more equally represented in both languages. handling gender-markings third person pronouns and verb forms in hindi are usually gender-marked (e.g., karta/karti [he/she does], uska/uski [his/her]). since the spanglish kb does not provide any information about the gender of friends, we consequently notice the dialogues using this system to be gender-skewed. in the common-amigos spanglish data @xcite , the ratio of masculine to feminine word usage was 3.9, whereas for hinglish (tested on a set of 65 pilot dialogues), this gender-ratio is 27.7. we mitigate this by simply adding a new "gender" attribute to the kb and correspondingly notice a drastic drop of the gender-ratio to 3.4 for hinglish. dialogue generation the spanglish dialogue system utilizes an mt system (the google translate api in the original implementation) to generate parallel spanish-english sentences and leverages rule-based transformations (specific to spanish) to generate code-switched text. furthermore, the rule-based transformations need appropriate modifications to accommodate the new language pair. for hinglish, we synthesize additional transformations to handle differences in word order and verb conjugations. natural language understanding (nlu) the spanglish dialogue system relies on a robust mt system for converting cs user utterances to english and then exploits an english nlu component for entity extraction. procuring such mt systems for other language pairs is not feasible. this issue is amplified for languages written in non-native script (hinglish) due to lack of normalization in user sentences. we overcome this challenge by building a simple dictionary-based nlu component which can directly understand and extract entities from cs hinglish text. although it cannot handle complex inputs, this simple model still outperforms the translation-based nlu pipeline. ## data we use the modified bilingual dialogue framework ( §4) to collect romanized hindi-english cs data for human-machine dialogues. here, we first describe this data collection process and later discuss statistics for the collected data. ## analysis of hinglish conversations we study the impact of each of the agent strategies (4 cs strategies and their informal counterparts) on the user dialogues using various dialogue- and language-oriented dimensions. ## comparison of spanglish and hinglish to gain better insights into how linguistic and sociocultural factors influence code-switching patterns, we compare the distributions of the users' usage of various cs strategies in hinglish and spanglish in figure 3 . we observe that en ins→hi and en ins→sp are the most dominant cs strategies in hinglish and spanglish respectively. on the other hand, we notice a large difference in the usage of alternational cs strategies between the language pairs: for spanglish, it accounts for roughly 40%, while it is merely 10% for hinglish. according to the equivalence constraint, cs points tend to occur only if a syntactic rule is not violated in either of the two languages being mixed @xcite . given this requirement, a pair of languages that have differing word order could have more constraints on where switches can occur.
we hypothesize that alternational cs may not work within a verb clause in hindi, as it is a verb-final (sov) language while english is verb-medial (svo). spanish is verb-medial like english, and their word order similarity may facilitate the use of alternational cs. beyond structural differences, sociolinguistic factors may affect the cs strategies of speakers. backus (1998) describes a gradient of strategy usage across generations of immigrants. earlier generations of immigrants would progress from simple to complex insertions, and later generations would alternate the two languages, eventually using reverse insertion. while the spanglish dataset includes later generations of immigrants to the us, 90% of hinglish speakers are 1st generation. this would explain hinglish speakers' affinity towards insertion into the hindi matrix language. additionally, the status of english in the us (for spanglish) and english in india (for hinglish) is different. as found in §6.4, the status of english can vary within regions of india itself, and this can lead to varying uses of cs strategy. attitudes towards language use have been shown to affect code choice in bilingual speakers @xcite . it is likely that attitudes towards cs are not the same in the spanglish and hinglish populations, which can provide further variability in the speakers' language choice. ## conclusion and future work in our work, we proposed a generalized bilingual dialogue system and procured human-machine dialogue data (commondost) for the language pair of hindi-english using this system. adaptation of this dialogue system for newer cs languages could promote collection of more bilingual dialogue data. analysis of the commondost conversations revealed how users positively adopt and accommodate the agent's style of using language in a cs utterance. we also studied how informality and cultural factors independently affect the users' cs patterns. this suggests that our findings are extendable across the two cs pairs of hinglish and spanglish @xcite . similar analysis can be done for new language pairs (such as arabic-english) and datasets from different domains. another area of potential research would be to compare our findings on cs patterns and accommodation with human-human cs conversations. finally, we discussed how linguistic and sociopolitical factors affect the distribution of users' cs patterns across the language pairs of hinglish and spanglish. despite their dissimilarities, the similarities across these language pairs are encouraging, as they open avenues to learn how code-switching functions cross-linguistically. we pave the path for future research on comparisons of multiple cs language pairs.
| 3,580
|
18
| 2,023
|
Extracting Sign Language Articulation from Videos with MediaPipe
|
This paper concerns evaluating methods for extracting phonological information of Swedish Sign Language signs from video data with MediaPipe’s pose estimation. The methods involve estimating i) the articulation phase, ii) hand dominance (left vs. right), iii) the number of hands articulating (one- vs. two-handed signs) and iv) the sign’s place of articulation. The results show that MediaPipe’s tracking of the hands’ location and movement in videos can be used to estimate the articulation phase of signs. Whereas the inclusion of transport movements improves the accuracy for the estimation of hand dominance and number of hands, removing transport movements is crucial for estimating a sign’s place of articulation.
|
https://aclanthology.org/2023.nodalida-1.18
|
## introduction sign languages -or, signed languages -are languages produced with gestures articulated in space and perceived visually or tactilely. over 200 sign languages have been documented around the globe @xcite but they are minoritized and under-researched. one challenge for quantitative research on sign languages is that they generally lack a conventionalized representation in a machine-readable form, such as phonetic transcription or orthography (see e.g., @xcite @xcite) . following technological advances in computer vision, methods have emerged that allow a degree of form-based analysis of body movements, such as gesturing and signing, through human body pose estimation tracking of either real-time or pre-recorded video data @xcite . whereas most body pose tracking utilized in sign/gesture research used to involve either wearable devices (e.g., motion capture sensors) @xcite or 3d cameras (e.g., kinect) @xcite , thus requiring designated hardware, there are now pre-trained models that do human body pose estimation either in real time through a regular video camera or on pre-recorded video data, providing a cost-efficient alternative that has proven to be reliable in estimating human gesturing @xcite . a popular tool for such analysis is openpose @xcite , which has been successfully applied in research on both sign language and gesture @xcite @xcite @xcite . a tool that has become available more recently is google's mediapipe @xcite , which similarly performs human body pose estimation of video data and outputs coordinates of landmarks (joints and anchor points such as eyes, nose and eyebrows). ## conclusions in this paper, i have shown initial explorations of methods to extract basic information about articulation and sign form from sign language video data using mediapipe. the first step of estimating an approximate articulation phase of the sign proved to be possible for most sign videos in the data set, which then made it possible to accurately estimate the place of articulation across signs. for the purpose of estimating hand positions corresponding to a phonological place of articulation, estimating the articulation phase is crucial, since the signal is otherwise disrupted by noise from rest positions and transport movements. being able to automatically segment the articulation phase of signs would have other obvious applications, when extracting phonological information about the actual sign (articulation) rather than contextual noise (transport and rest). however, when estimating hand dominance and the number of hands articulating, the full method, which included data from all frames in the sign video, consistently outperformed the short method, for which the data only included frames within the estimated articulation phase. it seems as though the crude method of comparing the relative distance traveled between the two hands benefits from more data than the short articulation phase provides, and that the transport movements to and from the articulation phase are in fact quite useful for magnifying the differences in distance traveled between the two hands. the method works quite well with the dictionary data used here, with each video containing a single (non-compound) sign. if applied to complex/compound signs or stretches of multiple signs in succession, as in conversational data, transport movements may not be as distinct and more elaborate methods to estimate articulation phases would be necessary.
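as a rough illustration of the articulation-phase estimation discussed above, the sketch below segments a single-sign clip using per-frame wrist coordinates (e.g., mediapipe hand landmark 0). the speed threshold and the assumption that the first and last fast stretches are the transport movements are ours, not the paper's exact procedure:

```python
import numpy as np

def articulation_phase(wrist_xy, speed_quantile=0.6):
    """estimate the articulation phase of a single-sign video from per-frame
    2d wrist coordinates. assumes the fast stretches at the start and end of
    the clip are the transport movements and returns the frame span between
    them. an illustrative heuristic, not the paper's exact method."""
    wrist_xy = np.asarray(wrist_xy, dtype=float)          # (n_frames, 2)
    speed = np.linalg.norm(np.diff(wrist_xy, axis=0), axis=1)
    fast = np.flatnonzero(speed > np.quantile(speed, speed_quantile))
    if fast.size == 0:
        return 0, len(wrist_xy) - 1                       # no movement detected
    # split fast frames into contiguous runs; first run = transport in,
    # last run = transport out
    runs = np.split(fast, np.flatnonzero(np.diff(fast) > 1) + 1)
    start = runs[0][-1] + 1                               # after transport in
    end = runs[-1][0] if len(runs) > 1 else len(wrist_xy) - 1
    return int(start), int(max(end, start))
```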
the results of this preliminary and exploratory study have demonstrated some possibilities in extracting sign language articulation from videos with mediapipe, which can be used as a fast and cost-efficient way to analyze pre-recorded but unannotated sign language data in substantially larger quantities than would be feasible with manual annotation.
| 26,007
|
7
| 2,022
|
Part-of-Speech and Morphological Tagging of Algerian Judeo-Arabic
|
Most linguistic studies of Judeo-Arabic, the ensemble of dialects spoken and written by Jews in Arab lands, are qualitative in nature and rely on laborious manual annotation work, and are therefore limited in scale. In this work, we develop automatic methods for morpho-syntactic tagging of Algerian Judeo-Arabic texts published by Algerian Jews in the 19th–20th centuries, based on a linguistically tagged corpus. First, we describe our semi-automatic approach for preprocessing these texts. Then, we experiment with both an off-the-shelf morphological tagger and several specially designed neural network taggers. Finally, we perform a real-world evaluation of new texts that were never tagged before in comparison with human expert annotators. Our experimental results demonstrate that these methods can dramatically speed up and improve the linguistic research pipeline, enabling linguists to study these dialects on a much greater scale.
|
https://aclanthology.org/2022.nejlt-1.7
|
## introduction application of natural language processing (nlp) to real-world problems has been the field's goal from its early days. as algorithms advance, the contribution of nlp to real problems has become more evident and more substantial. the present study originates from a real-world challenge faced by linguists of semitic languages, in this case researchers of the judeo-arabic dialects of algeria (aja). their challenge, simply put, is how to scale up linguistic analyses of such dialects. semitic languages in general, and arabic in particular, are characterized by a very rich morphology that uses both templatic and concatenative morphemes, combined with the use of a vowelless script ("abjad"). this makes morphological analysis of arabic very time-consuming even for expert linguists. because speakers of the aja dialects are becoming scarce, the attention of linguists in this field has shifted from fieldwork interviews with native speakers to library-based analysis of texts written in those dialects. fortunately, vast collections of aja texts were preserved in printed books, journals and handwritten manuscripts. analyzing this linguistic treasure-trove, however, is proving to be challenging due to its size. the time-consuming manual annotation does not scale, and requires expertise that is hard to find. we aim to scale up the linguistic analysis of this arabic dialect using nlp tools. in particular, our goal is to develop an nlp tool that will assist aja linguists in their real-world task, in a way that they will find useful. basing our work on the existing linguistically tagged algerian judeo-arabic (taja) corpus (tirosh-becker and becker, 2022), we set out to develop automatic methods for morpho-syntactic tagging of such texts. several specially designed neural network taggers and an off-the-shelf morphological tagger were experimented with, and assessed for their accuracy and likely usefulness. we also considered a hybrid human-in-the-loop approach. finally, we carried out a real-world evaluation of our best performing part-of-speech (pos) taggers, applying them to untagged texts and assessing their quality via a user study with expert aja linguists. our experimental results demonstrate that these methods can dramatically speed up and improve the linguistic research pipeline, enabling linguists to study this language on a much greater scale. judeo-arabic (ja) lies at the intersection of semitic languages and jewish languages. as a semitic language, and more specifically, an arabic language variety, its words are generally composed of 3-letter roots, with added vowels and consonants according to pattern paradigms, as well as affixes and clitics @xcite . arabic is the most widely spoken semitic language, with 300 million native speakers @xcite . in fact, the term 'arabic' refers both to modern standard arabic (msa) and to the arabic dialects spoken throughout the arab world. the two varieties of arabic coexist in a state of diglossia @xcite or continuglossia @xcite , meaning the language varieties exist side by side, with writers or speakers shifting between varieties according to circumstance. msa is written using the arabic script, which is a right-to-left alphabet. arabic dialects are usually written in arabic script as well, but there is no standardized spelling for dialectal arabic @xcite . arabic uses both templatic and concatenative morphemes. there are two types of templatic morphemes: roots and templates.
roots are usually three consonantal radicals that signify some abstract meaning. roots are inserted into abstract patterns called templates. there are two kinds of concatenative morphemes that attach to the templatic morphemes. clitics are morphemes that have the syntactic characteristics of words, but are phonologically bound to another word @xcite , for example "wa", meaning "and". affixes are phonologically and syntactically part of the word, and often represent inflectional features, such as person, gender, number, and more. dialectal arabic (da) is a primarily spoken family of language varieties (and in modern days, widely used in written form on social media as well) that exist alongside the written msa. da diverges from msa on several levels. there are differences in phonology, morphology, lexicon, and orthography @xcite . the regional dialects can be broken down into main groups, with one possible breakdown being egyptian, levantine, gulf, iraqi, and maghrebi. even within dialect groups there can be quite a lot of variance between dialects, although in many cases there is a certain level of intelligibility between speakers of different dialects, with more significant difficulty across dialect groups. maghrebi dialects are influenced by contact with french and berber languages, and the westernmost varieties could be unintelligible to speakers from other regions in the middle east, especially in spoken form @xcite . while ja can be looked at as an ensemble of arabic dialects, it is first and foremost a subgroup of jewish languages. jewish languages are a family of language varieties that developed in jewish communities throughout the diaspora. the original language used by jews in the land of israel was hebrew, followed closely by aramaic. as jews spread across the world, they adopted local languages and developed distinctive varieties of these languages. nonetheless hebrew remained their liturgical language, even as it almost died out as a spoken language until its revival in the late 19th and early 20th centuries. perhaps the most well-known of these jewish languages is yiddish, the judeo-german language developed by ashkenazi jews living in central and eastern europe before the holocaust. jewish languages vary in their distance and divergence from their non-jewish sister languages, some being influenced by multiple languages due to language contact. nonetheless, among the features that tie these languages together are the presence of hebrew and aramaic lexical components @xcite , the use of the hebrew alphabet for writing, and more. algerian ja (aja) is a member of the north african judeo-arabic dialect group, i.e., dialects spoken and written by jews of the maghreb. aja is in contact with moroccan and tunisian arabic dialects (both jewish and muslim), with french and to a lesser extent other trade languages such as spanish and italian, and with hebrew and aramaic, the historical jewish cultural languages. in general aja shares many characteristics with other jewish languages, including the use of hebrew script, presence of hebrew and aramaic components, and a mixture of conservative trends, vernacular features, and heterogeneous elements @xcite . to date, aja has been sparsely studied by linguists. the aja dialect of the city of algiers was studied over a century ago by @xcite , with most of the recent work on aja published by tirosh-becker, focusing on constantine, the third largest city in algeria @xcite @xcite .
aja research employs fieldwork interviews of informants and the study of selected written texts (e.g., @xcite @xcite) . regrettably, the number of aja speakers has decreased following algeria's independence @xcite and the subsequent dispersion of its jewish communities, making fieldwork today almost impossible. hence, this research is now shifting towards an analysis of the vast textual sources left by many of these jewish communities, in both manuscript and print form. most of the linguistic analyses done thus far on aja texts have been based on a single text or a few texts, as each study requires extended effort of poring over texts, dictionaries, and grammars. given the size of these corpora, this is a perfect match for machine learning and nlp approaches. ## data this project has used the tagged algerian judeo-arabic (taja) corpus developed by tirosh-becker and becker (2022), which is available through the authors. this aja corpus is a collection of modern aja texts published in algeria in the late 19th and the first half of the 20th century. the texts represent a variety of prose genres written by algerian jews. these texts were manually typed into computer-readable format and subsequently proofread, as hebrew ocr (optical character recognition) failed on these aja texts. this was due not only to the less-than-favorable conditions under which the books had been stored, leaving the pages grayed and worn, but also because the fonts used in these books are not identical to standard hebrew, as they have ja-specific adaptations, such as diacritics. each text was manually tokenized and annotated by research assistants (ras, usually ma or phd candidates) in a spreadsheet, according to strict guidelines, and most were verified by a senior expert. the digitization and annotation project spanned several years, with some dozen ras contributing to the annotation efforts. approximately 80% of the time spent on the creation of taja was dedicated to the annotation process, as the digitization is a more straightforward (though non-trivial) task. ## preprocessing in this section, we describe several challenges we faced in the preprocessing stage and the steps we took to address them. ## real-world evaluation the end goal of this project is to provide aja language experts with an automatic tagger to help them annotate large volumes of text, a task which is otherwise laborious and time-consuming when tackled manually. to evaluate such real-world usefulness of the taggers we set out to compare the performance of our two best pos models (the hierarchical char-cnn based model and marmot) with that of manual annotation by two expert aja linguists. ## discussion and conclusion the pressing real-world challenge facing researchers of algerian judeo-arabic (aja) dialects is how to scale up their linguistic analyses from individual texts to large textual collections. the rich morphology of arabic (as of other semitic languages) and the scarcity of expert linguists make this complex and time-consuming task impractical unless aided by automation. hence, developing automatic taggers that would support real-world linguistic analysis at scale and prove useful for aja linguists is the challenge we aim to tackle. reflecting the linguists' challenges, we focus on the performance of the morphological tagger in tests that are predictive of the real-world setting.
for this reason, we did not limit ourselves to purely automated approaches, but also explored a hybrid human-machine approach, wherein the human expert contributes to the automatic approach. the rich morphology of arabic and its use of morpho-syntactic affixes led us to focus on character-based models (rather than word-based models), as these can identify key morphemes that are essential for annotating oov words. starting from a word-based lstm neural network architecture, we integrated character-level information via either an lstm or a cnn. subsequently we explored a two-tier hierarchical approach to morphological tagging, with pos tags at its base and the morphology tags building on that. this hierarchy mirrors the underlying character of arabic annotation, where each pos tag has a set of legal morphological tags. the two-tier approach also enables exploring a human-in-the-loop step in between the two tiers. our best performing strategy, denoted ajatag for simplicity, is now available for use by aja linguists at https://github.com/technion-cs-nlp/nlp4aja. to evaluate the usefulness of the ajatag strategy we compared it to the off-the-shelf pos and morphological tagger, marmot, which is based on crf. all models were trained on the annotated taja corpus. for the base task of pos tagging, we found that among the evaluated neural network architectures, representing a word using a cnn run on its characters performed better than using an lstm or ignoring the characters altogether. training on the taja corpus, the pos accuracy of the char-cnn model was 87.4±0.58%. this accuracy is only slightly lower than the 89.17% accuracy obtained by marmot for this task. the 1.5% difference suggests essentially similar performance for the two models in a real-world setting. morphology tagging, as indicated above, is the most challenging and time-consuming task, taking up 80% of the expert linguist annotation time. here, too, char-cnn performed better than the other neural network models we explored, especially in a two-tier hierarchical approach. the accuracy of this model, denoted herein as 'hierarchical char-cnn (predicted pos)', ranges from 81% to 91% for the different morphology analysis fields (analysis1, analysis2, additional tags). to further improve the performance, we allowed for human input between the two tiers in the form of manual correction of pos tags. using 'true pos' assignments, instead of the predicted assignments, further improved the performance of the 'hierarchical char-cnn (true pos)' morphology tagger. we denote this hybrid strategy ajatag and have compared its performance on aja to marmot. we use marmot as is, without modifications or adaptations to a hybrid setting, because for the linguists it is an off-the-shelf tool that is to be used as is. evaluation of the morphological tagging by ajatag demonstrated favorable performance across multiple evaluation metrics. it should be noted that the greatest gain in accuracy is in analysis1, which, of the morphological analysis fields, is the richest and most difficult to assign. both approaches perform well in identifying the enclitic field, with an accuracy greater than 96%. however, this important performance indicator is where our hybrid ajatag strategy delivered its most important fruits.
the accuracy of ajatag in the challenging task of morphologically tagging oov words is 74.91% and 78.42% for the analysis1 and analysis2 fields, respectively, which is significantly better than marmot's oov tagging for these two fields (55.82% and 59.95%, respectively). ajatag also performs much better in the additional tags field for oov words (85.4% compared to marmot's 75.0%). the justification for the hybrid approach explored herein is in its real-world usefulness, outside of the nlp lab. the 56%-60% accuracy of the off-the-shelf solution for the two most important morphological fields, analysis1 and analysis2, when applied to oov words is not sufficient for real linguistic work. in contrast, the hybrid ajatag strategy achieved an accuracy level of 74.91%-78.42% on morphological tagging of oov words, which is expected to be useful for real-world applications, improving upon marmot by 18%-19% for this task on both analysis fields. it is reassuring that even without the added human input, our fully automated hierarchical char-cnn performed better than marmot on pos and analysis1 tagging of oov words. the value of the ajatag strategy was further confirmed by other performance indicators, including its overall accuracy and its accuracy on words with legal tag combinations, as defined above. to assess the feasibility of the human interface element in ajatag, we performed a real-world evaluation of this process. the first-tier pos output was given to two aja linguists to correct, before moving on to the second-tier morphology tagging. pos tags manually corrected by a senior expert were perceived as the 'true' pos assignment, to which the performance of the automatic taggers as well as the corrections by a junior expert were compared. it is reassuring that both automated taggers, our char-cnn model and marmot, performed well at an almost identical accuracy (~89%) relative to the 'true' pos, an accuracy quite similar to the 91% accuracy by the junior expert, who is a phd candidate with several years of experience in aja linguistics. to conclude, while not perfect, the hybrid ajatag approach provides aja linguists with a working solution that already impacts their real-world workflow in a way that off-the-shelf tools cannot provide. in the future we plan to continue improving these tools by addressing limitations such as tagging words with illegal tag combinations. nonetheless, we believe that even in its current form ajatag could prove useful to linguists as they take on the task of analyzing large untagged aja corpora. we hope that in the future we will be able to expand the utility of these tools to other judeo-arabic dialects.
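the two-tier architecture lends itself to a compact sketch. below is a minimal pytorch-style illustration (ours, not the published implementation): the sentence-level context of the actual models is omitted, all names and dimensions are illustrative, and the human-in-the-loop step corresponds to passing corrected pos ids into the second tier.

```python
import torch
import torch.nn as nn

class CharCNNWordEncoder(nn.Module):
    """encode a word from its characters with a 1d convolution + max pooling."""
    def __init__(self, n_chars, char_dim=32, n_filters=64, kernel=3):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel, padding=1)

    def forward(self, char_ids):                  # (batch, word_len)
        x = self.emb(char_ids).transpose(1, 2)    # (batch, char_dim, word_len)
        return torch.relu(self.conv(x)).max(dim=2).values  # (batch, n_filters)

class TwoTierTagger(nn.Module):
    """tier 1 predicts pos; tier 2 predicts a morphology field conditioned on
    the word encoding plus a pos embedding. a human may correct the tier-1
    output before it is fed to tier 2 (the human-in-the-loop step)."""
    def __init__(self, n_chars, n_pos, n_morph, pos_dim=16):
        super().__init__()
        self.encoder = CharCNNWordEncoder(n_chars)
        self.pos_head = nn.Linear(64, n_pos)
        self.pos_emb = nn.Embedding(n_pos, pos_dim)
        self.morph_head = nn.Linear(64 + pos_dim, n_morph)

    def forward(self, char_ids, corrected_pos=None):
        h = self.encoder(char_ids)
        pos_logits = self.pos_head(h)
        pos = corrected_pos if corrected_pos is not None else pos_logits.argmax(-1)
        morph_logits = self.morph_head(torch.cat([h, self.pos_emb(pos)], dim=-1))
        return pos_logits, morph_logits
```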
| 18,123
|
141
| 2,024
|
Time is Encoded in the Weights of Finetuned Language Models
|
We present time vectors, a simple tool to customize language models to new time periods. Time vectors are created by finetuning a language model on data from a single time (e.g., a year or month), and then subtracting the weights of the original pretrained model. This vector specifies a direction in weight space that, as our experiments show, improves performance on text from that time period. Time vectors specialized to adjacent time periods appear to be positioned closer together in a manifold. Using this structure, we interpolate between time vectors to induce new models that perform better on intervening and future time periods, without any additional training. We demonstrate the consistency of our findings across different tasks, domains, model sizes, and time scales. Our results suggest that time is encoded in the weight space of finetuned models.
|
https://aclanthology.org/2024.acl-long.141
|
## introduction temporal variation is a fundamental characteristic of language. as we show in §3, it manifests in language model development as temporal misalignment, where deviations in train and test data lead to large performance degradation across different time periods @xcite . this necessitates adaptation techniques for customizing models to specific time periods. designing such techniques is difficult, however, due to the multitude of time scales and the possibility that data from a target time period might be unavailable. recent work has shown that the behavior of neural networks can be edited through closed-form interpolation between parameters of finetuned models @xcite @xcite . in this work, we demonstrate that weight-space interpolation can also be used to cheaply edit language model behavior over time. to this end, we introduce time vectors ( §4), an extension of task vectors to the time domain. figure 1 : we present time vectors, a simple tool to customize language models to new time periods. time vectors (τ i ) specify a direction in weight space that improves performance on text from a time period i. they are computed by subtracting the pretrained weights (θ pre ; left panel) from those finetuned to a target time period (θ i ). we can customize model behavior to new time periods (e.g., intervening months or years) by interpolating between time vectors and adding the result to the pretrained model (middle panel). we can also generalize to a future time period j with analogy arithmetic (right panel). this involves combining a task-specific time vector with analogous time vectors derived from finetuned language models (τ lm j ). we use this structure of time vectors to induce new models that perform better on intervening and future time periods. figure 2 : we evaluate language model perplexity (wmt), rouge-l (news summarization), and macro f1 (political affiliation classification). each cell indicates the monthly performance of t5-3b finetuned and evaluated on a single year from that task. we report the percentage difference from the average performance for each year, and find linear degradation as finetuning and evaluation years become more misaligned regardless of task. we display similar trends for t5-small and medium, as well as for other domains and tasks, in §a.1. we measure the linearity of these degradations in the appendix. our results show that temporal variation is to some extent encoded in the weight space of finetuned models, and that weight interpolation can help customize language models to new time periods. we publicly release our code, data, and over 500 models finetuned on specific time periods. ## data and finetuning in this section, we describe our datasets and finetuning techniques, which serve as the basis for all subsequent experiments. we finetune language models on multiple time-stratified datasets, which we use to analyze temporal misalignment and build time vectors. then, we explore different ways of interpolating between time vectors to generalize to new times. see §4.3-4.5 for more details on interpolation strategies. ## temporal misalignment at multiple time scales we begin with an analysis of temporal misalignment using the new set of models and tasks that we consider in this work ( §2). these findings set the stage for our creation of time vectors in §4. figure 3 : monthly temporal degradation has seasonal patterns. each cell indicates the monthly performance of t5-small finetuned and evaluated on a single month of the wmt dataset.
we report the percentage difference in test perplexity from the average on the evaluation month over all finetuned t5-small models (darker is better). the diagonal indicates that each model does best on its finetuning month. models also do relatively better on the same month in other years, visible as the stripes radiating out from the diagonal every 12 months. ## temporal adaptation with time vectors the collection of year- and month-finetuned models from §3 presents a new source of data to study temporal misalignment: model weights. in this section, we analyze these weights through the lens of time vectors, formed by taking the difference of a model finetuned on a specific time and the pretrained model. first, we show that the weights of two time vectors become less similar as the times they were finetuned on become more misaligned ( §4.2). then, we attempt to use the reverse relationship to update models to unseen times: reducing misalignment on intervening ( §4.3), future ( §4.4), and multiple time periods ( §4.5) by interpolating time vectors. ## conclusion we connect studies of temporal misalignment and weight arithmetic with time vectors, formed by finetuning a model on a specific time period and then subtracting its pretrained weights. we show that the weights of time vectors are more similar if their corresponding times are closer and vice versa. these similarities are highly correlated with temporal misalignment at both yearly and monthly scales (which exhibit seasonal patterns). leveraging this temporal structure in weight space, we induce new models that perform better on intervening years by interpolating between adjacent time vectors. similarly, we use task analogies to improve downstream performance on future time periods using only unlabeled data from those times. these results show that task arithmetic can be a simple tool for updating models to new time periods.
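the weight arithmetic underlying these results is simple enough to sketch over pytorch state dicts. the following is an illustration under our own naming; the interpolation form follows the figure 1 caption, and the analogy scaling coefficient lam is an assumption:

```python
def time_vector(finetuned_sd, pretrained_sd):
    """tau_i = theta_i - theta_pre, computed parameter-wise on state dicts
    (dicts mapping parameter names to tensors)."""
    return {k: finetuned_sd[k] - pretrained_sd[k] for k in pretrained_sd}

def interpolate_time(pretrained_sd, tau_a, tau_b, alpha):
    """customize the pretrained model to an intervening period:
    theta = theta_pre + alpha * tau_a + (1 - alpha) * tau_b."""
    return {k: pretrained_sd[k] + alpha * tau_a[k] + (1 - alpha) * tau_b[k]
            for k in pretrained_sd}

def analogy_time(tau_task_i, tau_lm_i, tau_lm_j, lam=1.0):
    """generalize a task-specific time vector to a future period j by analogy:
    tau_task_j ~ tau_task_i + lam * (tau_lm_j - tau_lm_i); lam is assumed."""
    return {k: tau_task_i[k] + lam * (tau_lm_j[k] - tau_lm_i[k])
            for k in tau_task_i}
```

for example, interpolating with alpha = 0.5 between the time vectors of two adjacent years targets the intervening period without any additional training.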
| 27,308
|
710
| 2,024
|
Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation
|
The latest generative large language models (LLMs) have found their application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models. However, more research is needed to assess how different prompts, seed data selection strategies, filtering methods, or model settings affect the quality of paraphrased data (and downstream models). In this study, we investigate three text diversity incentive methods well established in crowdsourcing: taboo words, hints by previous outlier solutions, and chaining on previous outlier solutions. Using these incentive methods as part of instructions to LLMs augmenting text datasets, we measure their effects on generated texts’ lexical diversity and downstream model performance. We compare the effects over 5 different LLMs, 6 datasets and 2 downstream models. We show that diversity is most increased by taboo words, but downstream model performance is highest with hints.
|
https://aclanthology.org/2024.acl-long.710
|
## introduction the emergence of large language models (llms) such as gpt-4, llama, etc., has sparked interest in using them to augment textual datasets @xcite @xcite . in these scenarios, the number of samples is expanded by paraphrasing existing ones through llm prompting. the created paraphrases are then added to the original dataset and used for downstream model training. such methods have been explored for various domains such as sentiment classification @xcite , news classification (piedboeuf and langlais, 2023) and health symptoms classification @xcite . however, investigation of the effects of various prompts, specific instructions, and crowd-inspired seed data selection in the llm-based text augmentation process is lacking. crowdsourcing is an established practice for collecting training or validation examples for a variety of nlp tasks. scenarios of data collection using human workers can be similar to those of data augmentation: workers create paraphrases of existing sentences chosen from a dataset. the aim of such data collection is to increase the data diversity and the subsequent performance of classifiers trained on the data @xcite . to increase the diversity, various methods are used in crowdsourcing to guide workers. these include taboo words @xcite , where the most significant words from the collected data are identified and listed in the worker instructions to be avoided during paraphrasing; chaining @xcite , where outliers in the previous paraphrases are identified and used as seed sentences in the next round of data collection; and hints, where previous outlier paraphrases are used as examples in the instructions. the hints @xcite method itself is similar to llm in-context learning, where examples are included in the instructions for the model to achieve better performance. all of these diversity incentive methods report increased diversity of paraphrases and some also report increased performance of the classifiers trained on the so-collected data. this work is inspired by the parallels between crowdsourcing and llm prompting, and by the effects of diversity incentive methods on the diversity of paraphrases and the performance of models trained on them. we investigate the effects of the three diversity incentive methods (originating in crowdsourcing) on data augmentation using llms. the baseline, taken from a previous study @xcite , is simple prompting for paraphrases. measuring paraphrase diversity and downstream performance of classification models, we assess whether the diversity incentives (added to the base prompt) improve llm outputs similarly as in crowdsourcing scenarios. to our knowledge, this is the first work to investigate the effects of diversity incentive methods on llms. in this paper, we answer the following research questions: rq1: does the usage of diversity incentive methods on llms yield more diverse paraphrases? (compared to base prompting) rq2: do classifiers achieve better performance if trained on data augmented using diversity incentive methods on llms? (compared to base prompting) to answer these questions, we have conducted a data augmentation experiment using 5 different llms on 6 different datasets in the tasks of sentiment (movie and app reviews), news, and intent (flight and voice assistant commands) classification. in this experiment, we repeatedly collect llm paraphrases using different diversity incentive methods.
then, we compare the lexical diversity of the collected data and the performance of downstream classifiers. additionally, we also conduct an ablation study, where we modify the diversity incentive methods with random data to validate that the inputs used by these methods (e.g., most influential taboo words, outlier paraphrases) contribute to the method's performance, and a combination of the best performing methods for lexical diversity and model performance. in total, we collected 253,500 paraphrases. the most prominent findings are the following: 1) we do not observe statistically significant improvements in lexical diversity of the generated datasets, but only minor improvements using the taboo method, 2) the hints method increases the performance of classification models trained on such data compared to the baseline, while also reducing standard deviation and thus increasing the stability of results, 3) the chaining method and taboo method both do not significantly affect the performance of classification models trained on such data compared to the baseline. ## data collection and evaluation methodology we collected paraphrases for all combinations of the following: 5 different llms, 6 datasets, and 3 diversity incentive methods + 1 base prompting. for each combination, 5 collection iterations were performed: in each, 6 random seed sentences per label were drawn from a dataset. for each prompt fired, 5 paraphrases were collected. this totalled 142,500 collected paraphrases when aggregated across datasets and llms. for the ablation study and the combination of best methods in section 6, we collected an additional 111,000 paraphrases. as the diversity incentive methods need some previously collected data to determine their cues (hints, seeds or taboo words), each iteration consisted of 2 rounds: first we collected data using only the basic prompt, and in the second round we collected data using the given diversity incentive method (or the base prompt method). thus, the resulting datasets for each method consist of seed data and data collected from both rounds. the entire data collection process is visualized in figure 1 . after the paraphrases were collected, we evaluated them in several steps. first, we manually checked the validity of a subset (50%) of the collected data (i.e., is the created sample a true paraphrase retaining the label?). second, we computed the diversity of the collected data, comparing the mean vocabulary size (no. of unique words) and mean number of unique 3-grams for each diversity incentive method (refers to rq1). third, we evaluated the performance of models trained on the created paraphrases (refers to rq2). for each combination of llm, dataset and method, we finetuned bert-large 5 times and mistral-7b-v0.1 3 times (the dataset also determined the classification task to which a model was finetuned). we evaluated the accuracy of the trained model on the full test set for the given dataset and, for mistral, on a subset of the test set to save computational resources, following previous works @xcite @xcite , as the inference time is long and costly. details of the finetuning process can be found in appendices d and e. ## finetuning models on data collected via diversity incentive methods to investigate whether the diversity incentive methods improve the performance of downstream models, we finetuned bert-large 5 times and mistral 3 times for each llm-dataset combination.
additionally, as we work with limited data, which was found to cause large variance and instability in finetuning results @xcite @xcite , we sampled data 5 times. this resulted in 25 finetuned classifiers for bert (5 data collection rounds and 5 finetunings for each of those data collection rounds) and 15 for mistral that we evaluate per dataset-llm combination. the full details about hyperparameters and the finetuning setup of the bert and mistral classifiers can be found in appendices d and e respectively. we report the accuracy of the finetuned models on the test split of each dataset and focus on 2 main attributes: mean accuracy and stability of performance (by measuring the standard deviation of accuracy). additionally, we also conducted mann-whitney u tests (significance level .05) between the baseline prompt method and the other diversity incentive methods. we are interested in consistent, better performance of a diversity incentive method over the prompt baseline across llms and datasets, as fluctuating performance could be an indicator of random effects. ## combining diversity incentives as the taboo method achieved the best results in lexical diversity and the hints method achieved the best results in model performance, as a follow-up we decided to combine these two methods to see if we could achieve an improvement. we have performed the data collection and finetuning process in the same way as described in section 3. in terms of lexical diversity, the combined method does not have any statistically significant effect on the results, although the mean number of unique words is higher than the baseline in 18/30 cases and the number of unique n-grams is higher in 16/30 cases. however, in some of the remaining cases a considerable (more than 5%) drop can be observed. in terms of model performance, the combined method statistically significantly decreased the model performance over the baseline in 5/30 cases with no increases for bert, and increased performance in 4/30 cases for mistral. additionally, it always performed worse than either the hints or the taboo method. in summary, the combination of the hints and taboo methods into one method grants little to no advantage over either of the methods in both lexical diversity and model performance. we hypothesize that this might be due to the more complicated instructions given to the llm when collecting the data. a decoupling of the methods in a chain of tasks could potentially improve this approach in the future. ## discussion given the results of our experiments, we note the following observations: first, contrary to the performance of diversity incentive methods observed by related work in crowdsourcing settings (better lexical diversity of paraphrases and better performance of downstream models), not all of the methods show improvement of the lexical diversity when used with llms. the worst performing method is the chaining method, for which recent works have already pointed out that llms create progressively worse paraphrases when using their own outputs as seed sentences repeatedly @xcite . however, none of the changes in lexical diversity are of statistical significance. second, the best performing method for data augmentation is the hints method, which is similar to in-context learning where demonstrations of samples are provided to the llm as part of the prompt. this might be the reason why this method works so well, as the llm's own paraphrases guide it to better output, similar to in-context learning.
third, we observe that, contrary to some previous works @xcite , the lexical diversity of the paraphrases does not correlate with the performance of models trained on them. even though the data collected using the taboo method yield the highest lexical diversity, models trained on such data do not achieve consistently better performance than the baseline. fourth, the increase in mean performance and stability seems small in absolute terms, but in relative terms (compared to the baseline method) it is notable, as the increase in mean performance ranges from 0.6% to 2.5% over the baseline for bert and from 1% to 11% for mistral. for stability, the increases are even larger: for bert the range is between 5% and 35% over the baseline, and for mistral from 10% to 66%. fifth, diversity incentives require additional computations (for significant words and outlier paraphrases) and also require a larger llm context (e.g., hints use additional paraphrases in the instructions of the model), meaning higher costs. as such, the gains may not always warrant the increased computation costs of diversity incentives. sixth, the combination of the best method for lexical diversity (taboo) and the best method for model performance (hints) did not yield increases in both lexical diversity and model performance, but performed rather poorly. we hypothesize that this might be due to the increased context length for the llm, with additional instructions that are hard to perform in one single action. the promising results of the hints method open possibilities for investigating in-context learning for text generation in llms, as the quality of data generated using hints seems to be better than without them. this is in line with recent results @xcite that indicate that the usage of previous examples in instructions for llms leads to better generated data.
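for concreteness, the taboo and hints incentives can be rendered as prompt templates along the following lines; the exact wording used in the study is not reproduced here, so these templates are illustrative only:

```python
def taboo_prompt(seed, taboo_words):
    """base paraphrase prompt extended with the taboo-words incentive: the most
    significant words from previously collected data are forbidden."""
    banned = ", ".join(taboo_words)
    return (f"Paraphrase the following sentence: \"{seed}\"\n"
            f"Do not use any of these words: {banned}.")

def hints_prompt(seed, outlier_paraphrases):
    """base prompt extended with the hints incentive: previously collected
    outlier paraphrases are shown as examples, akin to in-context learning."""
    examples = "\n".join(f"- {p}" for p in outlier_paraphrases)
    return (f"Here are examples of diverse paraphrases:\n{examples}\n"
            f"Paraphrase the following sentence: \"{seed}\"")
```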
| 27,878
|
16
| 2,021
|
Interesting cross-border news discovery using cross-lingual article linking and document similarity
|
Team Name: team-8. Embeddia Tool: Cross-Lingual Document Retrieval (Zosa et al.). Dataset: Estonian and Latvian news datasets. Abstract: Contemporary news media face increasing amounts of available data that can be of use when prioritizing, selecting and discovering new news. In this work we propose a methodology for retrieving interesting articles in a cross-border news discovery setting. More specifically, we explore how a set of seed documents in Estonian can be projected in Latvian document space and serve as a basis for discovery of novel interesting pieces of Latvian news that would interest Estonian readers. The proposed methodology was evaluated by an Estonian journalist who confirmed that in the best setting, from the top 10 retrieved Latvian documents, half of them represent news that are potentially interesting to be taken by the Estonian media house and presented to Estonian readers.
|
https://aclanthology.org/2021.hackashop-1.16
|
## introduction this paper presents our results of participation in the hackathon, which was organised as part of the eacl 2021 hackashop on news media content analysis and automated report generation. we are addressing the embeddia hackathon challenge on identifying interesting news from neighbouring countries @xcite in the estonian and latvian context, which is a fully novel document retrieval task performed on the recently released embeddia news datasets. estonian journalists are very interested in identifying stories from latvia that will attract a large number of readers and are "special". while performing a keyword-based search for latvian news where estonians are mentioned is a simple task, this challenge, on the contrary, aims to identify a small set of documents from a larger number of topics, e.g., scandals, deaths and gossip, that might be somehow connected to estonia: not only by mentioning estonians but by identifying news and stories that estonians relate to (for example, when similar things have happened in estonia or when similar news have been popular in estonia). in our approach, we first automatically create a collection of interesting articles using a string-based search and cross-lingual document linking, and then rank the query documents based on the proportion of interesting documents in their neighbourhood (where the neighbourhood is defined by document similarity) using the newly introduced seed news of interest score (snir) @xcite . the article first presents the datasets (section 2), introduces the methodology (section 3), and presents our experimental results (section 4). the code and the data are made publicly available (see section 5). finally, section 6 concludes the paper and presents ideas for further work. ## datasets in this study, we used the following resources. ## methodology our methodology consists of two steps. first, we automatically construct the datasets of interesting latvian articles and next propose a method to retrieve interesting articles by ranking a given query document based on the proportion of interesting articles in its neighbourhood. ## availability the code and data of the experiments are made available on github: https://github.com/bkolosk1/interesting-cross-border-news-discovery ## conclusion and future work in this work we tackled the problem of retrieving interesting news from one country for the context of another neighbouring country. we focused on finding interesting news in the latvian news space that would be engaging for the estonian public. we used the latvian and estonian embeddia datasets to construct the document space. first we used a string matching approach to identify a subset of news in estonian media that originated from latvian news. next, we utilized methods for ad hoc cross-lingual document retrieval to find corresponding articles in the latvian news space. after automatically retrieving this set of latvian news articles of interest, we used this information in a novel metric, snir, that analyses a news article's neighbourhood in order to measure its relevance (interestingness). the assumption of the metric is that if the surrounding documents of a query point are relevant, this new point might be of relevance.
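a neighbourhood-based score of this kind admits a compact sketch. in the following illustration, the cosine-similarity neighbourhood and the proportion-based score are our assumptions; the paper's exact snir definition may differ:

```python
import numpy as np

def snir(query_vec, doc_vecs, interesting_mask, k=10):
    """share of 'interesting' documents among the k nearest neighbours of a
    query document in a (cross-lingual) embedding space, following the stated
    assumption that a query is relevant if its neighbourhood is."""
    doc_vecs = np.asarray(doc_vecs, dtype=float)
    interesting_mask = np.asarray(interesting_mask)
    doc_norm = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    q_norm = np.asarray(query_vec, dtype=float) / np.linalg.norm(query_vec)
    sims = doc_norm @ q_norm                      # cosine similarities
    nearest = np.argsort(-sims)[:k]               # indices of the k neighbours
    return float(np.mean(interesting_mask[nearest]))
```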
the snir scores of 20 randomly selected documents and 20 documents identified as examples of interesting news by an estonian journalist showed that their values differ. for further work, we propose exploring the keywords appearing in the clusters of interesting news and exploiting their named entity tags in order to achieve even better performance. we also want to include background knowledge from knowledge graphs to improve the document similarity evaluation. special attention will also be paid to setting a threshold for snir which would allow for real-time investigation of the best candidates in real journalistic practice.
| 10,065
|
11
| 2,022
|
Unmet Creativity Support Needs in Computationally Supported Creative Writing
|
Large language models (LLMs) enabled by the datasets and computing power of the last decade have recently gained popularity for their capacity to generate plausible natural language text from human-provided prompts. This ability makes them appealing to fiction writers as prospective co-creative agents, addressing the common challenge of writer’s block, or getting unstuck. However, creative writers face additional challenges, including maintaining narrative consistency, developing plot structure, architecting reader experience, and refining their expressive intent, which are not well-addressed by current LLM-backed tools. In this paper, we define these needs by grounding them in cognitive and theoretical literature, then survey previous computational narrative research that holds promise for supporting each of them in a co-creative setting.
|
https://aclanthology.org/2022.in2writing-1.11
|
## introduction mixed-initiative co-creative @xcite creativity support tools @xcite for creative writing have recently seen a surge of interest in research communities, coinciding with the introduction of large language models (llms) such as gpt-3 @xcite that can provide coherent suggestions for the continuation of human-written text. several recent efforts have been made to understand the experiences of writers who work with these tools to produce texts @xcite @xcite . however, less attention has been paid to the development of systems that can provide forms of creative writing support beyond short-term suggestions for textual continuation. meanwhile, recent efforts to understand the playful creative writing communities that have emerged around interactive emergent narrative games @xcite and to provide computational support for playful creative writing at the plot-structure level @xcite have revealed a preliminary inventory of several distinct but interrelated creativity support needs among creative writers, including getting unstuck, maintaining narrative consistency, developing longer-term plot structure, managing reader experience, and refining high-level expressive intent. current large language models are good at addressing the first of these needs, getting unstuck, via short-term suggestions that can prompt writers to take their stories in unexpected new directions. however, they do not directly address consistency maintenance, longer-term plot structure, management of reader experience, or the challenge of refining high-level expressive intent, and some novelists even suggest that llms may actively work against the construction of coherent plot structure due to the highly divergent nature of llm suggestions @xcite . some recent work aims to improve llms in ways that could enable them to meet these needs: for instance, work in long text generation @xcite @xcite could assist users with consistency maintenance; work on hierarchical concept-driven language models @xcite could help to maintain plot structure in generated text; and work in diverse decoding methods @xcite could help users refine their intent by selecting from among diverse potential completions of the same text. however, the possibility of supporting these needs through other forms of technology may also be worth investigating. in this paper, we describe each of these creative writing support needs in more detail, then survey previous research from communities outside of nlp/computational linguistics that has either been shown capable of addressing these creative needs or that shows potential for supporting them. our aim with this paper is to create a bridge between the acl community and the ai/digital games research community that may yield productive insight towards synthesizing these approaches that have evolved in parallel. we limit the scope of our discussion primarily to narrative fiction, particularly in the form of short stories, novels, and game writing/interactive storytelling, so the suggestions made here may not all be applicable to other forms of creative writing (such as poetry). however, we attempt to avoid limiting ourselves to purely text-based storytelling in which only the written word is used to convey meaning; we are also interested in forms of narrative fiction that target visual, audio, and hybrid renderings of fictional events, such as film and game narrative, since many technologies capable of reasoning about plot structure are readily applicable to these domains. ## technologies and approaches in this section, we overview technologies that have shown promise for addressing the needs outlined in the previous section.
## conclusion we have presented five creative writing support needs, only one of which (getting unstuck) is meaningfully supported by current large language models, and surveyed technologies for addressing the remaining four needs that have arisen from the ai/digital games research community. these technologies are at varying levels of maturity, and most of them have only been tested in purely automated or generative forms rather than in mixed-initiative, co-creative interaction modes. an important line of future work will be to evaluate these technologies in those modes and determine interfaces and interaction protocols that amplify and foster human creativity in the writing process. our goal with this paper is not to assert the superiority of world-model- or knowledge-engineering-based approaches over llms, but rather to emphasize that there is a set of needs and affordances that these techniques can address and provide that are complementary to the needs addressed and affordances provided by llms. by bridging research communities focused (on one hand) on computing with natural language and (on the other) on simulating story worlds and reasoning about narrative structure, we hope to pave the way for hybrid and unified models that can transform the human creative writing experience, much like the neurosymbolic approaches to automated story generation (martin, 2021) that undergird several recent advances in story generation as a field.
| 17,222
|
114
| 2,025
|
QUST_NLP at SemEval-2025 Task 7: A Three-Stage Retrieval Framework for Monolingual and Crosslingual Fact-Checked Claim Retrieval
|
This paper describes the participation of team QUST_NLP in the SemEval-2025 Task 7. We propose a three-stage retrieval framework specifically designed for fact-checked claim retrieval. Initially, we evaluate the performance of several retrieval models and select the one that yields the best results for candidate retrieval. Next, we employ multiple re-ranking models to enhance the candidate results, with each model selecting the Top-10 outcomes. In the final stage, we utilize weighted voting to determine the final retrieval outcomes. Our approach achieved 5th place in the monolingual track and 7th place in the crosslingual track. We release our system code at: https://github.com/warmth27/SemEval2025_Task7.
|
https://aclanthology.org/2025.semeval-1.114
|
## introduction semeval-2025 shared task 7 focuses on the retrieval of monolingual and crosslingual fact-checked claims, aiming to tackle the global challenge of misinformation spread @xcite . we engaged in two tracks of the semeval-2025 shared task 7: monolingual and crosslingual. the monolingual track demands methods capable of retrieving the relationship between social media posts and fact-checked claims within the same linguistic environment. this task presents challenges such as noise arising from the large volume of data and difficulties related to the imbalance of language resources @xcite . the crosslingual track requires methods that can retrieve fact-checked claims related to social media posts regardless of whether the language of the post matches the language of the related fact-checked claim. the primary challenge in crosslingual retrieval lies in translation inconsistencies, particularly for low-resource languages @xcite . the absence of high-quality translation tools exacerbates the complexity of achieving accurate crosslingual semantic alignment. to tackle the aforementioned challenges, we propose a three-stage retrieval framework. initially, we evaluate and employ several pre-trained language models for preliminary retrieval of candidate results @xcite , thereby mitigating the noise caused by the large data volume and alleviating the adverse effects of language resource imbalance. subsequently, a re-ranking model is applied to refine the ranking of the candidate results, raising the position of the fact-checked claims most relevant to the social media posts. for the crosslingual retrieval task, we use machine-translated data for preliminary retrieval, followed by ranking the results using a re-ranking model fine-tuned on english data. finally, a weighted voting strategy is employed to combine the outputs from multiple re-ranking models, further enhancing the system's accuracy. our approach achieved 5th place in the monolingual track and 7th place in the crosslingual track, validating its effectiveness and feasibility in addressing the aforementioned challenges. ## system description our approach uses a three-stage retrieval framework: a retrieval stage, a re-ranking stage, and a weighted voting stage. this staged design balances retrieval efficiency and accuracy, making it particularly suitable for handling large-scale datasets. by generating candidate results during the initial retrieval stage, refining them during the re-ranking stage, and finally aggregating predictions from multiple models in the weighted voting stage, we obtain the final solution. the detailed process is shown in figure 1 . ## conclusion and limitation this paper introduces a monolingual and crosslingual fact-checked claim retrieval method built on a three-stage retrieval framework. by integrating retrieval models, re-ranking models, and weighted voting, we effectively address challenges such as data noise and imbalanced language resources. our findings suggest that employing a mixed input strategy markedly enhances retrieval performance, while fine-tuning further optimizes re-ranking efficacy. our method achieved 5th place in the monolingual track and 7th place in the crosslingual track. we acknowledge that our method has limitations in terms of translation consistency and quality. future work will focus on enhancing translation quality and refining model fine-tuning strategies to overcome these challenges.
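As a rough illustration of how the three stages could fit together, the sketch below uses sentence-transformers bi- and cross-encoders; the model names, the Top-10 cutoff, and the voting weights are placeholder assumptions, and the team's released repository remains the authoritative implementation.

```python
# Sketch of the three-stage pipeline: dense retrieval -> cross-encoder
# re-ranking -> weighted voting. Models and weights are illustrative only.
from collections import defaultdict
from sentence_transformers import SentenceTransformer, CrossEncoder, util

posts = ["viral post claiming X ..."]
claims = ["fact-check: X is false ...", "fact-check: Y is misleading ..."]

# Stage 1: bi-encoder retrieval of candidate claims for each post.
retriever = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
post_emb = retriever.encode(posts, convert_to_tensor=True)
claim_emb = retriever.encode(claims, convert_to_tensor=True)
candidates = util.semantic_search(post_emb, claim_emb, top_k=100)

# Stage 2: each cross-encoder re-ranks the candidates and keeps its Top-10.
rerankers = [
    (CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2"), 0.6),
    (CrossEncoder("cross-encoder/stsb-roberta-base"), 0.4),
]
final = []
for post_idx, hits in enumerate(candidates):
    votes = defaultdict(float)
    pairs = [(posts[post_idx], claims[h["corpus_id"]]) for h in hits]
    for ce, weight in rerankers:
        scores = ce.predict(pairs)
        top10 = sorted(zip(hits, scores), key=lambda x: x[1], reverse=True)[:10]
        # Stage 3: weighted voting by reciprocal rank within each Top-10 list.
        for rank, (hit, _) in enumerate(top10, start=1):
            votes[hit["corpus_id"]] += weight / rank
    final.append(sorted(votes, key=votes.get, reverse=True)[:10])
print(final)
```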
| 40,531
|
15
| 2,016
|
Investigating the Impact of Various Partial Diacritization Schemes on Arabic-English Statistical Machine Translation
|
Most diacritics in Arabic represent short vowels. In Arabic orthography, such diacritics are considered optional. The absence of these diacritics naturally leads to significant word ambiguity, on top of the inherent ambiguity present in fully diacritized words. Word ambiguity is a significant impediment for machine translation. Despite the ambiguity caused by the lack of diacritization, context helps ameliorate the situation. Identifying the appropriate amount of diacritic restoration to reduce word sense ambiguity in the context of machine translation is the object of this paper. Diacritic marks help reduce the number of possible lexical word choices assigned to a source word, which leads to better-quality translated sentences. We investigate a variety of (linguistically motivated) partial diacritization schemes that preserve some of the semantics that in essence complement the implicit contextual information present in the sentences. We also study the effect of training data size and report results on three standard test sets that represent a combination of different genres. The results show statistically significant improvements for some schemes compared to two baselines: text with no diacritics (the typical writing system adopted for Arabic) and text that is fully diacritized.
|
https://aclanthology.org/2016.amta-researchers.15
|
## introduction resolving natural language ambiguity is at the crux of the nlp enterprise. ambiguity refers to the problem of possibly having different interpretations for different segments (words, phrases, etc.) of a sentence. languages such as arabic, hebrew and persian are typically written in a manner that exacerbates this ambiguity problem and increases the homograph rate by underspecifying some of the letters such as short vowels and consonantal gemination, which in turn increases the effect of having multiple interpretations for the same word. this renders text even more ambiguous than typically expected. while context helps native speakers of the language resolve some of the ambiguity, context alone does not always produce adequate clarity for interpretation. the problem is further complicated in arabic by the fact that there are no native speakers of modern standard arabic (msa), which is the language used in education and formal settings. instead, speakers of arabic converse in various dialects of arabic which are at times starkly different from msa. one solution for this problem is diacritic restoration, or diacritization, which refers to rendering the underspecified diacritics explicit in the text. we investigate the problem of diacritization within the context of an arabic-to-english statistical machine translation (smt) system. we address the problem in msa texts, the majority of which are underspecified for these diacritic marks. we focus here on the most prominent arabic diacritics, which are the short vowels @xcite , the syllable boundary marker, known as sukoon (o), the indefiniteness marker, known as nunation (f, k, n), and the consonantal doubling marker (gemination), known as shadda (∼). in this study, we aim to investigate the appropriate level and type of diacritic restoration that would have the biggest impact on natural language understanding as tested and evaluated via machine translation. hence we experiment with various diacritization schemes based on lexical and/or syntactic information. this current work is a follow-on to the pilot work presented in @xcite . however, it differs in the following respects: (1) we explore automatically diacritized data; (2) we define more schemes that target lexical and/or syntactic properties of the arabic language; and (3) we test the robustness of our observations taking into consideration varying training sizes and cross-genre evaluation. ## related work automatic arabic diacritization has been addressed thoroughly in @xcite @xcite @xcite . full diacritization indicates rendering the text with all the most prominent diacritics, namely (a, i, u, o, ∼). initial efforts in automatic diacritization include rule-based approaches to add all diacritics in the texts @xcite ; however, it is expensive to maintain these rules and generalize them to unseen instances. most studies focused on full diacritic restoration. for automatic speech recognition (asr), @xcite perform full diacritization on msa speech transcripts for language modeling. they show that developing asr models on fully diacritized datasets improves performance significantly. supervised classifiers such as hidden markov models (hmm) and maximum entropy (maxent) have been employed for diacritization @xcite @xcite . in a study conducted by @xcite , the researchers use maxent trained on msa with lexical and n-gram features to improve asr.
another study uses decision trees and stochastic language models to fully diacritize texts in order to render graphemes to synthesized speech @xcite . the buckwalter arabic morphological analysis (bama) @xcite system has been used along with a single tagger or a language model to select amongst the diacritized analyses in context to render text fully diacritized @xcite . in @xcite , the authors show that some inflectional and lexical morphological features improve the performance of syntactic parsing in arabic. although @xcite have not used diacritics directly in their work, they use the same essential information that is used to diacritize arabic texts. @xcite not only investigate the impact of full diacritization on statistical machine translation (smt) but also introduce the notion of partial diacritization. they also show that several schemes have a small, albeit not significant, positive effect on smt performance over none and full diacritization, despite the significant increase in the number of types. although the results in @xcite are not statistically significant, they provide directions of research that we can exploit to increase the performance of arabic-related nlp applications. in a study conducted by alhanai and glass (2014), three partial diacritic schemes are defined and compared to both fully and non-diacritized versions of the words. their study finds that, for asr, fully diacritized text without gemination has statistically better performance than fully diacritized text including gemination. our work follows the same general procedure as @xcite in that we study the impact of some aspects of diacritization information on nlp applications, smt in particular. for arabic reading comprehension, @xcite studies the impact of partial diacritics on arabic speakers' reading comprehension. their study shows the effectiveness of having a level of diacritization between the none and fully diacritized forms that helps readers disambiguate homographs that cannot be understood from the surrounding contexts. this shows the importance of accurate automatic partial diacritization, not only for improving different nlp applications but also for diacritizing texts to help readers understand arabic texts better. with the goal of helping other researchers develop partial diacritization, @xcite conducted a pilot study that minimally diacritizes the dataset to reduce lexical ambiguity and helps generate models to find an optimal level of diacritization for some nlp applications. although the result of this minimally-diacritized annotation was highly affected by the annotators' subjectivity and background, it has shown promise for future studies. the idea of integrating word sense disambiguation (wsd) technologies into the smt framework has been studied previously, tackling different aspects of the phenomenon and showing statistically significant improvement when integrating explicit wsd into the smt system @xcite @xcite . mainly, wsd integration improves the ability of the system to choose the target translation if it has been incorporated efficiently. @xcite show an improvement in a chinese-to-english smt system on eight different automatic evaluation metrics when they integrate wsd in their translation system at decode time. they use the same parallel corpus used for training and the phrase translation table generated by the smt tool to disambiguate senses of the words by using the aligned phrases in the target language.
all of the previous work incorporates features that help disambiguate senses in a supervised or unsupervised manner to generate better-quality translations. some of these studies change the smt pipeline to integrate wsd, while others implement it as a pre-processing step at decode time. in this study, we have the same goal, which is to select the correct sense of a target word at decode time. we implement this by adding a certain amount of diacritics in arabic as pre-processing in the data preparation step. thus, the translation quality is enhanced not only by the appropriate choice of target word but also by improved word alignment. ## scheme extraction we investigate the impact of various partial diacritization schemes on the smt application. we compare their performance against two baselines, specifically full diacritization, where all the diacritics are present, and none, where no diacritics are present. similar to the extraction strategy of @xcite , each of these schemes is identified from fully diacritized arabic datasets. additionally, the extraction process of some schemes involves the full morphological analysis of the words' part of speech and their lemmas. to identify these morphological features, we use madamira, a morphological analyzer and disambiguator for the arabic language @xcite . the quality of the diacritization schemes relies on the performance of the automatic diacritization in predicting diacritics. it is important to note that we rely on the underlying diacritized lemma form to ensure extraction accuracy. @xcite define six different diacritization schemes based on their usage prominence in the arabic treebank (atb) @xcite . namely, they are fully diacritized (full), passive voice diacritic marks (pass), consonant doubling or gemination (gem), presence of the syllable boundary marker sukoon (suk), syntactic case and mood diacritics (cm), and the case of no diacritization (none). in this study, we adopt the same previously mentioned schemes in addition to introducing several new ones: full-cm, pass+cm, pass+gem, suk+gem, pass+suk, pass+suk+gem, full-cm-pass, tanween. the following is a detailed explanation of these diacritic schemes. the schemes are linguistically motivated, reflecting lexical, syntactic, or both types of information. the arabic sentences are written in buckwalter transliteration and are tokenized according to the atb style (arabic treebank tokenization). it is crucial to note that if a word is not affected by the defined diacritic pattern, we remove all of its diacritics (i.e., the none scheme). baselines: none: indicates that no diacritics are kept at all in the sentence, including the removal of the naturally occurring diacritics.
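To make the scheme-extraction idea concrete, here is a small illustrative sketch over Buckwalter-transliterated tokens. The diacritic inventory follows the one given above (short vowels a/i/u, sukoon o, nunation F/N/K, shadda ~); the example word and the schemes shown are hypothetical simplifications, since the paper's actual extraction also draws on MADAMIRA's morphological analyses.

```python
# Illustrative sketch of partial-diacritization scheme extraction on
# Buckwalter-transliterated, fully diacritized tokens. A simplification:
# the paper's real pipeline also consults MADAMIRA's lemma/POS analyses.
DIACRITICS = set("aiuoFNK~")  # short vowels, sukoon, nunation, shadda

def apply_scheme(token: str, keep: frozenset = frozenset()) -> str:
    """Keep only the diacritics in `keep`. Per the text, a word that is
    not affected by the scheme's pattern falls back to the NONE scheme
    (all diacritics removed)."""
    if not any(ch in keep for ch in token if ch in DIACRITICS):
        keep = frozenset()
    return "".join(ch for ch in token if ch not in DIACRITICS or ch in keep)

word = "muEal~imuwna"  # hypothetical fully diacritized token ("teachers")
print(apply_scheme(word))                  # NONE -> mElmwn
print(apply_scheme(word, frozenset("~")))  # GEM  -> mEl~mwn
print(apply_scheme(word, frozenset("o")))  # SUK  -> no sukoon, so mElmwn
```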
| 986
|