Abstracts,Class "Sign language is one of the oldest and most natural forms of language for communication, but since most people do not know sign language and interpreters are very difficult to come by, we have come up with a real-time method using neural networks for fingerspelling-based Indian Sign Language. We collected a dataset of depth based segmented RGB image for classifying 36 different gestures (alphabets and numerals). The system takes in a hand gesture as input and returns the corresponding recognized character as output in real time on the monitor screen. For classification we used Convolutional Neural Network. Our method provides 95.7 % accuracy for the 36-hand gesture.",Sign Language and Fingerspelling Recognition "Sign language recognition is one of the most challenging tasks of today__ era. Most of the researchers working in this domain have focused on different types of implementations for sign recognition. These implementations require the development of smart prototypes for capturing and classifying sign gestures. Keeping in mind the aspects of prototype design, sensor-based, vision-based, and hybrid approach-based prototypes have been designed. The authors in this paper have designed sensor-based assistive gloves to capture signs for the alphabet and digits. These signs are a small but important fraction of the ASL dictionary since they play an essential role in fingerspelling, which is a universal signed linguistic strategy for expressing personal names, technical terms, gaps in the lexicon, and emphasis. A scaled conjugate gradient-based back propagation algorithm is used to train a fully-connected neural network on a self-collected dataset of isolated static postures of digits, alphabetic, and alphanumeric characters. The authors also analyzed the impact of activation functions on the performance of neural networks. Successful implementation of the recognition network produced promising results for this small dataset of static gestures of digits, alphabetic, and alphanumeric characters.",Sign Language and Fingerspelling Recognition "Sign language users tend to be socially restricted due to the general population__ lack of knowledge of sign language. Some attempts have been made to develop technologies that improve this aspect by translating sign language. However; these approaches generally use a third-person camera for collecting the information, limiting sign users to environments prepared for this purpose.In this study, we develop a first-person view Japanese fingerspelling recognition system using an Optical See-Through Head Mount Display (OSTHMD). The system estimates the hand posture from the camera mounted on the OSTHMD and applies machine learning to the hand posture data to classify the hand gestures and convert them into speech. 37 Japanese sign language fingerspelling were successfully recognized by using a Microsoft pose extractor. Next, using a support vector machine, 37 out of 53 Japanese sign language fingerspelling were successfully identified with more than 70% identification rate. Finally, the specified labels were converted into speech using the speech output module with Azure API.The main purpose of this research is to propose a system that enables sign language users to communicate with verbal people without environmental restrictions.",Sign Language and Fingerspelling Recognition "Although not a global language, sign language is an essential tool for the deaf community. Communication between these communities and hearing population is severely hampered by this, as human-based interpretation can be both costly and time-consuming. In this paper, we present a real-time American Sign Language (ASL) generation and recognition system that makes use of Convolutional Neural Networks and deep learning (CNNs). Despite differences in lighting, skin tones, and backdrops, our technology is capable of correctly identifying and generating ASL signs. We trained our model on a large dataset of ASL signs in order to obtain a high level of accuracy. Our findings show that, with accuracy rates of 98.53% and 98.84%, respectively, our system achieves high accuracy rates in both training and validation. Our approach uses the advantages of CNNs to accomplish quick and precise recognition of individual letters and words, making it particularly effective for sign fingerspelling recognition. We believe that our technology has the ability to transform communication between the hearing community and the deaf and hard-of-hearing communities by providing a dependable and cost-effective way of sign language interpretation. Our method could help people who use sign language communicate more easily and live better in a range of environments, including schools, hospitals, and public places.",Sign Language and Fingerspelling Recognition "Sign Language Recognition (SLR) is a Computer Vision (CV) and Machine Learning (ML) task, with potential applications that would be beneficial to the Deaf community, which includes not only deaf persons but also hearing people who use Sign Languages. SLR is particularly challenging due to the lack of training datasets for CV and ML models, which impacts their overall accuracy and robustness. In this paper, we explore the use of synthetic images to augment a dataset of fingerspelling signs and we evaluate whether this could be used to reliably increase the performance of an SLR system. Our model is based on a pretrained convolutional network, fine-tuned using synthetic images, and tested using a corpus dataset of real recordings of native signers. An accuracy of 71% recognition was achieved using skeletal wireframe image training datasets and using the MediaPipe pose estimation model in the test pipeline. This compares favourably with state-of-the-art CV models which achieve up to 62% accuracy with __n-the-wild_ fingerspelling test datasets.",Sign Language and Fingerspelling Recognition "Sign language is a method of communication using hand gestures that are usually used by Deaf people. In Indonesia, there are 2 types of sign language, namely SIBI and BISINDO. However, in everyday life, BISINDO is more often used. Communication gaps often occur between Deaf people and hearing people. So that we need media that can bridge their communication. one of the technologies that can be used is SLR (Sign Language Recognition). SLR itself has various kinds of approaches, one of which is a vision-based SLR. Vision-based SLR has an advantage, such as not requiring a special device attached to the hand, but simply making gestures with bare hands in front of the camera. In this study, we created a machine learning model with a vision-based SLR approach. The model we created was using the CNN (Convolutional Neural Network) architecture. The CNN model was trained and tested on the BISINDO alphabet (A-Z) dataset that we created on our own. This model achieves an accuracy of 99.28% on validation accuracy, 98.57% on testing accuracy, and 98.07% on real-time testing accuracy.",Sign Language and Fingerspelling Recognition "The goal of sign language technologies is to develop a bridging solution for the communication gap between the hearing-impaired community and the rest of society. Real-time Sign Language Recognition (SLR) is a state-of-the-art subject that promises to facilitate communication between the hearing-impaired community and others. Our research uses transfer learning to provide vision-based sign language recognition. We investigated recent works that use CNN-based methods and provided a literature review on deep learning systems for the sign language recognition (SLR) problem. This paper discusses the architecture of deep learning methods for SLR systems and explains a transfer learning application for fingerspelling sign classification. For the experiments, we used the Azerbaijani Sign Language Fingerspelling dataset and got 88.0% accuracy.",Sign Language and Fingerspelling Recognition "The recent development of disability studies in academic bodies has expedited the promotion of investigation on disability. With computer-aided tools, communication between the impaired person and someone who does not understand sign language could be accessible. A large number of people across the world are using sign language (e.g., British Sign Language (BSL), Asian Sign Language (ASL), Indian Sign Language (ISL), etc.) with hand gestures for communication. In BSL recognition, the involvement of both hands overlapping each other becomes the main challenge. Moreover, BSL comprises ambiguous signs concerning viewpoint. However, existing traditional techniques seem in-stable, less accurate, and inefficient. In this work, the BSL fingerspelling alphabet recognition problem explores using a Deep learning framework to address the above-mentioned concerns. Convolutional Neural Network (CNN) is employed to detect and recognize for classification of 26 alphabets after being trained on the BSL corpus dataset. The proposed work outperforms the existing works with better precision (6%), recall (4%), and F-measure (5_%). It reported better results on the BSL corpus dataset and webcam videos. The model achieved better accuracy (98.0%) for a large lexicon of words than previous models (Goh & Holden [6]: 69.5%, Rambhau [9]: 79.2%, and Liwicki et al. [8]: 92.5%). The 3D CNN-based proposal performs robust hand detection, much more accurate sign recognition, more scalability, and less ambiguity in BSL finger-spelling recognition.",Sign Language and Fingerspelling Recognition "Sign language is used by deaf and hard hearing people to exchange information between their own community and with other people. Fingerspelling recognition method from isolate sign language has attracted research interest in computer vision and human-computer interaction based on a novel technique. The essential for real-time recognition of isolate sign language has grown with the emergence of better-capturing devices such as Kinect sensors. The purpose of this paper is to design a user independent framework for automatic recognition of American Sign Language which can recognize several one-handed dynamic isolated signs and interpreting their meaning. We built datasets as a raw data for alphabets (A__) or numbers (1_20) by used left-hand the 3D point (XL, YL, ZL) or switch by right-hand (XR, YR, ZR) centroid as one of contribution. The proposed approach was tested for gestures that involve left-hand or right-hand and was compared with other approach and gave better accuracy. Two machine learning methods are involved like Hidden Conditional Random Field (HCRF), and Random Decision Forest (RDF) for the classification part. The third contribution based on low lighting condition and cluttered background. In this research work is achieved for recognition accuracy over 99.7%.",Sign Language and Fingerspelling Recognition "Sign Language Recognition(SLR) is a complex gesture recognition problem because of the quick and highly coarticulated motion involved in gestures. This research work focuses on Fingerspelling recognition task, which constitutes 35% of the American Sign Language (ASL). Fingerspelling identifies the word letter by letter. Fingerspelling is used for signing the words which do not have designated ASL signs such as technical terms, content words and proper nouns. In our proposed work for ASL Fingerspelling recognition, we consider ChicagoFSWild dataset which consists of occlusions and images captured in varying illuminations, lighting conditions (in the wild environments). The optical flow is obtained from Lucas-Kanade algorithm, prior is generated, images are resized and cropped with face-roi technique to get the region of interest (ROI). The visual attention mechanism attends to the ROI iteratively. ResNet, pretrained on Imagenet is used for the extraction of spatial features. The Bi-LSTM network with Connectionist Temporal Classification (CTC) is used to predict the sign. It provides the accuracy of 57% on ChicagoFSWild dataset for Fingerspelling recognition task.",Sign Language and Fingerspelling Recognition "Natural Language Processing (NLP) is a vital field of artificial intelligence that automates the study of human language. However for Malay manuscripts (MM) written in old jawi, its exposure on such field is limited. Besides, most of the studies related to MM studies and NLP were focused on rule based or rule based machine transliteration (RBMT). Hence the objective of this study is to propose a statistical approach for old jawi to modern jawi transliteration of Malay manuscript contents using Phrase Based Statistical Machine Translation (PBSMT) as its model. In order to achieve such purpose, quality score of Word Error Rate (WER) was computed on the transliteration output. Besides, the issues formerly encountered by rule based approach such as vocals limitation and homograph, reduplication, letters error and combination of multiple words were observed in the implementation. Moreover, this paper utilized exploratory approach as its research strategy and mixed method as its research method. The data for the analysis were extracted from a MM titled Bid¨¡yat al-Mubtad¨© bi-F¨¡lillah al-Muhd¨©. Quality score of WER was computed for the evaluation of SMT output. Afterwards, related issues were identified and assessed. The research found that quality score of PBSMT for old jawi to modern jawi transliteration was high in terms of WER, however the issues of rule based were generally addressed by PBSMT except homograph. The research is however limited to the approach of SMT that solely focused on PBSMT as its model. Moreover, the corpus size was limited to one manuscript while SMT relies on corpus size. Nevertheless the research contributes to the wider coverage on Malay language as one of the under resource languages in NLP, in form of old and modern jawi. Besides, to the best of the researcher¡¯s knowledge, it is also the first to apply SMT (PBSMT) approach on old jawi transliteration. Most importantly, the study is to contribute on MM¡¯s.",Rule-based MT (RBMT) "This paper presents a comparison of post-editing (PE) changes performed on English-to-Finnish neural (NMT), rule-based (RBMT) and statistical machine translation (SMT) output, combining a product-based and a process-based approach. A total of 33 translation students acted as participants in a PE experiment providing both post-edited texts and edit process data. Our product-based analysis of the post-edited texts shows statistically significant differences in the distribution of edit types between machine translation systems. Deletions were the most common edit type for the RBMT, insertions for the SMT, and word form changes as well as word substitutions for the NMT system. The results also show significant differences in the correctness and necessity of the edits, particularly in the form of a large number of unnecessary edits in the RBMT output. Problems related to certain verb forms and ambiguity were observed for NMT and SMT, while RBMT was more likely to handle them correctly. Process-based comparison of effort indicators shows a slight increase of keystrokes per word for NMT output, and a slight decrease in average pause length for NMT compared to RBMT and SMT in specific text blocks. A statistically significant difference was observed in the number of visits per sub-segment, which is lower for NMT than for RBMT and SMT. The results suggest that although different types of edits were needed to outputs from NMT, RBMT and SMT systems, the difference is not necessarily reflected in process-based effort indicators.",Rule-based MT (RBMT) "Machine translation has witnessed great development in the recent decades and we have entered the era of neural machine translation (NMT). A review of MT is necessary for a better understanding of the relationship between MT and human translators and translation teaching in this era when MT has flourished. This paper first briefs the machine translation (MT) development in the past decades, focusing on the features, application, and drawbacks of each main paradigm of rule-based machine translation (RBMT), corpus-based translation (CBMT), and long-short term memory (LSTM), a main paradigm of NMT. It continues with a discussion of what MT means to human translators and translation teaching in universities. It concludes that MT should not and could not replace human translators which will always be vital in some fields and aspects; only a good integration between the two can ensure satisfying output with post-editing by human translators to meet the increasingly demanding market. This signifies that translation teaching in universities should embrace MT knowledge. ",Rule-based MT (RBMT) "This article re-looks into machine translation (MT) errors and proposes a function-oriented MT post-editing (MTPE) typology in a new technological context. Driven by the technological advances of the neural machine translation (NMT) system over the past several years, the author thinks that we should re-examine MT errors created by NMT systems, and understand whether the NMT system can resolve the issues the rule-based MT (RBMT) and statistical MT (SMT) systems have encountered. A mixed-methods approach is used to complete this study, and technical texts, journalistic texts and web-based company texts are chosen as analytical materials. The three-phased procedure consists of (1) cross-checking the differences between source texts (STs), MT outputs and corresponding human translations (HTs) to identify MT errors, (2) proposing a three-tier MTPE typology to supplement the current binary MTPE typology and (3) exploring empirical and theoretical implications of this research. The findings differ from previous MTPE studies in three aspects: (1) amending linguistic, pragmatic and affective MT errors with the strategies of “accurateenough editing,” “clear-enough editing” and “attractive-enough editing,” not the strategies of light editing and full editing; (2) replacing the existing editor-driven MTPE typology with a functiondriven MTPE typology; and (3) using a progressive, flexible MTPE typology to meet the textual functions of different types of MT texts. Overall, this article re-examines MT errors and MTPE strategies, and raises an alternative MTPE typology from the perspective of textual functions in the framework of the NMT scenario. It expects to add some novel insights to contemporary MT studies.",Rule-based MT (RBMT) "To build an Indonesian Machine Translation (MT), it is not only needed a related syntactic analysis to the correct spelling of words but also needed related contextual analysis, consist type and function of word, morphology, and semantic. The dictionaries usage is needed to translates Indonesian basic words and to captures good word translations through the semantic and context of words in a sentence or document. This study purposes to extracts Indonesian and Tolaki words for building a good MT by comparing the development of Indonesian MT which focuses on deep cases of morphology and syntactic. We developed morphtool to captures the morphological elements of Indonesian and Tolaki words. For working in deep syntactic case, we build a rule to captures the function and type of word that can affect the word itself translation in the sentence. We combine supervised and unsupervised techniques to work on the text extraction in the words, sentences, and documents through the morphonemic rules of Indonesian- Tolaki syntaxis manner. Then, we use hybrid MT, combining Statistical MT (SMT) and Rule Based MT (RBMT), for sentence translation process. The hybrid MT evaluation process from the Indonesian-Tolaki to English translation performance test shows the accuracy result is 0.74. Meanwhile, the performance test of the English to Indonesian-Tolaki translation shows the accuracy result is 0.71. These results indicate that the proposed MT method can work better than the SMT and RBMT methods with an average accuracy of around 70%.",Rule-based MT (RBMT) "In this paper we describe a rule-based, bi-directional machine translation system for the Finnish—English language pair. The baseline system was based on the existing data of FinnWordNet, omorfi and apertium-eng. We have built the disambiguation, lexical selection and translation rules by hand. The dictionaries and rules have been developed based on the shared task data. We describe in this article the use of the shared task data as a kind of a test-driven development workflow in RBMT development and show that it suits perfectly to a modern software engineering continuous integration workflow of RBMT and yields big increases to BLEU scores with minimal effort.",Rule-based MT (RBMT) "Corpus-based approaches to machine translation (MT) have difficulties when the amount of parallel corpora to use for training is scarce, especially if the languages involved in the translation are highly inflected. This problem can be addressed from different perspectives, including data augmentation, transfer learning, and the use of additional resources, such as those used in rule-based MT. This paper focuses on the hybridisation of rule-based MT and neural MT for the Breton–French under-resourced language pair in an attempt to study to what extent the rule-based MT resources help improve the translation quality of the neural MT system for this particular under-resourced language pair. We combine both translation approaches in a multi-source neural MT architecture and find out that, even though the rule-based system has a low performance according to automatic evaluation metrics, using it leads to improved translation quality.",Rule-based MT (RBMT) "Machine translation is to translate one language into another language, which has undergone a great evolution. The model of machine translation has been continuously improved, aiming to make the translation effect closer to the artificial translation. This article briefly summarizes the development history of machine translation, and introduces the main models of each stage of development. The initial machine translation mode is the Rule Based Machine Translation (RBMT) and Statistical Machine Translation (SMT). Recent mainstream translation approach enables Neural Machine Translation (NMT). It includes the input and the output, attention mechanism, and BLEU evaluation method. On this basis, there are also many expansion and innovation models, such as GPKD and other models to improve the evaluation effect. In general, machine translation can replace a part of human translation. However, it cannot completely replace human beings, because of the different human thinking and machine logic. People and machines have to cooperate with each other to improve the common efficiency.",Rule-based MT (RBMT) "This article aimed to address the problems of word order confusion, context dependency, and ambiguity in traditional machine translation (MT) methods for verb recognition. By applying advanced intelligent algorithms of artificial intelligence, verb recognition can be better processed and the quality and accuracy of MT can be improved. Based on Neural machine translation (NMT), basic attention mechanisms, historical attention information, dynamically obtain information related to the generated words, and constraint mechanisms were introduced to embed semantic information, represent polysemy, and annotate semantic roles of verbs. This article used the Workshop on machine translation (WMT), British National Corpus (BNC), Gutenberg, Reuters Corpus, OpenSubtitles corpus, and enhanced the data in the corpus. The improved NMT model was compared with traditional NMT models, Rule Based machine translation (RBMT), and Statistical machine translation (SMT). The experimental results showed that the average verb semantic matching degree of the improved NMT model in 5 corpora was 0.85, and the average Bilingual Evaluation Understudy (BLEU) score in 5 corpora was 0.90. The improved NMT model in this article can effectively improve the accuracy of verb recognition in MT, providing new methods for verb recognition in MT.",Rule-based MT (RBMT) "Machine translation (MT) systems translate text between different languages by automatically learning in-depth knowledge of bilingual lexicons, grammar and semantics from the training examples. Although neural machine translation (NMT) has led the field of MT, we have a poor understanding on how and why it works. In this paper, we bridge the gap by assessing the bilingual knowledge learned by NMT models with phrase table -- an interpretable table of bilingual lexicons. We extract the phrase table from the training examples that an NMT model correctly predicts. Extensive experiments on widely-used datasets show that the phrase table is reasonable and consistent against language pairs and random seeds. Equipped with the interpretable phrase table, we find that NMT models learn patterns from simple to complex and distill essential bilingual knowledge from the training examples. We also revisit some advances that potentially affect the learning of bilingual knowledge (e.g., back-translation), and report some interesting findings. We believe this work opens a new angle to interpret NMT with statistic models, and provides empirical supports for recent advances in improving NMT models.",Rule-based MT (RBMT) "The landscape of transformer model inference is increasingly diverse in model size, model characteristics, latency and throughput requirements, hardware requirements, etc. With such diversity, designing a versatile inference system is challenging. DeepSpeed-Inference addresses these challenges by (1) a multi-GPU inference solution to minimize latency while maximizing throughput for both dense and sparse transformers when the model fits in aggregate GPU memory, and (2) a heterogeneous inference solution that leverages CPU/NVMe/GPU memory to enable high-throughput inference for models larger than aggregate GPU memory. DeepSpeed-Inference reduces latency by 6.4× and increases throughput by 1.5 ×over the state-of-the-art. It enables trillion parameter scale inference under real-time latency constraints by leveraging hundreds of GPUs, an unprecedented scale for inference. It can inference 25 ×larger models than with GPU-only solutions, while delivering a high throughput of 84 TFLOPS (over 50% of A6000 peak).",Transformer Models "The transformer is the most critical algorithm innovation of the Nature Language Processing (NLP) field in recent years. Unlike the Recurrent Neural Network (RNN) models, transformers are able to process on dimensions of sequence lengths in parallel, therefore leads to better accuracy on long sequences. However, efficient deployments of them for online services in data centers equipped with GPUs are not easy. First, more computation introduced by transformer structures makes it more challenging to meet the latency and throughput constraints of serving. Second, NLP tasks take in sentences of variable length. The variability of input dimensions brings a severe problem to efficient memory management and serving optimization. To solve the above challenges, this paper designed a transformer serving system called TurboTransformers, which consists of a computing runtime and a serving framework. Three innovative features make it stand out from other similar works. An efficient parallel algorithm is proposed for GPU-based batch reduction operations, like Softmax and LayerNorm, which are major hot spots besides BLAS routines. A memory allocation algorithm, which better balances the memory footprint and allocation/free efficiency, is designed for variable-length input situations. A serving framework equipped with a new batch scheduler using dynamic programming achieves the optimal throughput on variable-length requests. The system can achieve the state-of-the-art transformer model serving performance on GPU platforms and can be seamlessly integrated into your PyTorch code with a few lines of code.",Transformer Models "Transformer is the state-of-the-art model in recent machine translation evaluations. Two strands of research are promising to improve models of this kind: the first uses wide networks (a.k.a. Transformer-Big) and has been the de facto standard for development of the Transformer system, and the other uses deeper language representation but faces the difficulty arising from learning deep networks. Here, we continue the line of research on the latter. We claim that a truly deep Transformer model can surpass the Transformer-Big counterpart by 1) proper use of layer normalization and 2) a novel way of passing the combination of previous layers to the next. On WMT’16 English-German and NIST OpenMT’12 Chinese-English tasks, our deep system (30/25-layer encoder) outperforms the shallow Transformer-Big/Base baseline (6-layer encoder) by 0.4-2.4 BLEU points. As another bonus, the deep model is 1.6X smaller in size and 3X faster in training than Transformer-Big.",Transformer Models "Transformer architectures are highly expressive because they use self-attention mechanisms to encode long-range dependencies in the input sequences. In this paper, we present a literature review on Transformer-based (TB) models, providing a detailed overview of each model in comparison to the Transformer’s standard architecture. This survey focuses on TB models used in the field of Natural Language Processing (NLP) for textual-based tasks. We begin with an overview of the fundamental concepts at the heart of the success of these models. Then, we classify them based on their architecture and training mode. We compare the advantages and disadvantages of popular techniques in terms of architectural design and experimental value. Finally, we discuss open research, directions, and potential future work to help solve current TB application challenges in NLP.",Transformer Models "The question answering system is frequently applied in the area of natural language processing (NLP) because of the wide variety of applications. It consists of answering questions using natural language. The problem is, in general, solved by employing a dataset that consists of an input text, a query, and the text segment or span from the input text that provides the question’s answer. The ability to make human-level predictions from data has improved significantly thanks to deep learning models, particularly the Transformer architecture, which has been state-of-the-art in text-based models in recent years. This paper reviews studies related to the use of transformer models in the implementation of question-answering (QA) systems. The paper’s first focus is on the attention and transformer models. A brief description of the architectures is presented by classifying them into models based on encoders, decoders, and on both Encoder-Decoder. Following that, we examine the most recent research trends in textual QA datasets by highlighting the architecture of QA systems and categorizing them according to various criteria. We survey also a significant set of evaluation metrics that have been developed in order to evaluate the models’ performance. Finally, we highlight solutions built to simplify the implementation of Transformer models.",Transformer Models "Transformer-based sequence-to-sequence architectures, while achieving state-of-the-art results on a large number of NLP tasks, can still suffer from overfitting during training. In practice, this is usually countered either by applying regularization methods (e.g. dropout, L2-regularization) or by providing huge amounts of training data. Additionally, Transformer and other architectures are known to struggle when generating very long sequences. For example, in machine translation, the neural-based systems perform worse on very long sequences when compared to the preceding phrase-based translation approaches (Koehn and Knowles, 2017). We present results which suggest that the issue might also be in the mismatch between the length distributions of the training and validation data combined with the aforementioned tendency of the neural networks to overfit to the training data. We demonstrate on a simple string editing tasks and a machine translation task that the Transformer model performance drops significantly when facing sequences of length diverging from the length distribution in the training data. Additionally, we show that the observed drop in performance is due to the hypothesis length corresponding to the lengths seen by the model during training rather than the length of the input sequence.",Transformer Models "Transformer-based models are the state-of-the-art for Natural Language Understanding (NLU) applications. Models are getting bigger and better on various tasks. However, Transformer models remain computationally challenging since they are not efficient at inference-time compared to traditional approaches. In this paper, we present FastFormers, a set of recipes to achieve efficient inference-time performance for Transformer-based models on various NLU tasks. We show how carefully utilizing knowledge distillation, structured pruning and numerical optimization can lead to drastic improvements on inference efficiency. We provide effective recipes that can guide practitioners to choose the best settings for various NLU tasks and pretrained models. Applying the proposed recipes to the SuperGLUE benchmark, we achieve from 9.8x up to 233.9x speed-up compared to out-of-the-box models on CPU. On GPU, we also achieve up to 12.4x speed-up with the presented methods. We show that FastFormers can drastically reduce cost of serving 100 million requests from 4,223 USD to just 18 USD on an Azure F16s_v2 instance. This translates to a sustainable runtime by reducing energy consumption 6.9x - 125.8x according to the metrics used in the SustaiNLP 2020 shared task.",Transformer Models "Transformer-based deep NLP models are trained using hundreds of millions of parameters, limiting their applicability in computationally constrained environments. In this paper, we study the cause of these limitations by defining a notion of Redundancy, which we categorize into two classes: General Redundancy and Task-specific Redundancy. We dissect two popular pretrained models, BERT and XLNet, studying how much redundancy they exhibit at a representation-level and at a more fine-grained neuron-level. Our analysis reveals interesting insights, such as: i) 85% of the neurons across the network are redundant and ii) at least 92% of them can be removed when optimizing towards a downstream task. Based on our analysis, we present an efficient feature-based transfer learning procedure, which maintains 97% performance while using at-most 10% of the original neurons.",Transformer Models "In this paper, we present a new approach to time series forecasting. Time series data are prevalent in many scientific and engineering disciplines. Time series forecasting is a crucial task in modeling time series data, and is an important area of machine learning. In this work we developed a novel method that employs Transformer-based machine learning models to forecast time series data. This approach works by leveraging self-attention mechanisms to learn complex patterns and dynamics from time series data. Moreover, it is a generic framework and can be applied to univariate and multivariate time series data, as well as time series embeddings. Using influenza-like illness (ILI) forecasting as a case study, we show that the forecasting results produced by our approach are favorably comparable to the state-of-the-art.",Transformer Models "We introduce DropHead, a structured dropout method specifically designed for regularizing the multi-head attention mechanism which is a key component of transformer. In contrast to the conventional dropout mechanism which randomly drops units or connections, DropHead drops entire attention heads during training to prevent the multi-head attention model from being dominated by a small portion of attention heads. It can help reduce the risk of overfitting and allow the models to better benefit from the multi-head attention. Given the interaction between multi-headedness and training dynamics, we further propose a novel dropout rate scheduler to adjust the dropout rate of DropHead throughout training, which results in a better regularization effect. Experimental results demonstrate that our proposed approach can improve transformer models by 0.9 BLEU score on WMT14 En-De translation task and around 1.0 accuracy for various text classification tasks.",Transformer Models "Generalization is a key element behind a strong performing neural network: models that generalize perform well even with novel inputs. We investigated a specific form of generalization known as systematic compositionality, the algebraic capacity to understand and produce a potentially infinite number of novel combinations from known components [Chomsky and Lightfoot, 2002, Montague, 1970]. The principle of systematic compositionality is especially adequate in explaining efficient language learning in humans. For example, once a child learns the meaning of the word “jump” and the meaning of the word “twice,” he or she can understand the utterance “jump twice.” However, it is not clear whether neural networks, particularly RNNs, compose systematically as humans do. Identifying systematic compositionality in RNNs, or lack thereof, can give insight to their need for large sets of training examples.",Recurrent Neural Networks (RNNs) "Recurrent neural networks (RNNs) have demonstrated very impressive performances in learning sequential data, such as in language translation and music generation. Here, we show that the intrinsic computational aspect of RNNs is very similar to that of classical stress update algorithms in modeling history-dependent materials with an emphasis on viscoelasticity. Several numerical examples are designed, including 1-dimensional and 3-dimensional cases, which testify the ability of RNN model to compute the viscoelastic response when predicting on unseen test data. Additionally, it is found that the RNN model trained only on linear and step strain inputs can perform very well on prediction of completely different quadratic strain inputs, demonstrating certain level of generalization ability in extrapolation. Moreover, it is observed that the extrapolation ability depends on the types of strain inputs. The performance is better for continuous strain inputs than that for jump strain inputs. The differences in the generalization ability of RNN models in viscoelasticity and other history-dependent materials are discussed. It suggests that RNN data-driven modeling can be an alternative to the conventional viscoelasticity models.",Recurrent Neural Networks (RNNs) "Recurrent neural networks (RNNs) have been widely adopted in research areas concerned with sequential data, such as text, audio, and video. However, RNNs consisting of sigma cells or tanh cells are unable to learn the relevant information of input data when the input gap is large. By introducing gate functions into the cell structure, the long short-term memory (LSTM) could handle the problem of long-term dependencies well. Since its introduction, almost all the exciting results based on RNNs have been achieved by the LSTM. The LSTM has become the focus of deep learning. We review the LSTM cell and its variants to explore the learning capacity of the LSTM cell. Furthermore, the LSTM networks are divided into two broad categories: LSTM-dominated networks and integrated LSTM networks. In addition, their various applications are discussed. Finally, future research directions are presented for LSTM networks.",Recurrent Neural Networks (RNNs) "Recurrent neural networks (RNNs) are widely used throughout neuroscience as models of local neural activity. Many properties of single RNNs are well characterized theoretically, but experimental neuroscience has moved in the direction of studying multiple interacting areas, and RNN theory needs to be likewise extended. We take a constructive approach towards this problem, leveraging tools from nonlinear control theory and machine learning to characterize when combinations of stable RNNs will themselves be stable. Importantly, we derive conditions which allow for massive feedback connections between interacting RNNs. We parameterize these conditions for easy optimization using gradient-based techniques, and show that stability-constrained""networks of networks""can perform well on challenging sequential-processing benchmark tasks. Altogether, our results provide a principled approach towards understanding distributed, modular function in the brain.",Recurrent Neural Networks (RNNs) "Back-propagation through time (BPTT) has been widely used for training Recurrent Neural Networks (RNNs). BPTT updates RNN parameters on an instance by back-propagating the error in time over the entire sequence length, and as a result, leads to poor trainability due to the well-known gradient explosion/decay phenomena. While a number of prior works have proposed to mitigate vanishing/explosion effect through careful RNN architecture design, these RNN variants still train with BPTT. We propose a novel forward-propagation algorithm, FPTT , where at each time, for an instance, we update RNN parameters by optimizing an instantaneous risk function. Our proposed risk is a regularization penalty at time t that evolves dynamically based on previously observed losses, and allows for RNN parameter updates to converge to a stationary solution of the empirical RNN objective. We consider both sequence-to-sequence as well as terminal loss problems. Empirically FPTT outperforms BPTT on a number of well-known benchmark tasks, thus enabling architectures like LSTMs to solve long range dependencies problems.",Recurrent Neural Networks (RNNs) "This paper addresses the synchronization of multiple fractional-order recurrent neural networks (RNNs) with time-varying delays under event-triggered communications. Based on the assumption of the existence of strong connectivity or a spanning tree in the communication digraph, two sets of sufficient conditions are derived for achieving event-triggered synchronization. Moreover, an additional condition is derived to preclude Zeno behaviors. As a generalization of existing results, the criteria herein are also applicable to the event-triggered synchronization of multiple integer-order RNNs with or without delays. Two numerical examples are elaborated to illustrate the new results.",Recurrent Neural Networks (RNNs) "In this paper, we address the Clifford-valued distributed optimization subject to linear equality and inequality constraints. The objective function of the optimization problems is composed of the sum of convex functions defined in the Clifford domain. Based on the generalized Clifford gradient, a system of multiple Clifford-valued recurrent neural networks (RNNs) is proposed for solving the distributed optimization problems. Each Clifford-valued RNN minimizes a local objective function individually, with local interactions with others. The convergence of the neural system is rigorously proved based on the Lyapunov theory. Two illustrative examples are delineated to demonstrate the viability of the results in this article.",Recurrent Neural Networks (RNNs) "Variants of deep networks have been widely used for hyperspectral image (HSI)-classification tasks. Among them, in recent years, recurrent neural networks (RNNs) have attracted considerable attention in the remote sensing community. However, complex geometries cannot be learned easily by the traditional recurrent units [e.g., long short-term memory (LSTM) and gated recurrent unit (GRU)]. In this article, we propose a geometry-aware deep recurrent neural network (Geo-DRNN) for HSI classification. We build this network upon two modules: a U-shaped network (U-Net) and RNNs. We first input the original HSI patches to the U-Net, which can be trained with very few images and obtain a preliminary classification result. We then add RNNs on the top of the U-Net so as to mimic the human brain to refine continuously the output-classification map. However, instead of using the traditional dot product in each gate of the RNNs, we introduce a Net-Gated GRU that increases the nonlinear representation power. Finally, we use a pretrained ResNet as a regularizer to improve further the ability of the proposed network to describe complex geometries. To this end, we construct a geometry-aware ResNet loss, which leverages the pretrained ResNet’s knowledge about the different structures in the real world. Our experimental results on real HSIs and road topology images demonstrate that our approach outperforms the state-of-the-art classification methods and can learn complex geometries.",Recurrent Neural Networks (RNNs) "This paper presents a sentiment analysis solution on tweets using Recurrent Neural Networks (RNNs). The method is can classifying tweets with an 80.74% accuracy rate, considering a binary task, after experimenting with 20 different design approaches. The solution integrates an attention mechanism aiming to enhance the network, with a two-way localization system: at memory cell level and at network level. We present an in-depth literature review for Twitter sentiment analysis and the building blocks that grounded the design decisions of our solution, employed as a core classification component within a sentiment indicator of the SynergyCrowds platform.",Recurrent Neural Networks (RNNs) "State-of-the-art solutions in the areas of ""Language Modelling & Generating Text"", ""Speech Recognition"", ""Generating Image Descriptions"" or ""Video Tagging"" have been using Recurrent Neural Networks as the foundation for their approaches. Understanding the underlying concepts is therefore of tremendous importance if we want to keep up with recent or upcoming publications in those areas. In this work we give a short overview over some of the most important concepts in the realm of Recurrent Neural Networks which enables readers to easily understand the fundamentals such as but not limited to ""Backpropagation through Time"" or ""Long Short-Term Memory Units"" as well as some of the more recent advances like the ""Attention Mechanism"" or ""Pointer Networks"". We also give recommendations for further reading regarding more complex topics where it is necessary.",Recurrent Neural Networks (RNNs) "Large Language Models (LLMs) have emerged as a groundbreaking technology with their unparalleled text generation capabilities across various applications. Nevertheless, concerns persist regarding the accuracy and appropriateness of their generated content. A contemporary methodology, self-correction, has been proposed as a remedy to these issues. Building upon this premise, this paper critically examines the role and efficacy of self-correction within LLMs, shedding light on its true potential and limitations. Central to our investigation is the notion of intrinsic self-correction, whereby an LLM attempts to correct its initial responses based solely on its inherent capabilities, without the crutch of external feedback. In the context of reasoning, our research indicates that LLMs struggle to self-correct their responses without external feedback, and at times, their performance even degrades after self-correction. Drawing from these insights, we offer suggestions for future research and practical applications in this field.",Large Language Models (LLMs) "Large Language Models (LLMs) are a type of artificial intelligence that has been revolutionizing various fields, including biomedicine. They have the capability to process and analyze large amounts of data, understand natural language, and generate new content, making them highly desirable in many biomedical applications and beyond. In this workshop, we aim to introduce the attendees to an in-depth understanding of the rise of LLMs in biomedicine, and how they are being used to drive innovation and improve outcomes in the field, along with associated challenges and pitfalls.",Large Language Models (LLMs) "Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH.",Large Language Models (LLMs) "Large language models (LLMs) can perform complex reasoning by generating intermediate reasoning steps. Providing these steps for prompting demonstrations is called chain-of-thought (CoT) prompting. CoT prompting has two major paradigms. One leverages a simple prompt like""Let's think step by step""to facilitate step-by-step thinking before answering a question. The other uses a few manual demonstrations one by one, each composed of a question and a reasoning chain that leads to an answer. The superior performance of the second paradigm hinges on the hand-crafting of task-specific demonstrations one by one. We show that such manual efforts may be eliminated by leveraging LLMs with the""Let's think step by step""prompt to generate reasoning chains for demonstrations one by one, i.e., let's think not just step by step, but also one by one. However, these generated chains often come with mistakes. To mitigate the effect of such mistakes, we find that diversity matters for automatically constructing demonstrations. We propose an automatic CoT prompting method: Auto-CoT. It samples questions with diversity and generates reasoning chains to construct demonstrations. On ten public benchmark reasoning tasks with GPT-3, Auto-CoT consistently matches or exceeds the performance of the CoT paradigm that requires manual designs of demonstrations. Code is available at https://github.com/amazon-research/auto-cot",Large Language Models (LLMs) "Since the recent prosperity of Large Language Models (LLMs), there have been interleaved discussions regarding how to reduce hallucinations from LLM responses, how to increase the factuality of LLMs, and whether Knowledge Graphs (KGs), which store the world knowledge in a symbolic form, will be replaced with LLMs. In this paper, we try to answer these questions from a new angle: How knowledgeable are LLMs? To answer this question, we constructed Head-to-Tail, a benchmark that consists of 18K question-answer (QA) pairs regarding head, torso, and tail facts in terms of popularity. We designed an automated evaluation method and a set of metrics that closely approximate the knowledge an LLM confidently internalizes. Through a comprehensive evaluation of 16 publicly available LLMs, we show that existing LLMs are still far from being perfect in terms of their grasp of factual knowledge, especially for facts of torso-to-tail entities.",Large Language Models (LLMs) "Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate facts and make non-factual statements which can undermine trust in their output. Existing fact-checking approaches either require access to the output probability distribution (which may not be available for systems such as ChatGPT) or external databases that are interfaced via separate, often complex, modules. In this work, we propose""SelfCheckGPT"", a simple sampling-based approach that can be used to fact-check the responses of black-box models in a zero-resource fashion, i.e. without an external database. SelfCheckGPT leverages the simple idea that if an LLM has knowledge of a given concept, sampled responses are likely to be similar and contain consistent facts. However, for hallucinated facts, stochastically sampled responses are likely to diverge and contradict one another. We investigate this approach by using GPT-3 to generate passages about individuals from the WikiBio dataset, and manually annotate the factuality of the generated passages. We demonstrate that SelfCheckGPT can: i) detect non-factual and factual sentences; and ii) rank passages in terms of factuality. We compare our approach to several baselines and show that our approach has considerably higher AUC-PR scores in sentence-level hallucination detection and higher correlation scores in passage-level factuality assessment compared to grey-box methods.",Large Language Models (LLMs) "Large Language Models (LLMs) have demonstrated remarkable zero-shot generalization across various language-related tasks, including search engines. However, existing work utilizes the generative ability of LLMs for Information Retrieval (IR) rather than direct passage ranking. The discrepancy between the pre-training objectives of LLMs and the ranking objective poses another challenge. In this paper, we first investigate generative LLMs such as ChatGPT and GPT-4 for relevance ranking in IR. Surprisingly, our experiments reveal that properly instructed LLMs can deliver competitive, even superior results to state-of-the-art supervised methods on popular IR benchmarks. Furthermore, to address concerns about data contamination of LLMs, we collect a new test set called NovelEval, based on the latest knowledge and aiming to verify the model's ability to rank unknown knowledge. Finally, to improve efficiency in real-world applications, we delve into the potential for distilling the ranking capabilities of ChatGPT into small specialized models using a permutation distillation scheme. Our evaluation results turn out that a distilled 440M model outperforms a 3B supervised model on the BEIR benchmark.",Large Language Models (LLMs) "Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+.",Large Language Models (LLMs) "The performance of large language models (LLMs) on existing reasoning benchmarks has significantly improved over the past years. In response, we present JEEBench, a considerably more challenging benchmark dataset for evaluating the problem solving abilities of LLMs. We curate 515 challenging pre-engineering mathematics, physics and chemistry problems from the highly competitive IIT JEE-Advanced exam. Long-horizon reasoning on top of deep in-domain knowledge is essential for solving problems in this benchmark. Our evaluation on various open-source and proprietary models reveals that the highest performance, even after using techniques like self-consistency, self-refinement and chain-of-thought prompting, is less than 40%. The typical failure modes of GPT-4, the best model, are errors in algebraic manipulation, difficulty in grounding abstract concepts into mathematical equations accurately and failure in retrieving relevant domain-specific concepts. We also observe that by mere prompting, GPT-4 is unable to assess risk introduced by negative marking for incorrect answers. For this, we develop a post-hoc confidence-thresholding method over self-consistency, which enables effective response selection. We hope that our challenging benchmark will guide future re-search in problem-solving using LLMs.",Large Language Models (LLMs) "Large language models (LLMs) have emerged as a widely-used tool for information seeking, but their generated outputs are prone to hallucination. In this work, our aim is to allow LLMs to generate text with citations, improving their factual correctness and verifiability. Existing work mainly relies on commercial search engines and human evaluation, making it challenging to reproduce and compare different modeling approaches. We propose ALCE, the first benchmark for Automatic LLMs' Citation Evaluation. ALCE collects a diverse set of questions and retrieval corpora and requires building end-to-end systems to retrieve supporting evidence and generate answers with citations. We develop automatic metrics along three dimensions -- fluency, correctness, and citation quality -- and demonstrate their strong correlation with human judgements. Our experiments with state-of-the-art LLMs and novel prompting strategies show that current systems have considerable room for improvement -- For example, on the ELI5 dataset, even the best models lack complete citation support 50% of the time. Our analyses further highlight promising future directions, including developing better retrievers, advancing long-context LLMs, and improving the ability to synthesize information from multiple sources.",Large Language Models (LLMs) "Bilingual Lexicon Induction (BLI) aims at inducing word translations in two distinct languages. The generated bilingual dictionaries via BLI are essential for cross-lingual NLP applications. Most existing methods assume that a mapping matrix can be learned to project the embedding of a word in the source language to that of a word in the target language which shares the same meaning. However, a single matrix may not be able to provide sufficiently large parameter space and to tailor to the semantics of words across different domains and topics due to the complicated nature of linguistic regularities. In this paper, we propose a Soft Piecewise Mapping Model (SPMM). It generates word alignments in two languages by learning multiple mapping matrices with orthogonal constraint. Each matrix encodes the embedding translation knowledge over a distribution of latent topics in the embedding spaces. Such learning problem can be formulated as an extended version of the Wahba’s problem, with a closed-form solution derived. To address the limited size of training data for low-resourced languages and emerging domains, an iterative boosting method based on SPMM is used to augment training dictionaries. Experiments conducted on both general and domain-specific corpora show that SPMM is effective and outperforms previous methods.",Bilingual Lexicon Induction (BLI) "Much recent work in bilingual lexicon induction (BLI) views word embeddings as vectors in Euclidean space. As such, BLI is typically solved by finding a linear transformation that maps embeddings to a common space. Alternatively, word embeddings may be understood as nodes in a weighted graph. This framing allows us to examine a node's graph neighborhood without assuming a linear transform, and exploits new techniques from the graph matching optimization literature. These contrasting approaches have not been compared in BLI so far. In this work, we study the behavior of Euclidean versus graph-based approaches to BLI under differing data conditions and show that they complement each other when combined.",Bilingual Lexicon Induction (BLI) "Most Bilingual Lexicon Induction (BLI) methods retrieve word translation pairs by finding the closest target word for a given source word based on cross-lingual word embeddings (WEs). However, we find that solely retrieving translation from the source-to-target perspective leads to some false positive translation pairs, which significantly harm the precision of BLI. To address this problem, we propose a novel and effective method to improve translation pair retrieval in cross-lingual WEs. Specifically, we consider both source-side and target-side perspectives throughout the retrieval process to alleviate false positive word pairings that emanate from a single perspective. On a benchmark dataset of BLI, our proposed method achieves competitive performance compared to existing state-of-the-art (SOTA) methods. It demonstrates effectiveness and robustness across six experimental languages, including similar language pairs and distant language pairs, under both supervised and unsupervised settings.",Bilingual Lexicon Induction (BLI) "Bilingual word lexicons map words in one language to their synonyms in another language. Numerous papers have explored bilingual lexicon induction (BLI) in high-resource scenarios, framing a typical pipeline that consists of two steps: (i) unsupervised bitext mining and (ii) unsupervised word alignment. At the core of those steps are pre-trained large language models (LLMs).In this paper we present the analysis of the BLI pipeline for German and two of its dialects, Bavarian and Alemannic. This setup poses a number of unique challenges, attributed to the scarceness of resources, relatedness of the languages and lack of standardization in the orthography of dialects. We analyze the BLI outputs with respect to word frequency and the pairwise edit distance. Finally, we release an evaluation dataset consisting of manual annotations for 1K bilingual word pairs labeled according to their semantic similarity.",Bilingual Lexicon Induction (BLI) "Bilingual Lexicon Induction (BLI) is a core task in multilingual NLP that still, to a large extent, relies on calculating cross-lingual word representations. Inspired by the global paradigm shift in NLP towards Large Language Models (LLMs), we examine the potential of the latest generation of LLMs for the development of bilingual lexicons. We ask the following research question: Is it possible to prompt and fine-tune multilingual LLMs (mLLMs) for BLI, and how does this approach compare against and complement current BLI approaches? To this end, we systematically study 1) zero-shot prompting for unsupervised BLI and 2) few-shot in-context prompting with a set of seed translation pairs, both without any LLM fine-tuning, as well as 3) standard BLI-oriented fine-tuning of smaller LLMs. We experiment with 18 open-source text-to-text mLLMs of different sizes (from 0.3B to 13B parameters) on two standard BLI benchmarks covering a range of typologically diverse languages. Our work is the first to demonstrate strong BLI capabilities of text-to-text mLLMs. The results reveal that few-shot prompting with in-context examples from nearest neighbours achieves the best performance, establishing new state-of-the-art BLI scores for many language pairs. We also conduct a series of in-depth analyses and ablation studies, providing more insights on BLI with (m)LLMs, also along with their limitations.",Bilingual Lexicon Induction (BLI) "The word embedding models such as Word2vec and FastText simultaneously learn dual representations of input vectors and output vectors. In contrast, almost all existing unsupervised bilingual lexicon induction (UBLI) methods use only input vectors without utilizing output vectors. In this article, we propose a novel approach to making full use of both input and output vectors for more robust and strong UBLI. We discover the Common Difference Property that one orthogonal transformation can connect not only the input vectors of two languages but also the output vectors. Therefore, we can learn just one transformation to induce two different dictionaries from the input and output vectors, respectively. Between these two quite different dictionaries, a more accurate lexicon with less noise can be induced by taking the intersection of them in UBLI procedure. Extensive experiments show that our method achieves much more robust and strong results than state-of-the-art methods in distant language pairs, while reserving comparable performances in similar language pairs.",Bilingual Lexicon Induction (BLI) "Contextualized word embeddings have emerged as the most important tool for performing NLP tasks in a large variety of languages. In order to improve the cross- lingual representation and transfer learning quality, contextualized embedding alignment techniques, such as mapping and model fine-tuning, are employed. Existing techniques however are time-, data- and computational resource-intensive. In this paper we analyze these techniques by utilizing three tasks: bilingual lexicon induction (BLI), word retrieval and cross-lingual natural language inference (XNLI) for a high resource (German-English) and a low resource (Bengali-English) language pair. In contrast to previous works which focus only on a few popular models, we compare five multilingual and seven monolingual language models and investigate the effect of various aspects on their performance, such as vocabulary size, number of languages used for training and number of parameters. Additionally, we propose a parameter-, data- and runtime-efficient technique which can be trained with 10% of the data, less than 10% of the time and have less than 5% of the trainable parameters compared to model fine-tuning. We show that our proposed method is competitive with resource heavy models, even outperforming them in some cases, even though it relies on less resource",Bilingual Lexicon Induction (BLI) "Bilingual Lexicon Induction (BLI) aims to map words in one language to their translations in another, and is typically through learning linear projections to align monolingual word representation spaces. Two classes of word representations have been explored for BLI: static word embeddings and contextual representations, but there is no studies to combine both. In this paper, we propose a simple yet effective mechanism to combine the static word embeddings and the contextual representations to utilize the advantages of both paradigms. We test the combination mechanism on various language pairs under the supervised and unsupervised BLI benchmark settings. Experiments show that our mechanism consistently improves performances over robust BLI baselines on all language pairs by averagely improving 3.2 points in the supervised setting, and 3.1 points in the unsupervised setting.",Bilingual Lexicon Induction (BLI) "Bilingual Lexicon Induction (BLI), where words are translated between two languages, is an important NLP task. While noticeable progress on BLI in rich resource languages using static word embeddings has been achieved. The word translation performance can be further improved by incorporating information from contextualized word embeddings. In this paper, we introduce ProMap, a novel approach for BLI that leverages the power of prompting pretrained multilingual and multidialectal language models to address these challenges. To overcome the employment of subword tokens in these models, ProMap relies on an effective padded prompting of language models with a seed dictionary that achieves good performance when used independently. We also demonstrate the effectiveness of ProMap in re-ranking results from other BLI methods such as with aligned static word embeddings. When evaluated on both rich-resource and low-resource languages, ProMap consistently achieves state-of-the-art results. Furthermore, ProMap enables strong performance in few-shot scenarios (even with less than 10 training examples), making it a valuable tool for low-resource language translation. Overall, we believe our method offers both exciting and promising direction for BLI in general and low-resource languages in particular.",Bilingual Lexicon Induction (BLI) "Bilingual lexicon induction (BLI) with limited bilingual supervision is a crucial yet challenging task in multilingual NLP. Current state-of-the-art BLI methods rely on the induction of cross-lingual word embeddings (CLWEs) to capture cross-lingual word similarities; such CLWEs are obtained 1) via traditional static models (e.g., VecMap), or 2) by extracting type-level CLWEs from multilingual pretrained language models (mPLMs), or 3) through combining the former two options. In this work, we propose a novel semi-supervised post-hoc reranking method termed BLICEr (BLI with Cross-Encoder Reranking), applicable to any precalculated CLWE space, which improves their BLI capability. The key idea is to 'extract' cross-lingual lexical knowledge from mPLMs, and then combine it with the original CLWEs. This crucial step is done via 1) creating a word similarity dataset, comprising positive word pairs (i.e., true translations) and hard negative pairs induced from the original CLWE space, and then 2) fine-tuning an mPLM (e.g., mBERT or XLM-R) in a cross-encoder manner to predict the similarity scores. At inference, we 3) combine the similarity score from the original CLWE space with the score from the BLI-tuned cross-encoder. BLICEr establishes new state-of-the-art results on two standard BLI benchmarks spanning a wide spectrum of diverse languages: it substantially outperforms a series of strong baselines across the board. We also validate the robustness of BLICEr with different CLWEs.",Bilingual Lexicon Induction (BLI) "Abstract In the article, we describe recent trends in the detection of hate speech and offensive language on social media. We accord from the latest studies and scientific contributions. The article describes current trends and the most used methods in connection with the detection of hate speech and offensive language. At the same time, we focus on the importance of emoticons, hashtags, and swearing in the field of social networks. We point out the topicality of the selected topic, describe the next direction of our work, and suggest possible solutions to current problems in this field of research.",Hate and Offensive Speech Detection "Preprocessing is a crucial step for each task related to text classification. Preprocessing can have a significant impact on classification performance, but at present there are few large-scale studies evaluating the effectiveness of preprocessing techniques and their combinations. In this work, we explore the impact of 26 widely used text preprocessing techniques on the performance of hate and offensive speech detection algorithms. We evaluate six common machine learning models, such as logistic regression, random forest, linear support vector classifier, convolutional neural network, bidirectional encoder representations from transformers (BERT), and RoBERTa, on four common Twitter benchmarks. Our results show that some preprocessing techniques are useful for improving the accuracy of models while others may even cause a loss of efficiency. In addition, the effectiveness of preprocessing techniques varies depending on the chosen dataset and the classification method. We also explore two ways to combine the techniques that have proved effective during a separate evaluation. Our results show that combining techniques can produce different results. In our experiments, combining techniques works better for traditional machine learning methods than for other methods.",Hate and Offensive Speech Detection "Offensive language and Hate Speech are rampant on social media platforms (Facebook, Twitter, etc.) in Egypt for quite a while now, appearing in Tweets, Facebook posts and comments, etc., It is an increasingly outreaching problem that needs immediate attention. This paper focuses on the problem of detecting and classifying both offensive language and Hate Speech using State-of-the-art techniques in text classification. Pre-trained transformer models have gained a reputation of astounding general language understanding that could be fine-tuned for language-specific tasks like Text classification, We collected an Egyptian-Arabic dialect Custom dataset of about 8,000 text samples manually labelled into 5 distinct classes: (Neutral, Offensive, Sexism, Religious Discrimination, Racism), It was used to fine-tune and evaluate multiple different Arabic pre-trained transformer models based on different transformer architectures and pre-training approaches for the Natural Language Processing downstream task of text classification. We achieved an average accuracy of about 96% across all fine-tuned transformer models.",Hate and Offensive Speech Detection "The easily accessibility of different online platform allows every individuals people to express their ideas and share experiences easily without any restriction because of freedom of speech. Since social media don't have general framework to identify hate and neutral speech this results anonymity. However, the propagation of hate speech on social media distresses the society in many aspects, such as affecting the mental health of targeted audiences, affects social interaction and distraction of properties. This research proposed the SVM with TF-IDF, N-gram, and W2vec feature extraction to construct dataset which is binary classifier to detect hate speech for Afaan Oromoo language. To construct dataset for this study first we crawl data from Facebook posts and comments by using Face pager and scrap storm API. After we collect we labeled the collected data to two class hate and neutral class. The general objective of this research is to design a framework which classify hate and neutral speech. Furthermore, when we compare the results of different Machine Learning algorithms. The experiment is evaluated based on accuracy, F-score, recall and precision measurements. The framework based on SVM with n-gram combination with TF-IDF achieve 96% in all metrics.",Hate and Offensive Speech Detection "On social media networks like Twitter, Facebook, and Tumblr, people frequently share information. However, these platforms are also notorious for the spread of hate speech and insults, often posted anonymously. Hate speech involves using violent, abusive, or aggressive language towards a particular group based on factors such as gender, race, religion, or region. The prevalence of hate speech on these websites is a major concern, and manually detecting it can be time-consuming. To address this issue, this study presents an automated hate speech detection model that is evaluated on a publicly available Twitter dataset. The proposed method emphasizes data pre-processing, including stemming, term frequency-inverse document frequency (TF-IDF) for feature extraction, and various sampling techniques (random sampler, synthetic minority over-sampling technique (SMOTE), and ALL-KNN) to balance an imbalanced dataset. The logistic regression, support vector machine (SVM), and k-nearest neighbor (k-NN) machine learning classifiers were trained and tested using hold-out cross-validation to reduce overfitting and evaluate performance. The performance was evaluated using metrics such as accuracy, precision, and confusion matrix. The results showed that the logistic regression classifier using the SMOTE approach had the best performance, with an accuracy of 82%, a macro average of precision, recall, and an F1-score of 80%, 82%, and 79%, respectively.",Hate and Offensive Speech Detection "The user-generated content on the internet includ- ing that on social media may contain offensive language and hate speech which negatively affect the mental health of the whole internet society and may lead to hate crimes. Intelligent models for automatic detection of offensive language and hate speech have attracted significant attention recently. In this paper, we propose an automatic method for detecting offensive language and fine-grained hate speech from Arabic tweets. We compare between BERT and two conventional machine learning techniques (SVM, logistic regression). We also investigate the use of sentiment analysis and emojis descriptions as appending features along with the textual content of the tweets. The experiments shows that BERT-based model gives the best results, surpassing the best benchmark systems in the literature, on all three tasks: (a) offensive language detection with 84.3% F1-score, (b) hate speech detection with 81.8% F1-score, and (c) fine-grained hate-speech recognition (e.g., race, religion, social class, etc.) with 45.1% F1-score. The use of sentiment analysis slightly improves the performance of the models when detecting offensive language and hate speech but has no positive effect on the performance of the models when recognising the type of the hate speech. The use of textual emoji description as features can improve or deteriorate the performance of the models depending on the size of the examples per class and whether the emojis are considered among distinctive features between classes or not.",Hate and Offensive Speech Detection "The prevalence of social media platforms prompted detecting any language that is intended to harm or intimidate another person or group of people in online posts and comments. On Twitter, for instance, users are susceptible to cyberbullying and hate speech, which may develop into physical and psychological violence. A transformer-based approach is presented in this study to address the offensive speech detection issue. This model employs versions of the CAMeLBERT model and is validated using a mixture of four benchmark Twitter Arabic datasets annotated for hate speech detection task, including the (OSACT5 2022) workshop shared task dataset. The presented model was capable of recognizing Arabic tweets containing offensive speech with 87.15 % accuracy and 83.6 % F1 score.",Hate and Offensive Speech Detection "Social media often serves as a breeding ground for various hateful and offensive content. Identifying such content on social media is crucial due to its impact on the race, gender, or religion in an unprejudiced society. However, while there is extensive research in hate speech detection in English, there is a gap in hateful content detection in low-resource languages like Bengali. Besides, a current trend on social media is the use of Romanized Bengali for regular interactions. To overcome the existing research’s limitations, in this study, we develop an annotated dataset of 10K Bengali posts consisting of 5K actual and 5K Romanized Bengali tweets. We implement several baseline models for the classification of such hateful posts. We further explore the interlingual transfer mechanism to boost classification performance. Finally, we perform an in-depth error analysis by looking into the misclassified posts by the models. While training actual and Romanized datasets separately, we observe that XLM-Roberta performs the best. Further, we witness that on joint training and few-shot training, MuRIL outperforms other models by interpreting the semantic expressions better. We make our code and dataset public for others.",Hate and Offensive Speech Detection "With online social platforms becoming more and more accessible to the common masses, the volume of public utterances on a range of issues, events, and persons etc. has increased profoundly. Though most of the content is a manifestation of personal feelings of the individuals, yet a lot of this content often comprises of hate and offensive speech. Exchange of hate and offensive speech has now become a global phenomenon with increased intolerance among societies. However companies running these social media platforms need to discern and remove such unwanted content. This article focuses on automatic detection of hate and offensive speech from Twitter data by employing both conventional machine learning algorithms as well as deep learning architectures. We conducted extensive experiments on a benchmark 25K Twitter dataset with traditional machine learning algorithms as well as using deep learning architectures. The results obtained by us using deep learning architectures are better than state-of-the-art methods used for hate and offensive speech detection.",Hate and Offensive Speech Detection "Internet and social media usage has skyrocketed over the past two decades, changing how people communicate with one another on a basic level. Numerous favourable results have resulted from this. The risks and harms that come with it are also there. It is impossible for humans to control the amount of damaging content, such as hate speech, that is available online. Researching automated methods for hate speech identification has drawn more attention from academics. Through the creation of a single homogeneous dataset, we investigate various publicly accessible datasets in this work. We establish a baseline model and enhance model performance scores using various optimisation strategies after classifying them into two categories: hate or non-hate. After achieving a competitive performance score, we develop a tool that, using the same feedback, quickly locates and evaluates a page with an effective measure. This tool then retrains our model using the new data. In three languages: English, German, and Spanish. We demonstrate the superior performance of our multilingual approach. In comparison to most monolingual models, this results in performance that is equal to or better.",Hate and Offensive Speech Detection "Because of the rapid advancement of technology over the last several years, the number of internet users is growing at an exponential rate, and as a result, email communication has become popular as a means of exchanging information over the internet. Sending data and communicating with peers via email is the most cost-effective method. These email services also cause problems for users by sending electronic junk mail, often known as spam mail. Spam email is a privacy concern that is linked to a slew of commercial and dangerous websites, causing phishing, virus distribution, and a slew of other problems. This study examines several aspects that have been used for email spam classification, as well as offering an overview of a handful of classifiers or algorithms that have been successfully evaluated, as well as exploratory data analysis. The proposed email spam classifier uses three parallel layers of machine learning and deep learning techniques, followed by a decision function to determine whether or not the emails are spam. During testing, it was found that the proposed classifier beats similar systems on the standard dataset with an accuracy of 98.4%.",Email Spam and Phishing Detection "Email Spam has become a vital issue currently, with high-speed growth of internet users. Some people are using them for illegal conducts, phishing and fraud. Sending malicious link through spam emails which can harm our system and may also they will seek into our system. The need of email spam detection is to prevent spam messages from lagging into user’s inbox so it’ll improve user experience. This project will identify those spam emails by using machine learning approach. Machine learning is one amongst the applications of Artificial Intelligence that allow systems to read and improve from experience without being specific programmed. This paper will discuss the machine learning algorithm which is Naïve Bayes. It is a probabilistic classifier, which means it predicts on the idea of the probability of an object and it is selected for the email spam detection having best precision and accuracy.",Email Spam and Phishing Detection "Email spam has become a prevalent issue in recent times, with the growing number of internet users, spam emails are also on the rise. Many individuals use them for illegal and unethical activities such as phishing and fraud. Spammers send dangerous links through spam emails, which can harm our systems and gain access to personal information. It has become easier for criminals to create fake profiles and email accounts. They often impersonate real individuals in their spam emails, making them difficult to identify. This project aims to identify and detect fraudulent spam messages. The paper will explore the use of machine learning techniques, algorithms, and apply them to data sets. The goal is to select the best methods for maximum precision and accuracy in email spam detection.",Email Spam and Phishing Detection "Anything that is connected to the internet is vulnerable, for example mobile phones, personal laptops, tablets, routers, and smart speakers. Cybercriminals need one point of weakness like unprotected devices or a weak password or any attachment to potentially enter into the system. There is a need to pause before proceeding with any mail or downloading any document or accessing any link in a message because there is a risk of phishing. Every day 320 billion spam emails are sent to many people. According to statistics of spam mail, it was noted that out of every 3000 emails, 1 mail is spam that contains phishing links, malware, fake messages, fake offers, etc. The hacker tries to get confidential information about people, companies, and bank account details. In 2023, Spam mails are still a big real-life problem because some people are still not aware of spam emails and they aren’t able to detect spam mail manually. So, there is a need for the development of a spam detector system that can detect spam emails with higher accuracy. In this paper, there will be a discussion about implementation, execution and obtained results of deep learning algorithms like LSTM (one-directional), BiLSTM (Bi-directional), BERT, and Convolution Neural Networks using a dataset that was downloaded from Kaggle. An accuracy of 98% was obtained with the CNN, 96% was obtained with the LSTM (one-directional) model, 97% with the BiLSTM (Bi-directional) model, and 99% was obtained with the BERT model. The best accuracy ‘of 99%’ with great recall value, less precision, and a great F1 score was attained by implementing the BERT model for spam detection. Keywords-Deep Learning, CIA, SIANN, RFC",Email Spam and Phishing Detection "With the influx of technological advancements and the increased simplicity in communication, especially through emails, the upsurge in the volume of unsolicited bulk emails (UBEs) has become a severe threat to global security and economy. Spam emails not only waste users’ time, but also consume a lot of network bandwidth, and may also include malware as executable files. Alternatively, phishing emails falsely claim users’ personal information to facilitate identity theft and are comparatively more dangerous. Thus, there is an intrinsic need for the development of more robust and dependable UBE filters that facilitate automatic detection of such emails. There are several countermeasures to spam and phishing, including blacklisting and content-based filtering. However, in addition to content-based features, behavior-based features are well-suited in the detection of UBEs. Machine learning models are being extensively used by leading internet service providers like Yahoo, Gmail, and Outlook, to filter and classify UBEs successfully. There are far too many options to consider, owing to the need to facilitate UBE detection and the recent advances in this domain. In this paper, we aim at elucidating on the way of extracting email content and behavior-based features, what features are appropriate in the detection of UBEs, and the selection of the most discriminating feature set. Furthermore, to accurately handle the menace of UBEs, we facilitate an exhaustive comparative study using several state-of-the-art machine learning algorithms. Our proposed models resulted in an overall accuracy of 99% in the classification of UBEs. The text is accompanied by snippets of Python code, to enable the reader to implement the approaches elucidated in this paper",Email Spam and Phishing Detection "Phishing emails pose a severe risk to online users, necessitating effective identification methods to safeguard digital communication. Detection techniques are continuously researched to address the evolution of phishing strategies. Machine learning (ML) is a powerful tool for automated phishing email detection, but existing techniques like support vector machines and Naive Bayes have proven slow or ineffective in handling spam filtering. This study attempts to provide a phishing email detector and reliable classifier using a hybrid machine classifier with term frequency-inverse document frequency (TF-IDF) and an effective feature extraction technique (FET) on a real-world dataset from Kaggle. Exploratory data analysis is conducted to enhance understanding of the dataset and identify any conspicuous errors and outliers to facilitate the detection process. The FET converts the data text into a numerical representation that can be used for ML algorithms. The model’s performance is evaluated using accuracy, precision, recall, F1 score, receiver operating characteristic (ROC) curve and area under the ROC curve metrics. The research findings indicate that the hybrid model utilising TF-IDF achieved superior performance, with an accuracy of 87.5%. The paper offers valuable knowledge on using ML to identify phishing emails and highlights the importance of combining various models.",Email Spam and Phishing Detection "Spam is the act of sending unsolicited emails to a large number of users for phishing, spreading malware, etc. Internet Service Providers (ISPs) and email inbox providers (like Gmail, Yahoo Mail, AOL, etc.) rely on SPAM filters, firewalls, and blacklist directories to prevent ""unsolicited"" SPAM emails from entering your inbox. Spam mails are overrunning email inboxes, which significantly slows down internet performance. It is crucial to properly analyze the connections between these spammers and spam because the majority of us tend to provide them with crucial information, such as our contact information. Since the benefactor covers a large percentage of the costs related to spamming, it effectively serves as advertising for the cost of mailing. The study of existing work shows that machine learning and deep learning are frequently employed to effectively identify email spam. This research paper is secondary work in which we have studied, and implemented the various machine learning and deep learning approaches to identify email spam in Python. The four machine learning algorithms—KNN, Navies Bayes, BiLSTM, and Deep CNN—show that they can be utilized effectively to detect spam. Yet the Deep CNN outperforms the other three based on accuracy and the F1 score.",Email Spam and Phishing Detection "The risk of cyberattacks against businesses has risen considerably, with Business Email Compromise (BEC) schemes taking the lead as one of the most common phishing attack methods. The daily evolution of this assault mechanism’s attack methods has shown a very high level of proficiency against organisations. Since the majority of BEC emails lack a payloader, they have become challenging for organisations to identify or detect using typical spam filtering and static feature extraction techniques. Hence, an efficient and effective BEC phishing detection approach is required to provide an effective solution to various organisations to protect against such attacks. This paper provides a systematic review and examination of the state of the art of BEC phishing detection techniques to provide a detailed understanding of the topic to allow researchers to identify the main principles of BEC phishing detection, the common Machine Learning (ML) algorithms used, the features used to detect BEC phishing, and the common datasets used. Based on the selected search strategy, 38 articles (of 950 articles) were chosen for closer examination. Out of these articles, the contributions of the selected articles were discussed and summarised to highlight their contributions as well as their limitations. In addition, the features of BEC phishing used for detection were provided, as well as the ML algorithms and datasets that were used in BEC phishing detection models were discussed. In the end, open issues and future research directions of BEC phishing detection based on ML were discussed.",Email Spam and Phishing Detection "Breakthroughs in technology are happening as we speak, but the threat of their misuse is also increasing. Even a tiny amount of exposure within an organization can potentially force the organization out of business. In a digital world, information is the greatest asset. A phishing attack is an attack on the critical information of an individual or an organization. In a phishing attack, the perpetrator uses emails to lure people from different organizations or individuals for using infected URLs, attachments, and offers. The emails contain URLs, sender email information, and reply email information, masked with a legit source to hide the malicious content. Because an individual or the organization receives a vast number of emails every day, it is difficult to detect the infected emails. In such cases, Machine Learning algorithms categorize emails into spam and legitimate mail. A Naive Bayesian network is a supervised Machine Learning algorithm, while it is also an effective way to classify a large number of emails. The Naive Bayesian Classifier is fast in the classification of a large dataset. To further improve the performance, Count Vectorization is applied, and for determining the legitimacy of the sender's email, used Blacklisting algorithm. In this paper, we have analyzed machine learning algorithms for the classification of emails.",Email Spam and Phishing Detection "The proliferation of phishing sites and emails poses significant challenges to existing cybersecurity efforts. Despite advances in spam filters and email security protocols, problems with oversight and false positives persist. Users often struggle to understand why emails are flagged as spam, risking the possibility of missing important communications or mistakenly trusting phishing emails. This study introduces ChatSpamDetector, a system that uses large language models (LLMs) to detect phishing emails. By converting email data into a prompt suitable for LLM analysis, the system provides a highly accurate determination of whether an email is phishing or not. Importantly, it offers detailed reasoning for its phishing determinations, assisting users in making informed decisions about how to handle suspicious emails. We conducted an evaluation using a comprehensive phishing email dataset and compared our system to several LLMs and baseline systems. We confirmed that our system using GPT-4 has superior detection capabilities with an accuracy of 99.70%. Advanced contextual interpretation by LLMs enables the identification of various phishing tactics and impersonations, making them a potentially powerful tool in the fight against email-based phishing threats.",Email Spam and Phishing Detection "Fake news production, accessibility, and consumption have all increased with the rise of internet-connected gadgets and social media platforms. A good fake news detection system is essential because the news readers receive can affect their opinions. Several works on fake news detection have been done using machine learning and deep learning approaches. Recently, the deep learning approach has been preferred over machine learning because of its ability to comprehend the intricacies of textual data. The introduction of transformer architecture changed the NLP paradigm and distinguished itself from recurrent models by enabling the processing of sentences as a whole rather than word by word. The attention mechanisms introduced in Transformers allowed them to understand the relationship between far-apart tokens in a sentence. Numerous deep learning works on fake news detection have been published by focusing on different features to determine the authenticity of a news source. We performed an extensive analysis of the comprehensive NELA-GT 2020 dataset, which revealed that the title and content of a news source contain discernible information critical for determining its integrity. To this objective, we introduce ‘FakeNews Transformer’ — a specialized Transformer-based architecture that considers the news story’s title and content to assess its veracity. Our proposed work achieved an accuracy of 74.0% on a subset of the NELA-GT 2020 dataset. To our knowledge, FakeNews Transformer is the first published work that considers both title and content for evaluating a news article; thus, we compare the performance of our work against two BERT and two LSTM models working independently on title and content. Our work outperformed the BERT and LSTM models working independently on title by 7.6% and 9.6%, while performing better than the BERT and LSTM models working independently on content by 8.9% and 10.5%, respectively.",Fake News Detection "The strategy for identifying fake news incorporates in blending of Natural Language Processing (NLP) techniques, Reinforcement Learning (RL) and block chain technology. Identifying false information on Twitter is essential because of the platform's broad appeal and significant impact on public conversation. For millions of people globally, Twitter is their main source of news, which makes it a great place for information to spread quickly. The procedure commences with gathering a comprehensive dataset of news articles and their corresponding metadata, followed by NLP-based pre-processing to cleanse and tokenize the text. Pertinent attributes, such as word frequency and readability, are then extracted and utilized to train an RL agent. This agent has received training to distinguish between between authentic and fabricated news through a system of rewards and penalties. After training, the RL agent uses the traits it has collected to classify fresh news as true or false. While the potential role of block chain technology is mentioned, further explanation is necessary. This inventive strategy aims to halt the sharing of misleading information and untrue in the realm of digital news.",Fake News Detection "The paper presents our solutions for the MediaEval 2020 task namely FakeNews: Corona Virus and 5G Conspiracy Multimedia Twitter-Data-Based Analysis. The task aims to analyze tweets related to COVID-19 and 5G conspiracy theories to detect misinformation spreaders. The task is composed of two sub-tasks namely (i) text-based, and (ii) structure-based fake news detection. For the first task, we propose six different solutions relying on Bag of Words (BoW) and BERT embedding. Three of the methods aim at binary classification task by differentiating in 5G conspiracy and the rest of the COVID-19 related tweets while the rest of them treat the task as ternary classification problem. In the ternary classification task, our BoW and BERT based methods obtained an F1-score of .606% and .566% on the development set, respectively. On the binary classification, the BoW and BERT based solutions obtained an average F1-score of .666% and .693%, respectively. On the other hand, for structure-based fake news detection, we rely on Graph Neural Networks (GNNs) achieving an average ROC of .95% on the development set. © 2020 Copyright 2020 for this paper by its authors. All Rights Reserved.",Fake News Detection "In today's digital age, the swift spreading of information has revolutionized the way for news consumers and makes them informed. However, this convenience comes with a downside – the propagation of fake news, which can spread misinformation, manipulate public opinions, and undermine the credibility of legitimate sources. The term ""fake news"" refers to intentionally fabricated or misleading information that is frequently presented as news for a variety of cognitive processes, including commercial, social, or political gain. Machine Learning (ML), with its ability to analyze large datasets and discern patterns, has emerged as a promising solution for tackling the issue of fake news. By leveraging techniques such as Natural Language Processing (NLP), classification algorithms, and anomaly detection, ML models can be trained to identify and differentiate between authentic news and fake news. That in turn prevents the news consumer from misleading, prevent a product or service from defame and also helps in political defaming. Machine learning algorithms can be used to analyze historical data and make accurate predictions about whether news fakes or not. In this study, the proposed machine learning-based news analysis model utilizes feature selection technique to categorize the news. The model explores different classification algorithms, including Decision Tree (DT), Passive Aggressive Classifier (PAC), Logistic Regression (LR), and Random Forest (RF), to build the fake news prediction model. The experimental results show that the Passive Aggressive Classifier outperforms other models with an accuracy rate of 93%. The proposed model can help news channels; social media and consumers to distinguish between fake and real news, and minimize the risk of misleading",Fake News Detection "Given the ubiquity of fake news online, a reliable mechanism for automated detection is needed. This project proposes a new end-to-end detection pipeline, which uses Natural Language Processing (NLP) techniques for automated evidence extraction from online sources given an input claim of arbitrary length. This project also compiles a dataset of input claims and evidence larger than state of the art datasets. Distant supervision is used to generate weakly labelled training data and increase sample size. The resultant dataset displays topical variation and variations in length and features. The final ensemble models demonstrate high detection accuracy and micro-average F1 scores. The results validate distance supervision as a viable strategy for model training and data collection. A ConvNet-RNN hybrid was found to be the best performing style based model, while a Siamese LSTM with layer-weights sharing was found to be the best performing truth based model. Generally, truth based models outperformed style based models, and ensembling different models leads to performance gains over any single classifier.",Fake News Detection "With the widespread use of social media platforms within our modern society, these platforms have become a popular medium for disseminating news across the globe. While some of these platforms are considered reliable sources for sharing news, others publicize the information without much validation. The transmission of fake news on social media impacts people’s behavior and negatively influences people’s decisions. During the COVID-19 outbreak, it was more evident than ever. This has led to a demand for conducting research studies to explore sophisticated approaches to assess the integrity of news worldwide. The main objective of this research paper was to outline our proposed experimental methodology to detect and access fake news using Data Mining and Natural Language Processing. The presented research effort provides a method to verify the authenticity of the news disseminated in social networks by dividing the process into four significant stages: news aggregation, publication collection, data analysis, and matching results.",Fake News Detection "With the development of technology, the spread of fake news on social networks is increasing. Many researchers and organizations have taken action to detect fake news manually or automatically. In this study, various Machine Learning Algorithms and Transformer based approaches are used to select the best performing model that can distinguish news as fake or real. In order to contribute to the Turkish literature in the field of Natural Language Processing (NLP), the dataset is specifically prepared in Turkish. The words were vectorized using Word2Vec, BERT and SBERT and classified using Machine Learning Algorithms such as Support Vector Machines, Naive Bayes, Logistic Regression, KNN and BERT/SBERT deep learning models. The highest F1 score of 0.99 was obtained from the transformer-based BERT and SBERT.",Fake News Detection "Currently, Fake News easily go viral on social networks, this is a cause for concern worldwide. An alternative to detect this type of information is the use of Machine Learning and Natural Language Processing. Nevertheless, due to the high volume of information it is crucial to define mechanism easy to implement and to deploy. The aim of this research is to demonstrate that the use of basic Neural Networks together with a modified hyperparameter optimization algorithm, allows obtaining similar results to those obtained when using SVM and NLP. The source of information covers verified trending news in the country as well as false headers from mid-2021 to February 2023. The output of the experiments determines that 86% of true news can be accurately identified with the proposed approach. While, the 78% of fake news can be accurately identified too with a mean error around 0.049.",Fake News Detection "The upsurge of fake news in recent times, facilitated due to the swift dissemination of information on social media, has necessitated the development of advanced detection techniques. This research focuses on optimizing the Hugging Face Transformer models – a cutting-edge Natural Language Processing (NLP) tool – to enhance fake news detection. These models are widely recognized for their superior performance in understanding and generating human language. However, their application in fake news detection remains under-explored. Thus, this paper explores how Python, a high-level programming language known for its simplicity and robustness, can be used to fine-tune these models for this purpose. The primary objective is to improve their speed, efficiency, and accuracy in detecting fake news. We propose a comprehensive framework that uses Python-based methodologies to tweak various aspects of the Hugging Face Transformer models, such as their architecture, training paradigms, and hyperparameters. The expected outcome is a significant improvement in the models’ performance metrics, which will be evaluated using standard benchmarks within the fake-news detection domain. Overall, this research paves the way for harnessing the full potential of Hugging Face Transformers in curbing the menace of fake news, thus contributing to a more reliable and truthful digital information ecosystem.",Fake News Detection "Fake news has been a problem ever since the internet boomed. The easier access and exponential growth of the knowledge offered on social media networks have created it knotty to largely differentiate between false and true information. Opposing such fake news is important because the world's view and mindset are shaped by information. People form their own opinions through the day-to-day news. If this information is false, it can have devastating consequences. The quality of social media networks is additionally at stake wherever the spreading of pretend data is prevailing. Machine learning and Natural Language Processing have competed for a significant role in the classification of the data though with some limitations. The need of an hour is to stop these types of fake news especially in developing countries like India and focus on the correct, proper news article which will not affect people's mentality negatively.",Fake News Detection "Recently, product/ service reviews and online businesses have been similar to the blood–heart relationship as they greatly impact customers’ purchase decisions. There is an increasing incentive to manipulate reviews, mostly profit-motivated, as positive reviews imply high purchases and vice versa. Therefore, a suitable fake review detection approach is paramount in ensuring fair e-business competition and sustainability. Most existing methods mainly utilize discrete review features such as text similarity, rating deviation, review content, product information, the semantic meaning of reviews, and reviewer behaviors. In the matter of discourse, some recent researchers attempted multi-feature (review- and reviewer-centric features) integration. However, such approaches face two issues: (1) Review representation is extracted in an independent manner, thus ignoring correlations between them (2) Lack of a unified framework that can jointly learn latent text feature vectors, aspect ratings, and overall rating. To address the named issues, we propose a novel Deep Hybrid Model for fake review detection, which jointly learns from latent text feature vectors, aspect ratings, and overall ratings. Initially, it computes contextualized review text vectors, extracts aspects, and calculates respective rating values. Then, contextualized word vectors, overall ratings, and aspect ratings are concatenated. Finally, the model learns to classify reviews from such unified multi-dimensional feature representation. Extensive experiments on a publicly available dataset demonstrate that the proposed approach significantly outperforms state-of-the-art baseline approaches.",Fake Review Detection "Now-a-days, use of apps has increased with the increasing craze towards mobiles. For all types of mobile application, users are preferring smartphones. Generally, depending on how many users already have downloaded that application? , what are the ratings and reviews? , what are the comments? , etc., users download mobile applications. In the mobile app market, fraud ranking points to false activity that has a reason to push up the mobile apps in the popularity list. Certainly, it turns more periodic for app developers to use fake mechanism. Here, the paper proposes semantic analysis of app review for fraud detection in mobile apps. Firstly we propose to correctly detect the misrepresentation by excavating the active periods, also called as leading sessions, of the mobile apps. Furthermore, we will inspect two types of evidences, namely, ranking-based evidences, review-based evidences and use natural language processing (NLP) to get action words. Next, convert review to ratings and finally perform pattern analysis on session with app data gathered from the app store. So, the paper proposes approach to validate its effectiveness, and also show the scalability of the detection algorithm.",Fake Review Detection "Online reviews are a growing market, but it is struggling with fake reviews. They undermine both the value of reviews to the user, and their trust in the review sites. However, fake positive reviews can boost a business, and so a small industry producing fake reviews has developed. The two sides are facing an arms race that involves more and more natural language processing (NLP). So far, NLP has been used mostly for detection, and works well on human-generated reviews. But what happens if NLP techniques are used to generate fake reviews as well? We investigate the question in an adversarial setup, by assessing the detectability of different fake-review generation strategies. We use generative models to produce reviews based on meta-information, and evaluate their effectiveness against deceptiondetection models and human judges. We find that meta-information helps detection, but that NLP-generated reviews conditioned on such information are also much harder to detect than conventional ones.",Fake Review Detection "Online shopping stores have grown steadily over the past few years. Due to the massive growth of these businesses, the detection of fake reviews has attracted attention. Fake reviews are seriously trying to mislead customers and thereby undermine the honesty and authenticity of online shopping environments. So far, various of fake review classifiers have been proposed that take into account the actual content of the review. To improve the accuracies of existing fake review classification or detection approaches, we propose to use BERT (Bidirectional Encoder Representation from Transformers) model to extract word embeddings from texts (i.e. reviews). Word embeddings are obtained in various basic methods such as SVM (Support vector machine), Random Forests, Naive Bayes and others. The confusion matrix method was also taken into account to evaluate and graphically represent the results. The results indicate that the SVM classifiers outperforms the others in terms of accuracy and f1-score with an accuracy of 87.81%, which is 7.6% higher than the classifier used in the previous study [5].",Fake Review Detection "In this COVID-19 scenario the majority have an interest in on-line searching. So, many folks order the merchandise depends on the previous reviews. These reviews square measure enjoying necessary role in creating purchase choices. however, in these reviews' spammers might manufacture pretend reviews because of such behavior of spammers clients would I mislead and create the incorrect call to beat this drawback we've to spot the actual one who posed reviews over just once and therefore the admin can delete that review supported the customer review info.",Fake Review Detection "In order to enhance brand benefits or discredit competitors, some merchants hire fake reviewers to post large amounts of fake reviews on e-commerce platforms. This behavior inevitably harms consumers’ interests and causes unfair market competition for other merchants. Researches on fake review detection mainly focus on mining the content of the reviews, the behavioral features of the reviewers, or building models using deep learning. However, most existing research have not taken into the differences in motivation between fake positive and fake negative reviews, the review time distribution features of true reviewers post reviews, and how to effectively integrate multi-modal features. In this paper, we collect restaurant review datasets from Yelp.com in three different regions, and propose a fake review detection method based on a neural network model called BERT-Multi feature-TextCNN-BiGRU-Attention(BMTBA). Firstly, we use the BERT pre-training model to train a restaurant review language model. Then, we propose to use a multimodal fusion method to combine the BERT pre-trained word vector sequences with extracted multidimensional statistical features as input(including a newly proposed reviewer feature called Review weekday). Finally, considering that the motivation for fake positive and fake negative reviews is different, we construct fake positive and fake negative model separately to detect them. Multiple ablation experiments are conducted on the three datasets mentioned above, and the results show that the proposed BMTBA model outperformed the baseline model (BERT-TextCNN-BiGRU-Attention) with a higher classification detection accuracy of 94.68%.",Fake Review Detection "Detecting fake reviews can help customers make better purchasing decisions and maintain a positive online business environment. In recent years, pre-trained language models have significantly improved the performance of natural language processing tasks. These models are able to generate different representation vectors for each word in different contexts, thus solving the challenge of multiple meanings of a word, which traditional word vector methods such as Word2Vec cannot solve, and, therefore, better capturing the text’s contextual information. In addition, we consider that reviews generally contain rich opinion and sentiment expressions, while most pre-trained language models, including BERT, lack the consideration of sentiment knowledge in the pre-training stage. Based on the above considerations, we propose a new fake review detection model based on a pre-trained language model and convolutional neural network, which is called BSTC. BSTC considers BERT, SKEP, and TextCNN, where SKEP is a pre-trained language model based on sentiment knowledge enhancement. We conducted a series of experiments on three gold-standard datasets, and the findings illustrate that BSTC outperforms state-of-the-art methods in detecting fake reviews. It achieved the highest accuracy on all three gold-standard datasets—Hotel, Restaurant, and Doctor—with 93.44%, 91.25%, and 92.86%, respectively.",Fake Review Detection "Fake (deceptive) reviews have become a serious problem for online consumers, with the proliferation of online marketplaces leading to an increase in spurious reviews that are often used to lure or discourage potential customers. While sentiment analysis has been introduced to the e-commerce sector, the lack of an effective method to differentiate between authentic and fake reviews is still a major challenge. Existing approaches face issues such as slow convergence and inadequate precision. In order to address these challenges, this paper proposes a new approach that integrates sentiment features into the review detection process. The proposed approach uses a feature extraction method that utilizes a preconstructed sentiment dictionary, a pre-trained BERT model to extract feature vectors, and a fully connected dense layer to classify reviews as real or fake using the softMax function. The effectiveness of the proposed approach was evaluated on the Yelp dataset, showing a nearly 7% improvement in accuracy compared to existing feature sets and a nearly 4% improvement over existing state-of-the-art methods. The integration of sentiment features has shown promising results in detecting fake reviews, which is crucial for ensuring a fair and trustworthy online marketplace.",Fake Review Detection "The increasing prevalence of fake online reviews jeopardizes firms' profits, consumers' well-being, and the trustworthiness of e-commerce ecosystems. We face the significant challenge of accurately detecting fake reviews. In this paper, we undertake a comprehensive investigation of traditional and state-of-the-art machine learning models in classification, based on textual features, to detect fake online reviews. We attempt to examine existing and noteworthy models for fake online review detection, in terms of the effectiveness of textual features, the efficiency of sampling methods, and their performance of detection. Adopting a quantitative and data-driven approach, we scrutinize both tree-based and transformer-based detection models. Our comparative studies evidence that transformer-based models (specifically BERT and GPT-3) outperform tree-based models (i.e., Random Forest and XGBoost), in terms of accuracy, precision, and recall metrics. We use real data from online reviews on Yelp.com for implementation. The results demonstrate that our proposed approach can identify fraudulent reviews effectively and efficiently. Synthesizing ChatGPT-3, tree-based, and transformer-based models for fake online review detection is rather new but promising, this paper highlights their potential for better detection of fake online reviews.",Fake Review Detection "Fighting fake news is a difficult and challenging task. With an increasing impact on the social and political environment, fake news exert an unprecedently dramatic influence on people’s lives. In response to this phenomenon, initiatives addressing automated fake news detection have gained popularity, generating widespread research interest. However, most approaches targeting English and low-resource languages experience problems when devising such solutions. This study focuses on the progress of such investigations, while highlighting existing solutions, challenges, and observations shared by various research groups. In addition, given the limited amount of automated analyses performed on Romanian fake news, we inspect the applicability of the available approaches in the Romanian context, while identifying future research paths.",Fake Review Detection "Although previous research on Aspect-based Sentiment Analysis (ABSA) for Indonesian reviews in hotel domain has been conducted using CNN and XGBoost, its model did not generalize well in test data and high number of OOV words contributed to misclassification cases. Nowadays, most state-of-the-art results for wide array of NLP tasks are achieved by utilizing pretrained language representation. In this paper, we intend to incorporate one of the foremost language representation model, BERT, to perform ABSA in Indonesian reviews dataset. By combining multilingual BERT (m-BERT) with task transformation method, we manage to achieve significant improvement by 8% on the F1-score compared to the result from our previous study.",Aspect-Based Sentiment Analysis (ABSA) "Aspect-based sentiment analysis (ABSA), a task in sentiment analysis, predicts the sentiment polarity of specific aspects mentioned in the input sentence. Recent research has demonstrated the effectiveness of Bidirectional Encoder Representation from Transformers (BERT) and its variants in improving the performance of various Natural Language Processing (NLP) tasks, including sentiment analysis. However, BERT, trained on Wikipedia and BookCorpus dataset, lacks domain-specific knowledge. Also, for the ABSA task, the Attention mechanism leverages the aspect information to determine the sentiment orientation of the aspect within the given sentence. Based on the abovementioned observations, this paper proposes a novel approach called the IAN-BERT model. The IAN-BERT model leverages attention mechanisms to enhance a post-trained BERT representation trained on Amazon and Yelp datasets. The objective is to capture domain-specific knowledge using BERT representation and identify the significance of context words with aspect terms and vice versa. By incorporating attention mechanisms, the IAN-BERT model aims to improve the model’s ability to extract more relevant and informative features from the input text, ultimately leading to better predictions. Experimental evaluations conducted on SemEval-14 (Restaurant and Laptop dataset) and MAMS dataset demonstrate the effectiveness and superiority of the IAN-BERT model in aspect-based sentiment analysis.",Aspect-Based Sentiment Analysis (ABSA) "Due to the breathtaking growth of social media or newspaper user comments, online product reviews comments, sentiment analysis (SA) has captured substantial interest from the researchers. With the fast increase of domain, SA work aims not only to predict the sentiment of a sentence or document but also to give the necessary detail on different aspects of the sentence or document (i.e. aspect-based sentiment analysis). A considerable number of datasets for SA and aspect-based sentiment analysis (ABSA) have been made available for English and other well-known European languages. In this paper, we present a manually annotated Bengali dataset of high quality, BAN-ABSA, which is annotated with aspect and its associated sentiment by three native Bengali speakers. The dataset consists of 2619 positive, 4721 negative and 1669 neutral data samples from 9009 unique comments gathered from some famous Bengali news portals. In addition, we conducted a baseline evaluation with a focus on deep learning model, achieved an accuracy of 78.75% for aspect term extraction and accuracy of 71.08% for sentiment classification. Experiments on the BAN-ABSA dataset show that the CNN model is better in terms of accuracy though Bi-LSTM significantly outperforms CNN model in terms of average F1-score.",Aspect-Based Sentiment Analysis (ABSA) "This study aims to gain a deeper understanding of online student reviews regarding the learning process at a private university in Indonesia and to compare the effectiveness of several algorithms: Naive Bayes, K-NN, Decision Tree, and Indo-Bert. Traditional Sentiment Analysis methods can only analyze sentences as a whole, prompting this research to develop an Aspect-Based Sentiment Analysis (ABSA) approach, which includes aspect extraction and sentiment classification. However, ABSA has inconsistencies in aspect detection and sentiment classification. To address this, we propose the BERT method using the pre-trained Indo-Bert model, currently the best NLP model for the Indonesian language. This study also fine-tunes hyperparameters to optimize results. The dataset comprises 10,000 student reviews obtained from online questionnaires. Experimental results show that the aspect extraction model has an accuracy of 0.890 and an F1-Score of 0.897, while the sentiment classification model has an accuracy of 0.879 and an F1-Score of 0.882. These results demonstrate the effectiveness of the proposed method in identifying aspects and sentiments in student reviews and provide a comparison between the four algorithms.",Aspect-Based Sentiment Analysis (ABSA) "spect-Based Sentiment Analysis (ABSA) is increasingly crucial in Natural Language Processing (NLP) for applications such as customer feedback analysis and product recommendation systems. ABSA goes beyond traditional sentiment analysis by extracting sentiments related to specific aspects mentioned in the text; existing attention-based models often need help to effectively connect aspects with context due to language complexity and multiple sentiment polarities in a single sentence. Recent research underscores the value of integrating syntactic information, such as dependency trees, to understand long-range syntactic relationships better and link aspects with context. Despite these advantages, challenges persist, including sensitivity to parsing errors and increased computational complexity when combining syntactic and semantic information. To address these issues, we propose Amplifying Aspect-Sentence Awareness (A3SN), a novel technique designed to enhance ABSA through amplifying aspect-sentence awareness attention. Following the transformer's standard process, our innovative approach incorporates multi-head attention mechanisms to augment the model with sentence and aspect semantic information. We added another multi-head attention module: amplify aspect-sentence awareness attention. By doubling its focus between the sentence and aspect, we effectively highlighted aspect importance within the sentence context. This enables accurate capture of subtle relationships and dependencies. Additionally, gated fusion integrates feature representations from multi-head and amplified aspect-sentence awareness attention mechanisms, which is essential for ABSA. Experimental results across three benchmark datasets demonstrate A3SN's effectiveness and outperform state-of-the-art (SOTA) baseline models.",Aspect-Based Sentiment Analysis (ABSA) "Sentiment analysis is a natural language processing (NLP) task of identifying orextracting the sentiment content of a text unit. This task has become an active research topic since the early 2000s. During the two last editions of the VLSP workshop series, the shared task on Sentiment Analysis (SA) for Vietnamese has been organized in order to provide an objective evaluation measurement about the performance (quality) of sentiment analysis tools, and encouragethe development of Vietnamese sentiment analysis systems, as well as to provide benchmark datasets for this task. The rst campaign in 2016 only focused on the sentiment polarity classication, with a dataset containing reviews of electronic products. The second campaign in 2018 addressed the problem of Aspect Based Sentiment Analysis (ABSA) for Vietnamese, by providing two datasets containing reviews in restaurant and hotel domains. These data are accessible for research purpose via the VLSP website vlsp.org.vn/resources. This paper describes the built datasets as well as the evaluation results of the systems participating to these campaigns.",Aspect-Based Sentiment Analysis (ABSA) "Aspect-based sentiment analysis (ABSA) is a task in natural language processing (NLP) that involves predicting the sentiment polarity towards a specific aspect in text. Graph neural networks (GNNs) have been shown to be effective tools for sentiment analysis tasks, but current research often overlooks affective information in the text, leading to irrelevant information being learned for specific aspects. To address this issue, we propose a novel GNN model, MHAKE-GCN, which is based on the graph convolutional neural network (GCN) and multi-head attention (MHA). Our model incorporates external sentiment knowledge into the GCN and fully extracts semantic and syntactic information from a sentence using MHA. By adding weights to sentiment words associated with aspect words, our model can better learn sentiment expressions related to specific aspects. Our model was evaluated on four publicly benchmark datasets and compared against twelve other methods. The results of the experiments demonstrate the effectiveness of the proposed model for the task of aspect-based sentiment analysis.",Aspect-Based Sentiment Analysis (ABSA) "Sentiment analysis (SA) is also known as opinion mining, it is the process of gathering and analyzing people's opinions about a particular service, good, or company on websites like Twitter, Facebook, Instagram, LinkedIn, and blogs, among other places. This article covers a thorough analysis of SA and its levels. This manuscript's main focus is on aspect-based SA, which helps manufacturing organizations make better decisions by examining consumers' viewpoints and opinions of their products. The many approaches and methods used in aspect-based sentiment analysis are covered in this review study (ABSA). The features associated with the aspects were manually drawn out in traditional methods, which made it a time-consuming and error-prone operation. Nevertheless, these restrictions may be overcome as artificial intelligence develops. Therefore, to increase the effectiveness of ABSA, researchers are increasingly using AI-based machine learning (ML) and deep learning (DL) techniques. Additionally, certain recently released ABSA approaches based on ML and DL are examined, contrasted, and based on this research, gaps in both methodologies are discovered. At the conclusion of this study, the difficulties that current ABSA models encounter are also emphasized, along with suggestions that can be made to improve the efficacy and precision of ABSA systems.",Aspect-Based Sentiment Analysis (ABSA) "Aspect-based sentiment analysis (ABSA) is currently among the most vigorous areas in natural language processing (NLP). Individuals, private and government institutions are increasingly using media sources for decision making. In the last decade, aspect extraction has been the most essential phase of sentiment analysis (SA) to conduct an abridged sentiment classification. However, previous studies on sentiment analysis mostly focused on explicit aspects extraction with limited work on implicit aspects. To the best of our knowledge, this is the first systematic review that covers implicit, explicit, and the combination of both implicit and explicit aspect extractions. Therefore, this systematic review has been conducted to, 1) identify techniques used for extracting implicit, explicit, or both implicit and explicit aspects; 2) analyze the various evaluation metrics, data domains, and languages involved in the implicit and explicit aspect extraction in sentiment analysis from years 2008 to 2019; 3) identify the key challenges associated with the techniques based on the result of a comprehensive comparative analysis; and finally, 4) highlight the feasible opportunities for future research directions. This review can be used to assist novice and prominent researchers to understand the concept of both implicit and explicit aspect extractions in aspect-based sentiment analysis domain.",Aspect-Based Sentiment Analysis (ABSA) "Sentiment analysis has become one of the most important tools in natural language processing, since it opens many possibilities to understand people's opinions on different topics. Aspect-based sentiment analysis aims to take this a step further and find out what exactly someone is talking about, and if he likes or dislikes it. Real world examples of perfect areas for this topic are the millions of available customer reviews in online shops. There have been multiple approaches to tackle this problem, using machine learning, deep learning and neural networks. However, currently the number of labeled reviews for training classifiers is very small. Therefore, we undertook multiple steps to research ways of improving ABSA performance on small datasets, by comparing recurrent and feed-forward neural networks and incorporating additional input data that was generated using different readily available NLP tools.",Aspect-Based Sentiment Analysis (ABSA) "Recent research on dialog state tracking (DST) focuses on methods that allow few- and zero-shot transfer to new domains or schemas. However, performance gains heavily depend on aggressive data augmentation and fine-tuning of ever larger language model based architectures. In contrast, general purpose language models, trained on large amounts of diverse data, hold the promise of solving any kind of task without task-specific training. We present preliminary experimental results on the ChatGPT research preview, showing that ChatGPT achieves state-of-the-art performance in zero-shot DST. Despite our findings, we argue that properties inherent to general purpose models limit their ability to replace specialized systems. We further theorize that the in-context learning capabilities of such models will likely become powerful tools to support the development of dedicated dialog state trackers and enable dynamic methods.",Dialogue State Tracking (DST) "Dialogue State Tracking (DST) is a sub-task of task-based dialogue systems where the user intention is tracked through a set of (domain, slot, slot-value) triplets. Existing DST models can be difficult to extend for new datasets with larger domains/slots mainly due to either of the two reasons- i) prediction of domain-slot as a pair, and ii) dependency of model parameters on the number of slots and domains. In this work, we propose to address these issues using a Hierarchical DST (Hi-DST) model. At a given turn, the model first detects a change in domain followed by domain prediction if required. Then it decides suitable action for each slot in the predicted domains and finds their value accordingly. The model parameters of Hi-DST are independent of the number of domains/slots. Due to the hierarchical modeling, it achieves O(|M|+|N|) belief state prediction for a single turn where M and N are the set of unique domains and slots respectively. We argue that the hierarchical structure helps in the model explainability and makes it easily extensible to new datasets. Experiments on the MultiWOZ dataset show that our proposed model achieves comparable joint accuracy performance to state-of-the-art DST models.",Dialogue State Tracking (DST) "The dialogue state tracking module is a crucial component of task-oriented dialogue systems. Recently, some Dialogue State Tracking (DST) methods have used the previous dialogue state as auxiliary input, resulting in errors that propagate and subsequently affect predictions. This paper proposes utilizing dialogue-level state as the prediction target and randomly removing historical dialogue state during training. The experiments demonstrate that this approach can effectively enhance the performance of the DST algorithm, alleviate error propagation, and achieve competitive results on both noisy (MultiWOZ 2.1) and clean (MultiWOZ 2.4) datasets.",Dialogue State Tracking (DST) "Sequence-to-sequence state-of-the-art systems for dialogue state tracking (DST) use the full dialogue history as input, represent the current state as a list with all the slots, and generate the entire state from scratch at each dialogue turn. This approach is inefficient, especially when the number of slots is large and the conversation is long. We propose Diable, a new task formalisation that simplifies the design and implementation of efficient DST systems and allows one to easily plug and play large language models. We represent the dialogue state as a table and formalise DST as a table manipulation task. At each turn, the system updates the previous state by generating table operations based on the dialogue context. Extensive experimentation on the MultiWoz datasets demonstrates that Diable (i) outperforms strong efficient DST baselines, (ii) is 2.4x more time efficient than current state-of-the-art methods while retaining competitive Joint Goal Accuracy, and (iii) is robust to noisy data annotations due to the table operations approach.",Dialogue State Tracking (DST) "Recently proposed dialogue state tracking (DST) approaches predict the dialogue state of a target turn sequentially based on the previous dialogue state. During the training time, the ground-truth previous dialogue state is utilized as the historical context. However, only the previously predicted dialogue state can be used in inference. This discrepancy might lead to error propagation, i.e., mistakes made by the model in the current turn are likely to be carried over to the following turns.To solve this problem, we propose Correctable Dialogue State Tracking (Correctable-DST). Specifically, it consists of three stages: (1) a Predictive State Simulator is exploited to generate a previously “predicted” dialogue state based on the ground-truth previous dialogue state during training; (2) a Slot Detector is proposed to determine the slots with an incorrect value in the previously “predicted” state and the slots whose values are to be updated in the current turn; (3) a State Generator takes the name of the above-selected slots as a prompt to generate the current state.Empirical results show that our approach achieves 67.51%, 68.24%, 70.30%, 71.38%, and 81.27% joint goal accuracy on MultiWOZ 2.0-2.4 datasets, respectively, and achieves a new state-of-the-art performance with significant improvements.",Dialogue State Tracking (DST) "We present a method for performing zero-shot Dialogue State Tracking (DST) by casting the task as a learning-to-ask-questions framework. The framework learns to pair the best question generation (QG) strategy with in-domain question answering (QA) methods to extract slot values from a dialogue without any human intervention. A novel self-supervised QA pretraining step using in-domain data is essential to learn the structure without requiring any slot-filling annotations. Moreover, we show that QG methods need to be aligned with the same grammatical person used in the dialogue. Empirical evaluation on the MultiWOZ 2.1 dataset demonstrates that our approach, when used alongside robust QA models, outperforms existing zero-shot methods in the challenging task of zero-shot cross domain adaptation-given a comparable amount of domain knowledge during data creation. Finally, we analyze the impact of the types of questions used, and demonstrate that the algorithmic approach outperforms template-based question generation.",Dialogue State Tracking (DST) "Different from traditional task-oriented and open-domain dialogue systems, insurance agents aim to engage customers for helping them satisfy specific demands and emotional companionship. As a result, customer-to-agent dialogues are usually very long, and many turns of them are pure chit-chat without any useful marketing clues. This brings challenges to dialogue state tracking task in insurance marketing. To deal with these long and sparse dialogues, we propose a new dialogue state tracking architecture containing three components: dialogue encoder, Smart History Collector (SHC) and dialogue state classifier. SHC, a deliberately designed memory network, effectively selects relevant dialogue history via slot-attention, and then updates dialogue history memory. With SHC, our model is able to keep track of the vital information and filter out pure chit-chat. Experimental results demonstrate that our proposed LS-DST significantly outperforms the state-of-the-art baselines on real insurance dialogue dataset.",Dialogue State Tracking (DST) "Few-shot dialogue state tracking (DST) model tracks user requests in dialogue with reliable accuracy even with a small amount of data. In this paper, we introduce an ontology-free few-shot DST with self-feeding belief state input. The self-feeding belief state input increases the accuracy in multi-turn dialogue by summarizing previous dialogue. Also, we newly developed a slot-gate auxiliary task. This new auxiliary task helps classify whether a slot is mentioned in the dialogue. Our model achieved the best score in a few-shot setting for four domains on multiWOZ 2.0.",Dialogue State Tracking (DST) "Task-oriented dialogue systems depend on dialogue state tracking to keep track of the intentions of users in the course of conversations. Although recent models in dialogue state tracking exhibit good performance, the errors in predicting the value of each slot at the current dialogue turn of these models are easily carried over to the next turn, and unlikely to be revised in the next turn, resulting in error propagation. In this paper, we propose a revisable state prediction for dialogue state tracking, which constructs a two-stage slot value prediction process composed of an original prediction and a revising prediction. The original prediction process jointly models the previous dialogue state and dialogue context to predict the original dialogue state of the current dialogue turn. Then, in order to avoid the errors existing in the original dialogue state continuing to the next dialogue turn, a revising prediction process utilizes the dialogue context to revise errors, alleviating the error propagation. Experiments are conducted on MultiWOZ 2.0, MultiWOZ 2.1, and MultiWOZ 2.4 and results indicate that our model outperforms previous state-of-the-art works, achieving new state-of-the-art performances with 56.35, 58.09, and 75.65% joint goal accuracy, respectively, which has a significant improvement (2.15, 1.73, and 2.03%) over the previous best results.",Dialogue State Tracking (DST) "This paper focuses on end-to-end task-oriented dialogue systems, which jointly handle dialogue state tracking (DST) and response generation. Traditional methods usually adopt a supervised paradigm to learn DST from a manually labeled corpus. However, the annotation of the corpus is costly, time-consuming, and cannot cover a wide range of domains in the real world. To solve this problem, we propose a multi-span prediction network (MSPN) that performs unsupervised DST for end-to-end task-oriented dialogue. Specifically, MSPN contains a novel split-merge copy mechanism that captures long-term dependencies in dialogues to automatically extract multiple text spans as keywords. Based on these keywords, MSPN uses a semantic distance based clustering approach to obtain the values of each slot. In addition, we propose an ontology-based reinforcement learning approach, which employs the values of each slot to train MSPN to generate relevant values. Experimental results on single-domain and multi-domain task-oriented dialogue datasets show that MSPN achieves state-of-the-art performance with significant improvements. Besides, we construct a new Chinese dialogue dataset MeDial in the low-resource medical domain, which further demonstrates the adaptability of MSPN.",Dialogue State Tracking (DST) "The technological development in current era demands the need of Artificial Intelligence (AI) in all fields. The AI in medical field is not an exception for various real time applications as per user demands. The applications are medical report summarization, image captioning, Visual Question Answering (VQA) and Visual Question Generation (VQG). ImageCLEF is one of the forum which constantly conducing the challenges in these applications. In this paper, for the given MEDVQA-GI dataset, three medical VQA and one medical VQG models are proposed. The medical VQA models are developed using VisionTransformer (ViT), SegFormer and VisualBERT techniques through a combination of eighteen QA-pairs based on categories and resulted an accuracy of 95.6%, 95.7% and 62.4% respectively. Also, the proposed medical VQG model is developed using Category based Medical Visual Question Generation (CMVQG) technique only.",Visual QA (VQA) "Earth vision research typically focuses on extracting geospatial object locations and categories but neglects the exploration of relations between objects and comprehensive reasoning. Based on city planning needs, we develop a multi-modal multi-task VQA dataset (EarthVQA) to advance relational reasoning-based judging, counting, and comprehensive analysis. The EarthVQA dataset contains 6000 images, corresponding semantic masks, and 208,593 QA pairs with urban and rural governance requirements embedded. As objects are the basis for complex relational reasoning, we propose a Semantic OBject Awareness framework (SOBA) to advance VQA in an object-centric way. To preserve refined spatial locations and semantics, SOBA leverages a segmentation network for object semantics generation. The object-guided attention aggregates object interior features via pseudo masks, and bidirectional cross-attention further models object external relations hierarchically. To optimize object counting, we propose a numerical difference loss that dynamically adds difference penalties, unifying the classification and regression tasks. Experimental results show that SOBA outperforms both advanced general and remote sensing methods. We believe this dataset and framework provide a strong benchmark for Earth vision's complex analysis.",Visual QA (VQA) "Text-VQA aims at answering questions that require understanding the textual cues in an image. Despite the great progress of existing Text-VQA methods, their performance suffers from insufficient human-labeled question-answer (QA) pairs. However, we observe that, in general, the scene text is not fully exploited in the existing datasets -- only a small portion of the text in each image participates in the annotated QA activities. This results in a huge waste of useful information. To address this deficiency, we develop a new method to generate high-quality and diverse QA pairs by explicitly utilizing the existing rich text available in the scene context of each image. Specifically, we propose, TAG, a text-aware visual question-answer generation architecture that learns to produce meaningful, and accurate QA samples using a multimodal transformer. The architecture exploits underexplored scene text information and enhances scene understanding of Text-VQA models by combining the generated QA pairs with the initial training data. Extensive experimental results on two well-known Text-VQA benchmarks (TextVQA and ST-VQA) demonstrate that our proposed TAG effectively enlarges the training data that helps improve the Text-VQA performance without extra labeling effort. Moreover, our model outperforms state-of-the-art approaches that are pre-trained with extra large-scale data.",Visual QA (VQA) "Visual Question Answering can be a functionally relevant task if purposed as such. In this paper, we aim to investigate and evaluate its efficacy in terms of localization-based question answering. We do this specifically in the context of autonomous driving where this functionality is important. To achieve our aim, we provide a new dataset, Auto-QA. Our new dataset is built over the Argoverse dataset and provides a truly multi-modal setting with seven views per frame and point-cloud LIDAR data being available for answering a localization-based question. We contribute localized attention adaptations of most popular VQA baselines and evaluate them on this task. We also provide joint point-cloud and image-based baselines that perform well on this task. An additional evaluation that we perform is to analyse whether the attention module is accurate or not for the image-based VQA baselines. To summarize, through this work we thoroughly analyze the localization abilities through visual question answering for autonomous driving and provide a new benchmark task for the same. Our best joint baseline model achieves a useful 74.8% accuracy on this task.",Visual QA (VQA) "Recently, 3D vision-and-language tasks have attracted increasing research interest. Compared to other vision-and-language tasks, the 3D visual question answering (VQA) task is less exploited and is more susceptible to language priors and co-reference ambiguity. Meanwhile, a couple of recently proposed 3D VQA datasets do not well support 3D VQA task due to their limited scale and annotation methods. In this work, we formally define and address a 3D grounded question answering (GQA) task by collecting a new 3D VQA dataset, referred to as flexible and explainable 3D GQA (FE-3DGQA), with diverse and relatively free-form question-answer pairs, as well as dense and completely grounded bounding box annotations. To achieve more explainable answers, we label the objects appeared in the complex QA pairs with different semantic types, including answer-grounded objects (both appeared and not appeared in the questions), and contextual objects for answer-grounded objects. We also propose a new 3D VQA framework to effectively predict the completely visually grounded and explainable answer. Extensive experiments verify that our newly collected benchmark datasets can be effectively used to evaluate various 3D VQA methods from different aspects and our newly proposed framework also achieves the state-of-the-art performance on the new benchmark dataset.",Visual QA (VQA) "To contribute to automating the medical vision-language model, we propose a novel Chest-Xray Different Visual Question Answering (VQA) task. Given a pair of main and reference images, this task attempts to answer several questions on both diseases and, more importantly, the differences between them. This is consistent with the radiologist's diagnosis practice that compares the current image with the reference before concluding the report. We collect a new dataset, namely MIMIC-Diff-VQA, including 700,703 QA pairs from 164,324 pairs of main and reference images. Compared to existing medical VQA datasets, our questions are tailored to the Assessment-Diagnosis-Intervention-Evaluation treatment procedure used by clinical professionals. Meanwhile, we also propose a novel expert knowledge-aware graph representation learning model to address this task. The proposed baseline model leverages expert knowledge such as anatomical structure prior, semantic, and spatial knowledge to construct a multi-relationship graph, representing the image differences between two images for the image difference VQA task.",Visual QA (VQA) "Visual Question Answering (VQA) is one of the most important tasks in autonomous driving, which requires accurate recognition and complex situation evaluations. How-ever, datasets annotated in a QA format, which guarantees precise language generation and scene recognition from driving scenes, have not been established yet. In this work, we introduce Markup-QA, a novel dataset annotation technique in which QAs are enclosed within markups. This approach facilitates the simultaneous evaluation of a model's capabilities in sentence generation and VQA. Moreover, using this annotation methodology, we designed the NuScenes-MQA dataset. This dataset empowers the development of vision language models, especially for autonomous driving tasks, by focusing on both descriptive capabilities and precise QA.",Visual QA (VQA) "Visual Question Answering (VQA) deep-learning systems tend to capture superficial statistical correlations in the training data because of strong language priors and fail to generalize to test data with a significantly different question-answer (QA) distribution. To address this issue, we introduce a self-critical training objective that ensures that visual explanations of correct answers match the most influential image regions more than other competitive answer candidates. The influential regions are either determined from human visual/textual explanations or automatically from just significant words in the question and answer. We evaluate our approach on the VQA generalization task using the VQA-CP dataset, achieving a new state-of-the-art i.e., 49.5% using textual explanations and 48.5% using automatically annotated regions.",Visual QA (VQA) "Despite Visual Question Answering (VQA) has realized impressive progress over the last few years, today's VQA models tend to capture superficial linguistic correlations in the train set and fail to generalize to the test set with different QA distributions. To reduce the language biases, several recent works introduce an auxiliary question-only model to regularize the training of targeted VQA model, and achieve dominating performance on VQA-CP. However, since the complexity of design, current methods are unable to equip the ensemble-based models with two indispensable characteristics of an ideal VQA model: 1) visual-explainable: the model should rely on the right visual regions when making decisions. 2) question-sensitive: the model should be sensitive to the linguistic variations in question. To this end, we propose a model-agnostic Counterfactual Samples Synthesizing (CSS) training scheme. The CSS generates numerous counterfactual training samples by masking critical objects in images or words in questions, and assigning different ground-truth answers. After training with the complementary samples (ie, the original and generated samples), the VQA models are forced to focus on all critical objects and words, which significantly improves both visual-explainable and question-sensitive abilities. In return, the performance of these models is further boosted. Extensive ablations have shown the effectiveness of CSS. Particularly, by building on top of the model LMH, we achieve a record-breaking performance of 58.95% on VQA-CP v2, with 6.5% gains.",Visual QA (VQA) "While models for Visual Question Answering (VQA) have steadily improved over the years, interacting with one quickly reveals that these models lack consistency. For instance, if a model answers “red” to “What color is the balloon?”, it might answer “no” if asked, “Is the balloon red?”. These responses violate simple notions of entailment and raise questions about how effectively VQA models ground language. In this work, we introduce a dataset, ConVQA, and metrics that enable quantitative evaluation of consistency in VQA. For a given observable fact in an image (e.g. the balloon’s color), we generate a set of logically consistent question-answer (QA) pairs (e.g. Is the balloon red?) and also collect a human-annotated set of common-sense based consistent QA pairs (e.g. Is the balloon the same color as tomato sauce?). Further, we propose a consistency-improving data augmentation module, a Consistency Teacher Module (CTM). CTM automatically generates entailed (or similar-intent) questions for a source QA pair and fine-tunes the VQA model if the VQA’s answer to the entailed question is consistent with the source QA pair. We demonstrate that our CTM-based training improves the consistency of VQA models on the Con-VQA datasets and is a strong baseline for further research.",Visual QA (VQA) "Recent state-of-the-art open-domain QA models are typically based on a two stage retriever-reader approach in which the retriever first finds the relevant knowledge/passages and the reader then leverages that to predict the answer. Prior work has shown that the performance of the reader usually tends to improve with the increase in the number of these passages. Thus, state-of-the-art models use a large number of passages (e.g. 100) for inference. While the reader in this approach achieves high prediction performance, its inference is computationally very expensive. We humans, on the other hand, use a more efficient strategy while answering: firstly, if we can confidently answer the question using our already acquired knowledge then we do not even use the external knowledge, and in the case when we do require external knowledge, we don't read the entire knowledge at once, instead, we only read that much knowledge that is sufficient to find the answer. Motivated by this procedure, we ask a research question""Can the open-domain QA reader utilize external knowledge efficiently like humans without sacrificing the prediction performance?""Driven by this question, we explore an approach that utilizes both 'closed-book' (leveraging knowledge already present in the model parameters) and 'open-book' inference (leveraging external knowledge). Furthermore, instead of using a large fixed number of passages for open-book inference, we dynamically read the external knowledge in multiple 'knowledge iterations'. Through comprehensive experiments on NQ and TriviaQA datasets, we demonstrate that this dynamic reading approach improves both the 'inference efficiency' and the 'prediction accuracy' of the reader. Comparing with the FiD reader, this approach matches its accuracy by utilizing just 18.32% of its reader inference cost and also outperforms it by achieving up to 55.10% accuracy on NQ Open.",Open-Domain QA "The goal of the open-domain table QA task is to answer a question based on retrieving and extracting information from a large corpus of structured tables. Currently, the accuracy of the most popular framework in open-domain QA: the two-stage retrieval, is limited by the table retriever. Inspired by the research on Text-to-SQL, this paper proposes to use execution guidance to enhance the effect of table retrieval. Our contributions are mainly threefold: 1. Proposed using execution-guided method to enhance table retrieval to fully leveraging schema information of tables. 2. Proposed the pure Text-to-SQL task for open domains. We design a two-stage Table QA framework based on semantic parsing to generate logical forms and answers simultaneously. 3. Proposed an open-domain Text-to-SQL dataset: Open-domain WikiSQL. We change the original WikiSQL to become suitable for the Open-domain setting, by removing the approximate tables, decontextualizing the questions, etc. We conducted experiments on the new dataset using BM25 and DPR as the retriever, and HydraNet as the generator of SQL. The results show that the execute-guided significantly improves the table retrieval by 19% (DPR in hit@1) and achieves good performance (accuracy of logical form and execution improves by 12.7% and 13.1%) on end-to-end open-domain Text-to-SQL tasks as well.",Open-Domain QA "Although counterfactual reasoning is a fundamental aspect of intelligence, the lack of large-scale counterfactual open-domain question-answering (QA) benchmarks makes it difficult to evaluate and improve models on this ability. To address this void, we introduce the first such dataset, named IfQA, where each question is based on a counterfactual presupposition via an""if""clause. For example, if Los Angeles was on the east coast of the U.S., what would be the time difference between Los Angeles and Paris? Such questions require models to go beyond retrieving direct factual knowledge from the Web: they must identify the right information to retrieve and reason about an imagined situation that may even go against the facts built into their parameters. The IfQA dataset contains over 3,800 questions that were annotated annotated by crowdworkers on relevant Wikipedia passages. Empirical analysis reveals that the IfQA dataset is highly challenging for existing open-domain QA methods, including supervised retrieve-then-read pipeline methods (EM score 36.2), as well as recent few-shot approaches such as chain-of-thought prompting with GPT-3 (EM score 27.4). The unique challenges posed by the IfQA benchmark will push open-domain QA research on both retrieval and counterfactual reasoning fronts.",Open-Domain QA "Existing state-of-the-art methods for open-domain question-answering (ODQA) use an open book approach in which information is first retrieved from a large text corpus or knowledge base (KB) and then reasoned over to produce an answer. A recent alternative is to retrieve from a collection of previously-generated question-answer pairs; this has several practical advantages including being more memory and compute-efficient. Question-answer pairs are also appealing in that they can be viewed as an intermediate between text and KB triples: like KB triples, they often concisely express a single relationship, but like text, have much higher coverage than traditional KBs. In this work, we describe a new QA system that augments a text-to-text model with a large memory of question-answer pairs, and a new pre-training task for the latent step of question retrieval. The pre-training task substantially simplifies training and greatly improves performance on smaller QA benchmarks. Unlike prior systems of this sort, our QA system can also answer multi-hop questions that do not explicitly appear in the collection of stored question-answer pairs.",Open-Domain QA "In recent years, extensive state-of-the-art research has been conducted on natural language processing (NLP) issues. This includes improved text generation and text comprehension models. These solutions are deeply data dependent, as models use high-quality data. The need for more data in a particular language severely restricts the number of available datasets. This investigation proposes methodology for creating conversational datasets (MCCD), designed to extract multi-turn and multi-user conversational datasets. MCCD can obtain data from existing sources and identify multiple answers to the same message to create conversation flows for the extracted datasets. MCCD creates larger datasets suited to question answering (Questions & Answers (QA)) of open-domain conversational agents. In addition, this article proposes a tool based on MCCD to assist future researchers and applications. Our software tool was applied to extract two human conversation datasets. The evaluation of our methodology and resulted datasets was conducted based on the training of a Portuguese NLP model. We explored the outcome models in a classification task, obtaining better results than a state-of-the-art models.",Open-Domain QA "Deep NLP models have been shown to be brittle to input perturbations. Recent work has shown that data augmentation using counterfactuals — i.e. minimally perturbed inputs — can help ameliorate this weakness. We focus on the task of creating counterfactuals for question answering, which presents unique challenges related to world knowledge, semantic diversity, and answerability. To address these challenges, we develop a Retrieve-Generate-Filter(RGF) technique to create counterfactual evaluation and training data with minimal human supervision. Using an open-domain QA framework and question generation model trained on original task data, we create counterfactuals that are fluent, semantically diverse, and automatically labeled. Data augmentation with RGF counterfactuals improves performance on out-of-domain and challenging evaluation sets over and above existing methods, in both the reading comprehension and open-domain QA settings. Moreover, we find that RGF data leads to significant improvements in a model’s robustness to local perturbations.",Open-Domain QA "While research on explaining predictions of open-domain QA systems (ODQA) is gaining momentum, most works do not evaluate whether these explanations improve user trust. Furthermore, many users interact with ODQA using voice -assistants, yet prior works exclusively focus on visual displays, risking (as we also show) incorrectly extrapolating the effectiveness of explanations across modalities. To better understand the effectiveness of ODQA explanations strategies in the wild, we conduct user studies that measure whether explanations help users correctly decide when to accept or reject an ODQA system’s answer. Unlike prior work, we control for explanation modality , i.e. , whether they are communicated to users through a spoken or visual interface, and contrast effectiveness across modalities. We show that explanations derived from retrieved evidence can outperform strong base-lines across modalities but the best explanation strategy varies with the modality. We show common failure cases of current explanations, emphasize end-to-end evaluation of explanations, and caution against evaluating them in proxy modalities that differ from deployment.",Open-Domain QA "Question answering (QA) is a critical task for speech-based retrieval from knowledge sources, by sifting only the answers without requiring to read supporting documents. Specifically, open-domain QA aims to answer user questions on unrestricted knowledge sources. Ideally, adding a source should not decrease the accuracy, but we find this property (denoted as""monotonicity"") does not hold for current state-of-the-art methods. We identify the cause, and based on that we propose Judge-Specialist framework. Our framework consists of (1) specialist retrievers/readers to cover individual sources, and (2) judge, a dedicated language model to select the final answer. Our experiments show that our framework not only ensures monotonicity, but also outperforms state-of-the-art multi-source QA methods on Natural Questions. Additionally, we show that our models robustly preserve the monotonicity against noise from speech recognition.",Open-Domain QA "Although open-domain question answering (QA) draws great attention in recent years, it requires large amounts of resources for building the full system and it is often difficult to reproduce previous results due to complex configurations. In this paper, we introduce SF-QA: simple and fair evaluation framework for open-domain QA. SF-QA framework modularizes the pipeline open-domain QA system, which makes the task itself easily accessible and reproducible to research groups without enough computing resources. The proposed evaluation framework is publicly available and anyone can contribute to the code and evaluations.",Open-Domain QA "Ambiguous questions persist in open-domain question answering, because formulating a precise question with a unique answer is often challenging. Previously, Min et al. (2020) have tackled this issue by generating disambiguated questions for all possible interpretations of the ambiguous question. This can be effective, but not ideal for providing an answer to the user. Instead, we propose to ask a clarification question, where the user's response will help identify the interpretation that best aligns with the user's intention. We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions, each with relevant passages, possible answers, and a clarification question. The clarification questions were efficiently created by generating them using InstructGPT and manually revising them as necessary. We then define a pipeline of tasks and design appropriate evaluation metrics. Lastly, we achieve 61.3 F1 on ambiguity detection and 40.5 F1 on clarification-based QA, providing strong baselines for future work.",Open-Domain QA "In recent years, multiple-choice Visual Question Answering (VQA) has become topical and achieved remarkable progress. However, most pioneer multiple-choice VQA models are heavily driven by statistical correlations in datasets, which cannot perform well on multimodal understanding and suffer from poor generalization. In this paper, we identify two kinds of spurious correlations, i.e., a Vision-Answer bias (VA bias) and a Question-Answer bias (QA bias). To systematically and scientifically study these biases, we construct a new video question answering (videoQA) benchmark NExT-OOD in OOD setting and propose a graph-based cross-sample method for bias reduction. Specifically, the NExT-OOD is designed to quantify models’ generalizability and measure their reasoning ability comprehensively. It contains three sub-datasets including NExT-OOD-VA, NExT-OOD-QA, and NExT-OOD-VQA, which are designed for the VA bias, QA bias, and VA&QA bias, respectively. We evaluate several existing multiple-choice VQA models on our NExT-OOD, and illustrate that their performance degrades significantly compared with the results obtained on the original multiple-choice VQA dataset. Besides, to mitigate the VA bias and QA bias, we explicitly consider the cross-sample information and design a contrastive graph matching loss in our approach, which provides adequate debiasing guidance from the perspective of whole dataset, and encourages the model to focus on multimodal contents instead of spurious statistical regularities. Extensive experimental results illustrate that our method significantly outperforms other bias reduction strategies, demonstrating the effectiveness and generalizability of the proposed approach.",Multiple Choice QA (MCQA) "Question answer (QA) system is closely related to NLP and IR tasks. An automated QA system should understand the semantics of question and derive answers relevant to it. In case of MCQ system this tasks becomes difficult as the model needs to understand the semantics and select an answer from a given choice. In this paper we propose a ensemble approach to predict answers to Multiple choice question using LSTM model, hybrid LSTM –Convolution NN model and Multilayer Perception (MLP) model. Firstly, by using LSTM and hybrid LSTM-CNN models are trained parallel. Multilayer Perception is used to predict option to training dataset separately. The 8thGr-NDMC datasets are selected for model evaluation and comparison. The 8th GR-NDMC is used for experimentation purpose. The observed results demonstrate that the proposed approach performs better than some other single forecasting models.",Multiple Choice QA (MCQA) "The recent success of machine learning systems on various QA datasets could be interpreted as a significant improvement in models’ language understanding abilities. However, using various perturbations, multiple recent works have shown that good performance on a dataset might not indicate performance that correlates well with human’s expectations from models that “understand” language. In this work we consider a top performing model on several Multiple Choice Question Answering (MCQA) datasets, and evaluate it against a set of expectations one might have from such a model, using a series of zero-information perturbations of the model’s inputs. Our results show that the model clearly falls short of our expectations, and motivates a modified training approach that forces the model to better attend to the inputs. We show that the new training paradigm leads to a model that performs on par with the original model while better satisfying our expectations.",Multiple Choice QA (MCQA) "Open-domain question answering (QA) involves many knowledge and reasoning challenges, but are successful QA models actually learning such knowledge when trained on benchmark QA tasks? We investigate this via several new diagnostic tasks probing whether multiple-choice QA models know definitions and taxonomic reasoning—two skills widespread in existing benchmarks and fundamental to more complex reasoning. We introduce a methodology for automatically building probe datasets from expert knowledge sources, allowing for systematic control and a comprehensive evaluation. We include ways to carefully control for artifacts that may arise during this process. Our evaluation confirms that transformer-based multiple-choice QA models are already predisposed to recognize certain types of structural linguistic knowledge. However, it also reveals a more nuanced picture: their performance notably degrades even with a slight increase in the number of “hops” in the underlying taxonomic hierarchy, and with more challenging distractor candidates. Further, existing models are far from perfect when assessed at the level of clusters of semantically connected probes, such as all hypernym questions about a single concept.",Multiple Choice QA (MCQA) "Data contamination in model evaluation has become increasingly prevalent with the growing popularity of large language models. It allows models to""cheat""via memorisation instead of displaying true capabilities. Therefore, contamination analysis has become an crucial part of reliable model evaluation to validate results. However, existing contamination analysis is usually conducted internally by large language model developers and often lacks transparency and completeness. This paper presents an extensive data contamination report for over 15 popular large language models across six popular multiple-choice QA benchmarks. We also introduce an open-source pipeline that enables the community to perform contamination analysis on customised data and models. Our experiments reveal varying contamination levels ranging from 1\% to 45\% across benchmarks, with the contamination degree increasing rapidly over time. Performance analysis of large language models indicates that data contamination does not necessarily lead to increased model metrics: while significant accuracy boosts of up to 14\% and 7\% are observed on contaminated C-Eval and Hellaswag benchmarks, only a minimal increase is noted on contaminated MMLU. We also find larger models seem able to gain more advantages than smaller models on contaminated test sets.",Multiple Choice QA (MCQA) "This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.",Multiple Choice QA (MCQA) "This paper introduces MedMCQA, a new large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. More than 194k high-quality AIIMS \&NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity. Each sample contains a question, correct answer(s), and other options which requires a deeper language understanding as it tests the 10+ reasoning abilities of a model across a wide range of medical subjects \&topics. A detailed explanation of the solution, along with the above information, is provided in this study.",Multiple Choice QA (MCQA) "In a spoken multiple-choice question answering (MCQA) task, where passages, questions, and choices are given in the form of speech, usually only the auto-transcribed text is considered in system development. The acoustic-level information may contain useful cues for answer prediction. However, to the best of our knowledge, only a few studies focus on using the acoustic-level information or fusing the acoustic-level information with the text-level information for a spoken MCQA task. Therefore, this paper presents a hierarchical multistage multimodal (HMM) framework based on convolutional neural networks (CNNs) to integrate text- and acoustic-level statistics into neural modeling for spoken MCQA. Specifically, the acoustic-level statistics are expected to offset text inaccuracies caused by automatic speech recognition (ASR) systems or representation inadequacy lurking in word embedding generators, thereby making the spoken MCQA system robust. In the proposed HMM framework, two modalities are first manipulated to separately derive the acoustic- and text-level representations for the passage, question, and choices. Next, these clever features are jointly involved in inferring the relationships among the passage, question, and choices. Then, a final representation is derived for each choice, which encodes the relationship of the choice to the passage and question. Finally, the most likely answer is determined based on the individual final representations of all choices. Evaluated on the data of “Formosa Grand Challenge - Talk to AI”, a Mandarin Chinese spoken MCQA contest held in 2018, the proposed HMM framework achieves remarkable improvements in accuracy over the text-only baseline.",Multiple Choice QA (MCQA) "We introduce KorMedMCQA, the first Korean multiple-choice question answering (MCQA) benchmark derived from Korean healthcare professional licensing examinations, covering from the year 2012 to year 2023. This dataset consists of a selection of questions from the license examinations for doctors, nurses, and pharmacists, featuring a diverse array of subjects. We conduct baseline experiments on various large language models, including proprietary/open-source, multilingual/Korean-additional pretrained, and clinical context pretrained models, highlighting the potential for further enhancements. We make our data publicly available on HuggingFace (https://huggingface.co/datasets/sean0042/KorMedMCQA) and provide a evaluation script via LM-Harness, inviting further exploration and advancement in Korean healthcare environments.",Multiple Choice QA (MCQA) "Unsupervised question answering is a promising yet challenging task, which alleviates the burden of building large-scale annotated data in a new domain. It motivates us to study the unsupervised multiple-choice question answering (MCQA) problem. In this paper, we propose a novel framework designed to generate synthetic MCQA data barely based on contexts from the universal domain without relying on any form of manual annotation. Possible answers are extracted and used to produce related questions, then we leverage both named entities (NE) and knowledge graphs to discover plausible distractors to form complete synthetic samples. Experiments on multiple MCQA datasets demonstrate the effectiveness of our method.",Multiple Choice QA (MCQA) "Due to the enormous and exponential advancement in the online social network, the triad of Facebook, Twitter and Whatsapp posed a great challenge in the form of fake news in front of us. In recent years many events like false propaganda of the ‘US presidential election’, opinion spamming in ‘Brexit referendum’, and long-tail series of viral rumors after many natural calamities around the world, created a lot of chaos and law and order problem. Simultaneously, this rapid explosion of fake news also attracted the attention of different researchers to investigate the real cause of it and thus to developed some tools and techniques to relieve and discover the Rumors across online media as soon as possible. In this regard, the Machine Learning (ML) algorithms and Natural Language Processing (NLP) algorithms emerged as the remarkably vital and essential tool to detect fake news in the current age. NLP when aided with machine learning produced many remarkable results that were possible just by manual fact-checking or by normal text detection process. We have systematically discussed the role of NLP and machine learning in the fake news detection process, and various detection techniques based on these. Basic terminology of NLP and machine learning too explained in brief. At last, we gave light on the future trends, open issues, challenges, and potential research oriented toward NLP and ML-based approaches.",NLP for Social Media "One prominent dark side of online information behavior is the spreading of rumors. The feature analysis and crowd identification of social media rumor refuters based on machine learning methods can shed light on the rumor refutation process. This paper analyzed the association between user features and rumor refuting behavior in five main rumor categories: economics, society, disaster, politics, and military. Natural language processing (NLP) techniques are applied to quantify the user’s sentiment tendency and recent interests. Then, those results were combined with other personalized features to train an XGBoost classification model, and potential refuters can be identified. Information from 58,807 Sina Weibo users (including their 646,877 microblogs) for the five anti-rumor microblog categories was collected for model training and feature analysis. The results revealed that there were significant differences between rumor stiflers and refuters, as well as between refuters for different categories. Refuters tended to be more active on social media and a large proportion of them gathered in more developed regions. Tweeting history was a vital reference as well, and refuters showed higher interest in topics related with the rumor refuting message. Meanwhile, features such as gender, age, user labels and sentiment tendency also varied between refuters considering categories.",NLP for Social Media "Social media has become a major source of information for healthcare professionals but due to the growing volume of data in unstructured format, analyzing these resources accurately has become a challenge. In this study, we trained health related NER and classification models on different datasets published within the Social Media Mining for Health Applications (#SMM4H 2022) workshop. Transformer based Bert for Token Classification and Bert for Sequence Classification algorithms as well as vanilla NER and text classification algorithms from Spark NLP library were utilized during this study without changing the underlying DL architecture. The trained models are available within a production-grade code base as part of the Spark NLP library; can scale up for training and inference in any Spark cluster; has GPU support and libraries for popular programming languages such as Python, R, Scala and Java.",NLP for Social Media "Information about individuals can help to better understand what they say, particularly in social media where texts are short. Current approaches to modelling social media users pay attention to their social connections, but exploit this information in a static way, treating all connections uniformly. This ignores the fact, well known in sociolinguistics, that an individual may be part of several communities which are not equally relevant in all communicative situations. We present a model based on Graph Attention Networks that captures this observation. It dynamically explores the social graph of a user, computes a user representation given the most relevant connections for a target task, and combines it with linguistic information to make a prediction. We apply our model to three different tasks, evaluate it against alternative models, and analyse the results extensively, showing that it significantly outperforms other current methods.",NLP for Social Media "From the day internet came into existence, the era of social networking sprouted. In the beginning, no one may have thought the internet would be a host of numerous amazing services the social networking. Today we can say that online applications and social networking websites have become a non-separable part of one’s life. Many people from diverse age groups spend hours daily on such websites. Despite thoughtlet is emotionally connected through media, these facilities bring along big threats with them such as cyber-attacks, which includes include lying. As social networking sites are increasing, cyberbullying is increasing day by day. To identify word similarities in the tweets made by bullies and make use of machine learning and can develop an ML model that automatically detects social media bullying actions. However, many social media bullying detection techniques have been implemented, but many of them were textual based. Under this background and motivation, it can help to prevent the happen of cyberbullying if we can develop relevant techniques to discover cyberbullying in social media. A machine learning model is proposed to detect and prevent bullying on Twitter. Naïve Bayes is used for training and testing social media bullying content.",NLP for Social Media "Social media data become an integral part in the business data and should be integrated into the decisional process for better decision making based on information which reflects better the true situation of business in any field. However, social media data are unstructured and generated in very high frequency which exceeds the capacity of the data warehouse. In this work, we propose to extend the data warehousing process with a staging area which heart is a large scale system implementing an information extraction process using Storm and Hadoop frameworks to better manage their volume and frequency. Concerning structured information extraction, mainly events, we combine a set of techniques from NLP, linguistic rules and machine learning to succeed the task. Finally, we propose the adequate data warehouse conceptual model for events modeling and integration with enterprise data warehouse using an intermediate table called Bridge table. For application and experiments, we focus on drug abuse events extraction from Twitter data and their modeling into the Event Data Warehouse.",NLP for Social Media "Participatory moments on social media platforms increasingly add up to something more substantial. Communicating our thoughts and feelings about the book through shared observations, appraisals, and illustrative examples. For instance, the data posted on social media platforms like Twitter can be mined for insights into users' values, beliefs, and emotions. The author's perspective can be better understood through the lens of sentiment analysis. Almost all studies of social media's massive user base have looked at how users' sentiments can be broken down into positive, negative, and neutral categories. In this project, we've set out to define the phrases in terms of four distinct emotional states: joy, rage, fear, and melancholy. There have been a lot of approaches implemented in the field of dynamic textual sentiment recognition in the event of further interactions, but not nearly enough of them were based on intensive training. In this research, we elaborate on a game-changing deep learning-based method (RNN+LSTM) for dealing with a variety of problems associated with emotion distribution by making use of informative data. We present a novel method for translating it to a binary distribution and a standard machine-learning classification problem, and we employ a comprehensive knowledge technique to settle the reconstructed matter. In terms of classification accuracy, our hybrid approach will prevail over more conventional ml methods.",NLP for Social Media "Social media is an appropriate source for analyzing public attitudes towards the COVID-19 vaccine and various brands. Nevertheless, there are few relevant studies. In the research, we collected tweet posts by the UK and US residents from the Twitter API during the pandemic and designed experiments to answer three main questions concerning vaccination. To get the dominant sentiment of the civics, we performed sentiment analysis by VADER and proposed a new method that can count the individual's influence. This allows us to go a step further in sentiment analysis and explain some of the fluctuations in the data changing. The results indicated that celebrities could lead the opinion shift on social media in vaccination progress. Moreover, at the peak, nearly 40\% of the population in both countries have a negative attitude towards COVID-19 vaccines. Besides, we investigated how people's opinions toward different vaccine brands are. We found that the Pfizer vaccine enjoys the most popular among people. By applying the sentiment analysis tool, we discovered most people hold positive views toward the COVID-19 vaccine manufactured by most brands. In the end, we carried out topic modelling by using the LDA model. We found residents in the two countries are willing to share their views and feelings concerning the vaccine. Several death cases have occurred after vaccination. Due to these negative events, US residents are more worried about the side effects and safety of the vaccine.",NLP for Social Media "Profanity is socially offensive language, which may also be called cursing, cussing, swearing, or expletives. Nowadays where everything is digitally managed, there are lots of online platforms and forums which people use. If we take an example of any social media platform like Twitter, their privacy policy suggests that users cannot share or write any obscene/vulgar language on a public platform. Several corporate and research organizations discuss how such content is found and controlled, such as computer vision research has developed to detect illegal practices in public spaces, NLP has progressed to detect profanity in social media texts. However, existing profanity detection systems still remain flawed because of various factors. In this paper, we define and analyze the system which will use NLP and Machine learning approach to solve this. It is usually framed as a supervised learning problem. Generic features such as Bag-Of-Words or embeddings systematically deliver fair success in classification. Lexical resources in combination with models such as Linear Support Vector Machine (SVM); feature modeling specific linguistic constructs making it more effective in classification.",NLP for Social Media "Despite its relevance, the maturity of NLP for social media pales in comparison with general-purpose models, metrics and benchmarks. This fragmented landscape makes it hard for the community to know, for instance, given a task, which is the best performing model and how it compares with others. To alleviate this issue, we introduce a unified benchmark for NLP evaluation in social media, SuperTweetEval, which includes a heterogeneous set of tasks and datasets combined, adapted and constructed from scratch. We benchmarked the performance of a wide range of models on SuperTweetEval and our results suggest that, despite the recent advances in language modelling, social media remains challenging.",NLP for Social Media "The Amount of legal information that is being produced on a daily basis in the law courts is increasing enormously and nowadays this information is available in electronic form also. The application of various machine learning and deep learning methods for processing of legal documents has been receiving considerate attention over the last few years. Legal document classification, translation, summarization, contract review, case prediction and information retrieval are some of the tasks that have received concentrated efforts from the research community. In this survey, we have performed a comprehensive study of various deep learning methods applied in the legal domain and classified various legal tasks into three broad categories, viz. legal data search, legal text analytics and legal intelligent interfaces. The proposed study suggests that deep learning models like CNNs, RNNs, LSTM and GRU, and multi-task deep learning models are being used actively to solve wide variety of legal tasks and are giving state-of-the-art performance.",NLP for the Legal Domain "Claims, disputes, and litigations are major legal issues in construction projects, which often result in cost overruns, delays, and adverse working relationships among the contracting parties. Recent advances in natural language processing (NLP) techniques offer great potentials that can process voluminous unstructured data from legal documents to draw insightful information about the root causes of issues and prevention strategies. Several efforts have been undertaken in the last decades that used NLP to tackle a wide range of problems related to legal issues in construction such as the quality review of contracts and the identification of common patterns in legal cases. The research line on NLP-based techniques for analyzing legal texts of construction projects has progressed well recently; it, however, is still in the early stage. This paper aims to perform a critical review of recently published articles to analyze the achievements and limitations of the state of the art on NLP-based approaches to address common legal issues associated with legal documents arising across different project stages. The study also provides a roadmap for future research to expand the adoption of NLP for the processing of legal texts in construction.",NLP for the Legal Domain "LexNLP is an open source Python package focused on natural language processing and machine learning for legal and regulatory text. The package includes functionality to (i) segment documents, (ii) identify key text such as titles and section headings, (iii) extract over eighteen types of structured information like distances and dates, (iv) extract named entities such as companies and geopolitical entities, (v) transform text into features for model training, and (vi) build unsupervised and supervised models such as word embedding or tagging models. LexNLP includes pre-trained models based on thousands of unit tests drawn from real documents available from the SEC EDGAR database as well as various judicial and regulatory proceedings. LexNLP is designed for use in both academic research and industrial applications, and is distributed at the following GitHub repository: https://github.com/LexPredict/lexpredict-lexnlp.",NLP for the Legal Domain "With the evolution of time, problem, and expectation of human beings, advancement of science and technology has facilitated scientific analysis of bulk dataset to generate desired output. This approach of bulk data analysis may be specifically implemented using Machine Learning and Data Analytics, which are the sub-domains of Artificial Intelligence (AI). The application of this cutting-edge technology can improve the efficiency of multivariate service sectors having societal significance (like legal system, education system, public transportation, rural healthcare management, etc), which directly or indirectly affect the well-being and productivity of an individual and the society as a whole. For example, India being a developing nation suffers due to insufficient number of judges, advocates, infrastructure, etc, and for which people have to wait long time to cherish their desired justice. In this paper, authors have proposed Machine Learning and Text Analytics-based legal support system to assist judges and advocates for faster delivery of justice to citizen.",NLP for the Legal Domain "Natural language processing (NLP) methods for analyzing legal text offer legal scholars and practitioners a range of tools allowing to empirically analyze law on a large scale. However, researchers seem to struggle when it comes to identifying ethical limits to using NLP systems for acquiring genuine insights both about the law and the systems' predictive capacity. In this paper we set out a number of ways in which to think systematically about such issues. We place emphasis on three crucial normative parameters which have, to the best of our knowledge, been underestimated by current debates: (a) the importance of academic freedom, (b) the existence of a wide diversity of legal and ethical norms domestically but even more so internationally and (c) the threat of moralism in research related to computational law. For each of these three parameters we provide specific recommendations for the legal NLP community. Our discussion is structured around the study of a real-life scenario that has prompted recent debate in the legal NLP research community.",NLP for the Legal Domain "The EU-funded project Lynx focuses on the creation of a knowledge graph for the legal domain (Legal Knowledge Graph, LKG) and its use for the semantic processing, analysis and enrichment of documents from the legal domain. This article describes the use cases covered in the project, the entire developed platform and the semantic analysis services that operate on the documents.",NLP for the Legal Domain "In the last years, the legal domain has been revolutionized by the use of Information and Communication Technologies, producing large amount of digital information. Legal practitioners’ needs, then, in browsing these repositories has required to investigate more efficient retrieval methods, that assume more relevance because digital information is mostly unstructured. In this paper we analyze the state-of-the-art of artificial intelligence approaches for legal domain, focusing on Legal Information Retrieval systems based on Natural Language Processing, Machine Learning and Knowledge Extraction techniques. Finally, we also discuss challenges – mainly focusing on retrieving similar cases, statutes or paragraph for supporting latest cases’ analysis – and open issues about Legal Information Retrieval systems.",NLP for the Legal Domain "We present LEDGAR, a multilabel corpus of legal provisions in contracts. The corpus was crawled and scraped from the public domain (SEC filings) and is, to the best of our knowledge, the first freely available corpus of its kind. Since the corpus was constructed semi-automatically, we apply and discuss various approaches to noise removal. Due to the rather large labelset of over 12’000 labels annotated in almost 100’000 provisions in over 60’000 contracts, we believe the corpus to be of interest for research in the field of Legal NLP, (large-scale or extreme) text classification, as well as for legal studies. We discuss several methods to sample subcopora from the corpus and implement and evaluate different automatic classification approaches. Finally, we perform transfer experiments to evaluate how well the classifiers perform on contracts stemming from outside the corpus.",NLP for the Legal Domain "Legal documents are unstructured, use legal jargon, and have considerable length, making them difficult to process automatically via conventional text processing techniques. A legal document processing system would benefit substantially if the documents could be segmented into coherent information units. This paper proposes a new corpus of legal documents annotated (with the help of legal experts) with a set of 13 semantically coherent units labels (referred to as Rhetorical Roles), e.g., facts, arguments, statute, issue, precedent, ruling, and ratio. We perform a thorough analysis of the corpus and the annotations. For automatically segmenting the legal documents, we experiment with the task of rhetorical role prediction: given a document, predict the text segments corresponding to various roles. Using the created corpus, we experiment extensively with various deep learning-based baseline models for the task. Further, we develop a multitask learning (MTL) based deep model with document rhetorical role label shift as an auxiliary task for segmenting a legal document. The proposed model shows superior performance over the existing models. We also experiment with model performance in the case of domain transfer and model distillation techniques to see the model performance in limited data conditions.",NLP for the Legal Domain "We evaluated the capability of a state-of-the-art generative pretrained transformer (GPT) model to perform semantic annotation of short text snippets (one to few sentences) coming from legal documents of various types. Discussions of potential uses (e.g., document drafting, summarization) of this emerging technology in legal domain have intensified, but to date there has not been a rigorous analysis of these large language models' (LLM) capacity in sentence-level semantic annotation of legal texts in zero-shot learning settings. Yet, this particular type of use could unlock many practical applications (e.g., in contract review) and research opportunities (e.g., in empirical legal studies). We fill the gap with this study. We examined if and how successfully the model can semantically annotate small batches of short text snippets (10-50) based exclusively on concise definitions of the semantic types. We found that the GPT model performs surprisingly well in zero-shot settings on diverse types of documents (F1 = .73 on a task involving court opinions, .86 for contracts, and .54 for statutes and regulations). These findings can be leveraged by legal scholars and practicing lawyers alike to guide their decisions in integrating LLMs in wide range of workflows involving semantic annotation of legal texts.",NLP for the Legal Domain "Prompt engineering, as an efficient and effective way to leverage Large Language Models (LLM), has drawn a lot of attention from the research community. The existing research primarily emphasizes the importance of adapting prompts to specific tasks, rather than specific LLMs. However, a good prompt is not solely defined by its wording, but also binds to the nature of the LLM in question. In this work, we first quantitatively demonstrate that different prompts should be adapted to different LLMs to enhance their capabilities across various down-stream tasks in NLP. Then we novelly propose a model-adaptive prompt optimizer (MAPO) method that optimizes the original prompts for each specific LLM in downstream tasks. Extensive experiments indicate that the proposed method can effectively refine prompts for an LLM, leading to significant improvements over various downstream tasks.",Prompt Engineering "Software requirement classification is a longstanding and important problem in requirement engineering. Previous studies have applied various machine learning techniques for this problem, including Support Vector Machine (SVM) and decision trees. With the recent popularity of NLP technique, the state-of-the-art approach NoRBERT utilizes the pre-trained language model BERT and achieves a satisfactory performance. However, the dataset PROMISE used by the existing approaches for this problem consists of only hundreds of requirements that are outdated according to today’s technology and market trends. Besides, the NLP technique applied in these approaches might be obsolete. In this paper, we propose an approach of prompt learning for requirement classification using BERT-based pretrained language models (PRCBERT), which applies flexible prompt templates to achieve accurate requirements classification. Experiments conducted on two existing small-size requirement datasets (PROMISE and NFR-Review) and our collected large-scale requirement dataset NFR-SO prove that PRCBERT exhibits moderately better classification performance than NoRBERT and MLM-BERT (BERT with the standard prompt template). On the de-labeled NFR-Review and NFR-SO datasets, Trans_PRCBERT (the version of PRCBERT which is fine-tuned on PROMISE) is able to have a satisfactory zero-shot performance with 53.27% and 72.96% F1-score when enabling a self-learning strategy.",Prompt Engineering "In recent years, the advancement of Large Language Models (LLMs) has garnered significant attention in the field of Artificial Intelligence (AI), exhibiting exceptional performance across a wide variety of natural language processing (NLP) tasks. However, despite the high generality of LLMs, there exists a problem in controlling them to produce the desired output for each task. Fine-tuning is a conventional approach to improve performance for specific tasks, albeit at the expense of substantial time and computational resources. Prompt engineering serves as an effective alternative, steering models towards desired outputs for particular tasks, and has been validated to enhance the performance of LLMs. However, manual design of prompts is labor-intensive, which has increased interest in the automation of prompt engineering. In this study, we propose a method to automate prompt engineering optimization utilizing a genetic algorithm with novel genetic operators. Through experiments conducted to explore instructional prompts for solving Japanese multiple-choice questions, the efficacy of the proposed method was affirmed. The findings of this study underscore the feasibility of genetic algorithm-based automatic prompt engineering and genetic operators for prompts, and show their efficacy for Japanese, which has distinct linguistic characteristics compared to English and other languages.",Prompt Engineering "Abstract Previous work in prompt engineering for large language models has introduced different gradient-free probability-based prompt selection methods that aim to choose the optimal prompt among the candidates for a given task but have failed to provide a comprehensive and fair comparison between each other. In this paper, we propose a unified framework to interpret and evaluate the existing probability-based prompt selection methods by performing extensive experiments on 13 common and diverse NLP tasks. We find that each of the existing methods can be interpreted as some variant of the method that maximizes mutual information between the input and the predicted output (MI). Utilizing this finding, we develop several other combinatorial variants of MI and increase the effectiveness of the oracle prompt selection method from 87.79% to 94.98%, measured as the ratio of the performance of the selected prompt to that of the optimal oracle prompt. Furthermore, considering that all the methods rely on the output probability distribution of the model that might be biased, we propose a novel calibration method called Calibration by Marginalization (CBM) that is orthogonal to the existing methods and helps increase the prompt selection effectiveness of the best method to 96.85%, achieving 99.44% of the oracle prompt F1 without calibration.1",Prompt Engineering "In the domain of Natural Language Processing (NLP), the technique of prompt engineering is a strategic method utilized to guide the responses of models such as ChatGPT. This research explores the intricacies of prompt engineering, with a specific focus on its effects on the quality of summaries generated by ChatGPT 3.5, an openly accessible chatbot developed by OpenAI. The study encompasses a comprehensive examination of 110 summaries produced from ten diverse paragraphs, employing eleven distinct summarization prompts under zero-shot setting. Evaluation is conducted using the BERT Score, a metric that offers a more contextually relevant assessment of summary quality. This study introduces an innovative approach to appraising the quality of summaries, setting it apart from prior investigations and delivering valuable insights into the nuances of prompt engineering's role within the NLP landscape. Ultimately, this inquiry illuminates the strengths and weaknesses associated with various prompts and their influence on ChatGPT 3.5's summarization capabilities, thereby making a significant contribution to the constantly evolving field of NLP and automated text summarization.",Prompt Engineering "Inspired by human cognition, Jiang et al.(2023c) create a benchmark for assessing LLMs' lateral thinking-thinking outside the box. Building upon this benchmark, we investigate how different prompting methods enhance LLMs' performance on this task to reveal their inherent power for outside-the-box thinking ability. Through participating in SemEval-2024, task 9, Sentence Puzzle sub-task, we explore prompt engineering methods: chain of thoughts (CoT) and direct prompting, enhancing with informative descriptions, and employing contextualizing prompts using a retrieval augmented generation (RAG) pipeline. Our experiments involve three LLMs including GPT-3.5, GPT-4, and Zephyr-7B-beta. We generate a dataset of thinking paths between riddles and options using GPT-4, validated by humans for quality. Findings indicate that compressed informative prompts enhance performance. Dynamic in-context learning enhances model performance significantly. Furthermore, fine-tuning Zephyr on our dataset enhances performance across other commonsense datasets, underscoring the value of innovative thinking.",Prompt Engineering "Automated theorem proving can benefit a lot from methods employed in natural language processing, knowledge graphs and information retrieval: this non-trivial task combines formal languages understanding, reasoning, similarity search. We tackle this task by enhancing semantic similarity ranking with prompt engineering, which has become a new paradigm in natural language understanding. None of our approaches requires additional training. Despite encouraging results reported by prompt engineering approaches for a range of NLP tasks, for the premise selection task vanilla re-ranking by prompting GPT-3 doesn’t outperform semantic similarity ranking with SBERT, but merging of the both rankings shows better results.",Prompt Engineering "Foundation AI models have emerged as powerful pre-trained models on a large scale, capable of seamlessly handling diverse tasks across multiple domains with minimal or no fine-tuning. These models, exemplified by the impressive achievements of GPT-3 and BERT in natural language processing (NLP), as well as CLIP and DALL-E in computer vision, have garnered considerable attention for their exceptional performance. A noteworthy addition to the realm of image segmentation is the Segment Anything Model (SAM), a foundation AI model that revolutionizes image segmentation. With a single click or a natural language prompt, SAM exhibits the remarkable ability to segment any object within an image, marking a significant paradigm shift in medical image segmentation. Unlike conventional approaches that rely on labeled data and domain-specific knowledge, SAM breaks free from these constraints. Deep convolutional neural network (DCNN)-based, SAM comprises an image encoder, a prompt encoder, and a mask decoder, showcasing its efficient and flexible architecture. Medical image segmentation, in particular, benefits from SAM’s exceptional speed and high-quality segmentation. In this paper, we delve into the effectiveness of SAM for medical image segmentation shedding light on its capabilities. Moreover, our investigation explores the strengths and limitations of prompt engineering in medical computer vision applications, not only encompassing SAM but also other foundation AI models. Through this exploration, we unravel their immense potential in catalyzing a paradigm shift in the field of medical imaging.",Prompt Engineering "Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners. However, their effectiveness depends mainly on scaling the model parameters and prompt design, hindering their implementation in most real-world applications. This study proposes a novel pluggable, extensible, and efficient approach named DifferentiAble pRompT (DART), which can convert small language models into better few-shot learners without any prompt engineering. The main principle behind this approach involves reformulating potential natural language processing tasks into the task of a pre-trained language model and differentially optimizing the prompt template as well as the target label with backpropagation. Furthermore, the proposed approach can be: (i) Plugged to any pre-trained language models; (ii) Extended to widespread classification tasks. A comprehensive evaluation of standard NLP tasks demonstrates that the proposed approach achieves a better few-shot performance. Code is available in https://github.com/zjunlp/DART.",Prompt Engineering "State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for new tasks requires experimentation. Different prompt templates with different wording choices lead to significant accuracy differences. PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task. The tool then allows easy deployment of the newly created ad-hoc models. We demonstrate the utility of PromptIDE (demo: http://prompt.vizhub.ai) and our workflow using several real-world use cases.",Prompt Engineering "Automatic identification and expansion of ambiguous abbreviations are essential for biomedical natural language processing applications, such as information retrieval and question answering systems. In this paper, we present DEep Contextualized Biomedical. Abbreviation Expansion (DECBAE) model. DECBAE automatically collects substantial and relatively clean annotated contexts for 950 ambiguous abbreviations from PubMed abstracts using a simple heuristic. Then it utilizes BioELMo to extract the contextualized features of words, and feed those features to abbreviation-specific bidirectional LSTMs, where the hidden states of the ambiguous abbreviations are used to assign the exact definitions. Our DECBAE model outperforms other baselines by large margins, achieving average accuracy of 0.961 and macro-F1 of 0.917 on the dataset. It also surpasses human performance for expanding a sample abbreviation, and remains robust in imbalanced, low-resources and clinical settings.",Acronyms and Abbreviations Detection and Expansion "Acronyms are commonly used in human language as alternative forms of concepts to increase recognition, to reduce duplicate references to the same concept, and to stress important concepts. There are no standard rules for acronym creation; therefore, both machine-based acronym identification and acronym resolution are highly prone to error. This might be resolved by a human computation approach, which can take advantage of knowledge external to the document collection. Using three text collections with different properties, we compare a machine-based algorithm with a crowdsourcing approach to identify acronyms. We then perform acronym resolution using these two approaches, plus a game-based approach. The crowd and game-based methods outperform the machine algorithm, even when external information is not used. Also, crowd and game formats offered similar performance with a difference in cost.",Acronyms and Abbreviations Detection and Expansion "Hypernym and synonym matching are one of the mainstream Natural Language Processing (NLP) tasks. In this paper, we present systems that attempt to solve this problem. We designed these systems to participate in the FinSim-3, a shared task of FinNLP workshop at IJCAI-2021. The shared task is focused on solving this problem for the financial domain. We experimented with various transformer based pre-trained embeddings by fine-tuning these for either classification or phrase similarity tasks. We also augmented the provided dataset with abbreviations derived from prospectus provided by the organizers and definitions of the financial terms from DBpedia [Auer et al., 2007], Investopedia, and the Financial Industry Business Ontology (FIBO). Our best performing system uses both FinBERT [Araci, 2019] and data augmentation from the afore-mentioned sources. We observed that term expansion using data augmentation in conjunction with semantic similarity is beneficial for this task and could be useful for the other tasks that deal with short phrases. Our best performing model (Accuracy: 0.917, Rank: 1.156) was developed by fine-tuning SentenceBERT [Reimers et al., 2019] (with FinBERT at the backend) over an extended labelled set created using the hierarchy of labels present in FIBO.",Acronyms and Abbreviations Detection and Expansion "The current study aimed to explore the linguistic analysis of neologism related to Coronavirus (COVID-19). Recently, a new coronavirus disease COVID-19 has emerged as a respiratory infection with significant concern for global public health hazards. However, with each passing day, more and more confirmed cases are being reported worldwide which has alarmed the global authorities including the World Health Organization (WHO). In this study, the researcher uses the term neologism which means the coinage of new words. Neologism played a significant role throughout the history of epidemic and pandemic. The focus of this study is on the phenomenon of neologism to explore the creation of new words during the outbreak of COVID-19. The theoretical framework of this study is based on three components of neologism, i.e. word formation, borrowing, and lexical deviation. The researcher used the model of neologism as a research tool which is presented by Krishnamurthy in 2010. The study is also compared with the theory of onomasiology by Pavol Stekauer (1998). The secondary data have been used in this study. The data were collected from articles, books, Oxford Corpus, social media, and five different websites and retrieved from January 2020 to April 2020. The findings of this study revealed that with the outbreak of COVID-19, the majority of the people on social media and state briefings, the word-formation is utilized in the form of nouns, adjectives, and verbs. The abbreviations and acronyms are also used which are related to the current situation of COVID-19. No doubt, neologisms present colorful portrayals of various social and cultural practices of respective societies the rationale behind them all remains the same.",Acronyms and Abbreviations Detection and Expansion "The prevalence of ambiguous acronyms make scientific documents harder to understand for humans and machines alike, presenting a need for models that can automatically identify acronyms in text and disambiguate their meaning. We introduce new methods for acronym identification and disambiguation: our acronym identification model projects learned token embeddings onto tag predictions, and our acronym disambiguation model finds training examples with similar sentence embeddings as test examples. Both of our systems achieve significant performance gains over previously suggested methods, and perform competitively on the SDU@AAAI-21 shared task leaderboard. Our models were trained in part on new distantly-supervised datasets for these tasks which we call AuxAI and AuxAD. We also identified a duplication conflict issue in the SciAD dataset, and formed a deduplicated version of SciAD that we call SciAD-dedupe. We publicly released all three of these datasets, and hope that they help the community make further strides in scientific document understanding.",Acronyms and Abbreviations Detection and Expansion "Abbreviations and acronyms are shortened forms of words or phrases that are commonly used in technical writing. In this study we focus specifically on abbreviations and introduce a corpus-based method for their expansion. The method divides the processing into three key stages: abbreviation identification, full form candidate extraction, and abbreviation disambiguation. First, potential abbreviations are identified by combining pattern matching and named entity recognition. Both acronyms and abbreviations exhibit similar orthographic properties, thus additional processing is required to distinguish between them. To this end, we implement a character-based recurrent neural network (RNN) that analyses the morphology of a given token in order to classify it as an acronym or an abbreviation. A siamese RNN that learns the morphological process of word abbreviation is then used to select a set of full form candidates. Having considerably constrained the search space, we take advantage of the Word Mover’s Distance (WMD) to assess semantic compatibility between an abbreviation and each full form candidate based on their contextual similarity. This step does not require any corpus-based training, thus making the approach highly adaptable to different domains. Unlike the vast majority of existing approaches, our method does not rely on external lexical resources for disambiguation, but with a macro F-measure of 96.27% is comparable to the state-of-the art.",Acronyms and Abbreviations Detection and Expansion "Acronyms are the short forms of phrases that facilitate conveying lengthy sentences in documents and serve as one of the mainstays of writing. Due to their importance, identifying acronyms and corresponding phrases (i.e., acronym identification (AI)) and finding the correct meaning of each acronym (i.e., acronym disambiguation (AD)) are crucial for text understanding. Despite the recent progress on this task, there are some limitations in the existing datasets which hinder further improvement. More specifically, limited size of manually annotated AI datasets or noises in the automatically created acronym identification datasets obstruct designing advanced high-performing acronym identification models. Moreover, the existing datasets are mostly limited to the medical domain and ignore other domains. In order to address these two limitations, we first create a manually annotated large AI dataset for scientific domain. This dataset contains 17,506 sentences which is substantially larger than previous scientific AI datasets. Next, we prepare an AD dataset for scientific domain with 62,441 samples which is significantly larger than the previous scientific AD dataset. Our experiments show that the existing state-of-the-art models fall far behind human-level performance on both datasets proposed by this work. In addition, we propose a new deep learning model that utilizes the syntactical structure of the sentence to expand an ambiguous acronym in a sentence. The proposed model outperforms the state-of-the-art models on the new AD dataset, providing a strong baseline for future research on this dataset.",Acronyms and Abbreviations Detection and Expansion "Nowadays, there is an increasing tendency for using acronyms in technical texts, which has led to ambiguous acronyms with different possible expansions. Diversity of expansions of a single acronym makes recognizing its expansion a challenging task. Replacing acronyms with incorrect expansions will lead to problems in text mining procedures, namely text normalization, summarization, machine translation, and tech-mining. Tech-mining involves exploring and analyzing technical texts to recognize the relations between technologies. This paper is aimed at proposing a method for building a dataset that meets the requirements for training acronym disambiguation models in technical texts. In this paper, challenges in automatic acronym disambiguation are presented. We have proposed a method for building the dataset and the accuracy of the acronym disambiguation model is 86%.",Acronyms and Abbreviations Detection and Expansion "In biomedical domain, abbreviations are appearing more and more frequently in various data sets, which has caused significant obstacles to biomedical big data analysis. The dictionary-based approach has been adopted to process abbreviations, but it cannot handle ad hoc abbreviations, and it is impossible to cover all abbreviations. To overcome these drawbacks, this paper proposes an automatic abbreviation expansion method called LMAAE (Language Model-based Automatic Abbreviation Expansion). In this method, the abbreviation is firstly divided into blocks; then, expansion candidates are generated by restoring each block; and finally, the expansion candidates are filtered and clustered to acquire the final expansion result according to the language model and clustering method. Through restrict the abbreviation to prefix abbreviation, the search space of expansion is reduced sharply. And then, the search space is continuous reduced by restrained the effective and the length of the partition. In order to validate the effective of the method, two types of experiments are designed. For standard abbreviations, the expansion results include most of the expansion in dictionary. Therefore, it has a high precision. For ad hoc abbreviations, the precisions of schema matching, knowledge fusion are increased by using this method to handle the abbreviations. Although the recall for standard abbreviation needs to be improved, but this does not affect the good complement effect for the dictionary method.",Acronyms and Abbreviations Detection and Expansion "The adoption of Electronic Health Record (EHR) and other e-health infrastructures over the years has been characterized by an increase in medical errors. This is primarily a result of the widespread usage of medical acronyms and abbreviations with multiple possible senses (i.e., ambiguous acronyms). The advent of Artificial Intelligence (AI) technology, specifically Natural Language Processing (NLP), has presented a promising avenue for tackling the intricate issue of automatic sense resolution of acronyms. Notably, the application of Machine Learning (ML) techniques has proven to be highly effective in the development of systems aimed at this objective, garnering significant attention and interest within the research and industry domains in recent years. The significance of automating the resolution of medical acronym senses cannot be overstated, especially in the context of modern healthcare delivery with the widespread use of EHR. However, it is disheartening to note that comprehensive studies examining the global adoption of EHR, assessing the impact of acronym usage on medical errors within EHR systems, and reporting on the latest trends and advancements in ML-based NLP solutions for disambiguating medical acronyms remain severely limited. In this current study, we present a detailed overview on medical error, its origins, unintended effects, and EHR-related errors as a subclass of clinical error. Furthermore, this paper investigates the adoption of EHR systems in developed and developing nations, as well as the review concludes with an examination of various artificial intelligence techniques, particularly machine learning algorithms for medical acronym and abbreviation disambiguation in EHRs.",Acronyms and Abbreviations Detection and Expansion "The article is focused on automatic development and ranking of a large corpus for Russian paraphrase generation which proves to be the first corpus of such type in Russian computational linguistics. Existing manually annotated paraphrase datasets for Russian are limited to small-sized ParaPhraser corpus and ParaPlag which are suitable for a set of NLP tasks, such as paraphrase and plagiarism detection, sentence similarity and relatedness estimation, etc. Due to size restrictions, these datasets can hardly be applied in end-to-end text generation solutions. Meanwhile, paraphrase generation requires a large amount of training data. In our study we propose a solution to the problem: we collect, rank and evaluate a new publicly available headline paraphrase corpus (ParaPhraser Plus), and then perform text generation experiments with manual evaluation on automatically ranked corpora using the Universal Transformer architecture.",Paraphrase and Rephrase Generation "Paraphrase generation is a fundamental problem in natural language processing. Due to the significant success of transfer learning, the “pre-training → fine-tuning” approach has become the standard. However, popular general pre-training methods typically require extensive datasets and great computational resources, and the available pre-trained models are limited by fixed architecture and size. The authors have proposed a simple and efficient approach to pre-training specifically for paraphrase generation, which noticeably improves the quality of paraphrase generation and ensures substantial enhancement of general-purpose models. They have used existing public data and new data generated by large language models. The authors have investigated how this pre-training procedure impacts neural networks of various architectures and demonstrated its efficiency across all architectures.",Paraphrase and Rephrase Generation "Paraphrasing is a process to restate the meaning of a text or a passage using different words in the same language to give a clearer understanding of the original sentence to the readers. Paraphrasing is important in many natural language processing tasks such as plagiarism detection, information retrieval, and machine translation. In this article, we describe our work in paraphrasing Chinese idioms by using the definitions from dictionaries. The definitions of the idioms will be reworded and then scored to find the best paraphrase candidates to be used for the given context. With the proposed approach to paraphrase Chinse idioms in sentences, the BLEU was 75.69%, compared to the baseline approach that was 66.34%.",Paraphrase and Rephrase Generation "Paraphrase generation is a fundamental and long-standing task in natural language processing. In this paper, we concentrate on two contributions to the task: (1) we propose Retrieval Augmented Prompt Tuning (RAPT) as a parameter-efficient method to adapt large pre-trained language models for paraphrase generation; (2) we propose Novelty Conditioned RAPT (NC-RAPT) as a simple model-agnostic method of using specialized prompt tokens for controlled paraphrase generation with varying levels of lexical novelty. By conducting extensive experiments on four datasets, we demonstrate the effectiveness of the proposed approaches for retaining the semantic content of the original text while inducing lexical novelty in the generation.",Paraphrase and Rephrase Generation "A noun compound is a sequence of contiguous nouns that acts as a single noun, although the predicate denoting the semantic relation between its components is dropped. Noun Compound Interpretation is the task of uncovering the relation, in the form of a preposition or a free paraphrase. Prepositional paraphrasing refers to the use of preposition to explain the semantic relation, whereas free paraphrasing refers to invoking an appropriate predicate denoting the semantic relation. In this paper, we propose an unsupervised methodology for these two types of paraphrasing. We use pre-trained contextualized language models to uncover the ‘missing’ words (preposition or predicate). These language models are usually trained to uncover the missing word/words in a given input sentence. Our approach uses templates to prepare the input sequence for the language model. The template uses a special token to indicate the missing predicate. As the model has already been pre-trained to uncover a missing word (or a sequence of words), we exploit it to predict missing words for the input sequence. Our experiments using four datasets show that our unsupervised approach (a) performs comparably to supervised approaches for prepositional paraphrasing, and (b) outperforms supervised approaches for free paraphrasing. Paraphrasing (prepositional or free) using our unsupervised approach is potentially helpful for NLP tasks like machine translation and information extraction.",Paraphrase and Rephrase Generation "This article presents a method extending an existing French corpus of paraphrases of medical terms ANONYMOUS with new data from Web archives created during the Covid-19 pandemic. Our method semi-automatically detects new terms and paraphrase markers introducing paraphrases from these Web archives, followed by a manual annotation step to identify paraphrases and their lexical and semantic properties. The extended large corpus LARGEMED could be used for automatic medical text simplification for patients and their families. To automatise data collection, we propose two experiments. The first experiment uses the new LARGEMED dataset to train a binary classifier aiming to detect new sentences containing possible paraphrases. The second experiment aims to use correct paraphrases to train a model for paraphrase generation, by adapting T5 Language Model to the paraphrase generation task using an adversarial algorithm.",Paraphrase and Rephrase Generation "Inducing diversity in the task of paraphrasing is an important problem in NLP with applications in data augmentation and conversational agents. Previous paraphrasing approaches have mainly focused on the issue of generating semantically similar paraphrases while paying little attention towards diversity. In fact, most of the methods rely solely on top-k beam search sequences to obtain a set of paraphrases. The resulting set, however, contains many structurally similar sentences. In this work, we focus on the task of obtaining highly diverse paraphrases while not compromising on paraphrasing quality. We provide a novel formulation of the problem in terms of monotone submodular function maximization, specifically targeted towards the task of paraphrasing. Additionally, we demonstrate the effectiveness of our method for data augmentation on multiple tasks such as intent classification and paraphrase recognition. In order to drive further research, we have made the source code available.",Paraphrase and Rephrase Generation "In this work, we propose TGLS, a novel framework for unsupervised Text Generation by Learning from Search. We start by applying a strong search algorithm (in particular, simulated annealing) towards a heuristically defined objective that (roughly) estimates the quality of sentences. Then, a conditional generative model learns from the search results, and meanwhile smooth out the noise of search. The alternation between search and learning can be repeated for performance bootstrapping. We demonstrate the effectiveness of TGLS on two real-world natural language generation tasks, unsupervised paraphrasing and text formalization. Our model significantly outperforms unsupervised baseline methods in both tasks. Especially, it achieves comparable performance to strong supervised methods for paraphrase generation.",Paraphrase and Rephrase Generation "In phrase generation (PG), a sentence in the natural language is changed into a new one with a different syntactic structure but having the same semantic meaning. The present sequence-to-sequence strategy aims to recall the words and structures from the training dataset rather than learning the words' semantics. As a result, the resulting statements are frequently grammatically accurate but incorrect linguistically. The neural machine translation approach suffers to handle unusual words, domain mismatch, and unfamiliar words, but it takes context well. This work presents a novel model for creating paraphrases that use neural-based statistical machine translation (NSMT). Our approach creates potential paraphrases for any source input, calculates the level of semantic similarity between text segments of any length, and encodes paraphrases in a continuous space. To evaluate the suggested model, Quora Question Pair and Microsoft Common Objects in Context benchmark datasets are used. We demonstrate that the proposed technique achieves cutting-edge performance on both datasets using automatic and human assessments. Experimental findings across tasks and datasets demonstrate that the suggested NSMT-based PG outperforms those achieved with traditional phrase-based techniques. We also show that the proposed technique may be used automatically for the development of paraphrases for a variety of languages.",Paraphrase and Rephrase Generation "Existing methods for Dialogue Response Generation (DRG) in Task-oriented Dialogue Systems (TDSs) can be grouped into two categories: template-based and corpus-based. The former prepare a collection of response templates in advance and fill the slots with system actions to produce system responses at runtime. The latter generate system responses token by token by taking system actions into account. While template-based DRG provides high precision and highly predictable responses, they usually lack in terms of generating diverse and natural responses when compared to (neural) corpus-based approaches. Conversely, while corpus-based DRG methods are able to generate natural responses, we cannot guarantee their precision or predictability. Moreover, the diversity of responses produced by today's corpus-based DRG methods is still limited. We propose to combine the merits of template-based and corpus-based DRGs by introducing a prototype-based, paraphrasing neural network, called P2-Net, which aims to enhance quality of the responses in terms of both precision and diversity. Instead of generating a response from scratch, P2-Net generates system responses by paraphrasing template-based responses. To guarantee the precision of responses, P2-Net learns to separate a response into its semantics, context influence, and paraphrasing noise, and to keep the semantics unchanged during paraphrasing. To introduce diversity, P2-Net randomly samples previous conversational utterances as prototypes, from which the model can then extract speaking style information. We conduct extensive experiments on the MultiWOZ dataset with both automatic and human evaluations. The results show that P2-Net achieves a significant improvement in diversity while preserving the semantics of responses.",Paraphrase and Rephrase Generation "Named entity recognition (NER) is a widely studied task in natural language processing. Recently, a growing number of studies have focused on the nested NER. The span-based methods, considering the entity recognition as a span classification task, can deal with nested entities naturally. But they suffer from the huge search space and the lack of interactions between entities. To address these issues, we propose a novel sequence-to-set neural network for nested NER. Instead of specifying candidate spans in advance, we provide a fixed set of learnable vectors to learn the patterns of the valuable spans. We utilize a non-autoregressive decoder to predict the final set of entities in one pass, in which we are able to capture dependencies between entities. Compared with the sequence-to-sequence method, our model is more suitable for such unordered recognition task as it is insensitive to the label order. In addition, we utilize the loss function based on bipartite matching to compute the overall training loss. Experimental results show that our proposed model achieves state-of-the-art on three nested NER corpora: ACE 2004, ACE 2005 and KBP 2017.",NER for Nested Entities "Here we describe a new clinical corpus rich in nested entities and a series of neural models to identify them. The corpus comprises de-identified referrals from the waiting list in Chilean public hospitals. A subset of 5,000 referrals (58.6% medical and 41.4% dental) was manually annotated with 10 types of entities, six attributes, and pairs of relations with clinical relevance. In total, there are 110,771 annotated tokens. A trained medical doctor or dentist annotated these referrals, and then, together with three other researchers, consolidated each of the annotations. The annotated corpus has 48.17% of entities embedded in other entities or containing another one. We use this corpus to build models for Named Entity Recognition (NER). The best results were achieved using a Multiple Single-entity architecture with clinical word embeddings stacked with character and Flair contextual embeddings. The entity with the best performance is abbreviation, and the hardest to recognize is finding. NER models applied to this corpus can leverage statistics of diseases and pending procedures. This work constitutes the first annotated corpus using clinical narratives from Chile and one of the few in Spanish. The annotated corpus, clinical word embeddings, annotation guidelines, and neural models are freely released to the community.",NER for Nested Entities "While named entity recognition (NER) is a key task in natural language processing, most approaches only target flat entities, ignoring nested structures which are common in many scenarios. Most existing nested NER methods traverse all sub-sequences which is both expensive and inefficient, and also don't well consider boundary knowledge which is significant for nested entities. In this paper, we propose a joint entity mention detection and typing model via prior boundary knowledge (BoningKnife) to better handle nested NER extraction and recognition tasks. BoningKnife consists of two modules, MentionTagger and TypeClassifier. MentionTagger better leverages boundary knowledge beyond just entity start/end to improve the handling of nesting levels and longer spans, while generating high quality mention candidates. TypeClassifier utilizes a two-level attention mechanism to decouple different nested level representations and better distinguish entity types. We jointly train both modules sharing a common representation and a new dual-info attention layer, which leads to improved representation focus on entity-related information. Experiments over different datasets show that our approach outperforms previous state of the art methods and achieves 86.41, 85.46, and 94.2 F1 scores on ACE2004, ACE2005, and NNE, respectively.",NER for Nested Entities "Named Entity Recognition (NER) is a well and widely studied task in natural language processing. Recently, the nested NER has attracted more attention since its practicality and difficulty. Existing works for nested NER ignore the recognition order and boundary position relation of nested entities. To address these issues, we propose a novel seq2seq model named GPRL, which formulates the nested NER task as an entity triplet sequence generation process. GPRL adopts the reinforcement learning method to generate entity triplets de-coupling the entity order in gold labels and expects to learn a reasonable recognition order of entities via trial and error. Based on statistics of boundary distance for nested entities, GPRL designs a Gaussian prior to represent the boundary distance distribution between nested entities and adjust the out-put probability distribution of nested boundary tokens. Experiments on three nested NER datasets demonstrate that GPRL outperforms previous nested NER models.",NER for Nested Entities "In this article, we propose a new encoding scheme for named entity recognition (NER) called Joined Type-Length encoding (JoinedTL). Unlike most existing named entity encoding schemes, which focus on flat entities, JoinedTL can label nested named entities in a single sequence. JoinedTL uses a packed encoding to represent both type and span of a named entity, which not only results in less tagged tokens compared to existing encoding schemes, but also enables it to support nested NER. We evaluate the effectiveness of JoinedTL for nested NER on three nested NER datasets: GENIA in English, GermEval in German, and PerNest, our newly created nested NER dataset in Persian. We apply CharLSTM+WordLSTM+CRF, a three-layer sequence tagging model on three datasets encoded using JoinedTL and two existing nested NE encoding schemes, i.e., JoinedBIO and JoinedBILOU. Our experiment results show that CharLSTM+WordLSTM+CRF trained with JoinedTL encoded datasets can achieve competitive F1 scores as the ones trained with datasets encoded by two other encodings, but with 27%–48% less tagged tokens. To leverage the power of three different encodings, i.e., JoinedTL, JoinedBIO, and JoinedBILOU, we propose an encoding-based ensemble method for nested NER. Evaluation results show that the ensemble method achieves higher F1 scores on all datasets than the three models each trained using one of the three encodings. By using nested NE encodings including JoinedTL with CharLSTM+WordLSTM+CRF, we establish new state-of-the-art performance with an F1 score of 83.7 on PerNest, 74.9 on GENIA, and 70.5 on GermEval, surpassing two recent neural models specially designed for nested NER.",NER for Nested Entities "Many recent named entity recognition (NER) studies criticize flat NER for its non-overlapping assumption, and switch to investigating nested NER. However, existing nested NER models heavily rely on training data annotated with nested entities, while labeling such data is costly. This study proposes a new subtask, nested-from-flat NER, which corresponds to a realistic application scenario: given data annotated with flat entities only, one may still desire the trained model capable of recognizing nested entities. To address this task, we train span-based models and deliberately ignore the spans nested inside labeled entities, since these spans are possibly unlabeled entities. With nested entities removed from the training data, our model achieves 54.8%, 54.2% and 41.1% F1 scores on the subset of spans within entities on ACE 2004, ACE 2005 and GENIA, respectively. This suggests the effectiveness of our approach and the feasibility of the task. In addition, the model's performance on flat entities is entirely unaffected. We further manually annotate the nested entities in the test set of CoNLL 2003, creating a nested-from-flat NER benchmark. Analysis results show that the main challenges stem from the data and annotation inconsistencies between the flat and nested entities.",NER for Nested Entities "Named entity recognition (NER) aims to extract entities from unstructured text, and a nested structure often exists between entities. However, most previous studies paid more attention to flair named entity recognition while ignoring nested entities. The importance of words in the text should vary for different entity categories. In this paper, we propose a head-to-tail linker for nested NER. The proposed model exploits the extracted entity head as conditional information to locate the corresponding entity tails under different entity categories. This strategy takes part of the symmetric boundary information of the entity as a condition and effectively leverages the information from the text to improve the entity boundary recognition effectiveness. The proposed model considers the variability in the semantic correlation between tokens for different entity heads under different entity categories. To verify the effectiveness of the model, numerous experiments were implemented on three datasets: ACE2004, ACE2005, and GENIA, with F1-scores of 80.5%, 79.3%, and 76.4%, respectively. The experimental results show that our model is the most effective of all the methods used for comparison.",NER for Nested Entities "Nested named entity recognition (NER) is a task in which named entities may overlap with each other. Span-based approaches regard nested NER as a two-stage span enumeration and classification task, thus having the innate ability to handle this task. However, they face the problems of error propagation, ignorance of span boundary, difficulty in long entity recognition and requirement on large-scale annotated data. In this paper, we propose Extract-Select, a span selection framework for nested NER, to tackle these problems. Firstly, we introduce a span selection framework in which nested entities with different input categories would be separately extracted by the extractor, thus naturally avoiding error propagation in two-stage span-based approaches. In the inference phase, the trained extractor selects final results specific to the given entity category. Secondly, we propose a hybrid selection strategy in the extractor, which not only makes full use of span boundary but also improves the ability of long entity recognition. Thirdly, we design a discriminator to evaluate the extraction result, and train both extractor and discriminator with generative adversarial training (GAT). The use of GAT greatly alleviates the stress on the dataset size. Experimental results on four benchmark datasets demonstrate that Extract-Select outperforms competitive nested NER models, obtaining state-of-the-art results. The proposed model also performs well when less labeled data are given, proving the effectiveness of GAT.",NER for Nested Entities "Nested named entity recognition (Nested NER) aims to identify entities with nested structures from the given text, which is a fundamental task in Natural Language Processing. The region-based approach is the current mainstream approach, which first generates candidate spans and then classifies them into predefined categories. However, this method suffers from several drawbacks, including over-reliance on span representation, vulnerability to unbalanced category distribution, and inaccurate span boundary detection. To address these problems, we propose to model the nested NER problem into a head-tail mapping problem, namely, HTMapper, which detects head boundaries first and then models a conditional mapping from head to tail under a given category. Based on this mapping, we can find corresponding tails under different categories for each detected head by enumerating all entity categories. Our approach directly models the head boundary and tail boundary of entities, avoiding over-reliance on the span representation. Additionally, Our approach utilizes category information as an indicator signal to address the imbalance of category distribution during category prediction. Furthermore, our approach enhances the detection of span boundaries by capturing the correlation between head and tail boundaries. Extensive experiments on three nested NER datasets and two flat NER datasets demonstrate that our HTMapper achieves excellent performance with F1 scores of 89.09%, 88.30%, 81.57% on ACE2004,ACE2005, GENIA, and 94.26%, 91.40% on CoNLL03, OntoNotes, respectively.",NER for Nested Entities "We propose two neural network architectures for nested named entity recognition (NER), a setting in which named entities may overlap and also be labeled with more than one label. We encode the nested labels using a linearized scheme. In our first proposed approach, the nested labels are modeled as multilabels corresponding to the Cartesian product of the nested labels in a standard LSTM-CRF architecture. In the second one, the nested NER is viewed as a sequence-to-sequence problem, in which the input sequence consists of the tokens and output sequence of the labels, using hard attention on the word whose label is being predicted. The proposed methods outperform the nested NER state of the art on four corpora: ACE-2004, ACE-2005, GENIA and Czech CNEC. We also enrich our architectures with the recently published contextual embeddings: ELMo, BERT and Flair, reaching further improvements for the four nested entity corpora. In addition, we report flat NER state-of-the-art results for CoNLL-2002 Dutch and Spanish and for CoNLL-2003 English.",NER for Nested Entities