| ID | year | title | abstract |
|---|---|---|---|
| gibert-etal-2025-mind | 2025 | Mind the Gap: Diverse NMT Models for Resource-Constrained Environments | We present fast Neural Machine Translation models for 17 diverse languages, developed using Sequence-level Knowledge Distillation. Our selected languages span multiple language families and scripts, including low-resource languages. The distilled models achieve comparable performance while being 10x faster than t... |
| glisic-etal-2025-testing | 2025 | Testing relevant linguistic features in automatic CEFR skill level classification for Icelandic | This paper explores the use of various linguistic features to develop models for automatic classification of language proficiency on the CEFR scale for Icelandic, a low-resourced and morphologically complex language. We train two classifiers to assess skill level of learner texts. One is used as a baseline and takes in... |
| goot-etal-2025-morsed | 2025 | MorSeD: Morphological Segmentation of Danish and its Effect on Language Modeling | Current language models (LMs) mostly exploit subwords as input units based on statistical co-occurrences of characters. Adjacently, previous work has shown that modeling morphemes can aid performance for Natural Language Processing (NLP) models. However, morphemes are challenging to obtain as there is no annotated data... |
| haglund-bjorklund-2025-opinion | 2025 | Opinion Units: Concise and Contextualized Representations for Aspect-Based Sentiment Analysis | We introduce opinion units, a contribution to the field of Aspect-Based Sentiment Analysis (ABSA) that extends aspect-sentiment pairs by including substantiating excerpts, derived through hybrid abstractive-extractive summarisation. The goal is to provide fine-grained information without sacrificing succinctness and abst... |
| hardarson-etal-2025-aligning | 2025 | Aligning Language Models for Icelandic Legal Text Summarization | The integration of language models in the legal domain holds considerable promise for streamlining processes and improving efficiency in managing extensive workloads. However, the specialized terminology, nuanced language, and formal style of legal texts can present substantial challenges. This study examines whether p... |
| heinecke-etal-2025-question | 2025 | Question-parsing with Abstract Meaning Representation enhanced by adding small datasets | Abstract Meaning Representation (AMR) is a graph-based formalism for representing meaning in sentences. As the annotation is quite complex, few annotated corpora exist. The most well-known and widely-used corpora are LDC's AMR 3.0 and the datasets available on the new AMR website. Models trained on the LDC corpora work... |
| henriksson-etal-2025-finerweb | 2025 | FinerWeb-10BT: Refining Web Data with LLM-Based Line-Level Filtering | Data quality is crucial for training Large Language Models (LLMs). Traditional heuristic filters often miss low-quality text or mistakenly remove valuable content. In this paper, we introduce an LLM-based line-level filtering method to enhance training data quality. We use GPT-4o mini to label a 20,000-document sample ... |
| jorgensen-breitung-2025-margins | 2025 | Margins in Contrastive Learning: Evaluating Multi-task Retrieval for Sentence Embeddings | This paper explores retrieval with sentence embeddings by fine-tuning sentence-transformer models for classification while preserving their ability to capture semantic similarity. To evaluate this balance, we introduce two opposing metrics – polarity score and semantic similarity score – that measure the model's ... |
| kalnaca-etal-2025-database | 2025 | Database of Latvian Morphemes and Derivational Models: ideas and expected results | In this paper, we describe "The Database of Latvian Morphemes and Derivational Models" – a large-scale corpus-based and manually validated database of Latvian derivational morphology currently in development at the University of Latvia. The database contains morpheme-level data ... |
| kapociute-dzikiene-etal-2025-localizing | 2025 | Localizing AI: Evaluating Open-Weight Language Models for Languages of Baltic States | Although large language models (LLMs) have transformed our expectations of modern language technologies, concerns over data privacy often restrict the use of commercially available LLMs hosted outside of EU jurisdictions. This limits their application in governmental, defense, and other data-sensitive sectors. In this ... |
| kaukonen-etal-2025-aunt | 2025 | How Aunt-Like Are You? Exploring Gender Bias in the Genderless Estonian Language: A Case Study | This paper examines gender bias in Estonian, a grammatically genderless Finno-Ugric language, which has neither a gendered noun system nor gendered pronouns, but expresses gender through vocabulary. In this work, we focus on the male-female compound words ending with -tädi 'aunt' and -onu ... |
| kiissel-etal-2025-estonian | 2025 | Estonian isolated-word text-to-speech synthesiser | This paper presents the development and evaluation of an Estonian isolated-word text-to-speech (TTS) synthesiser. Unlike conventional TTS systems that convert continuous text into speech, this system focuses on the synthesis of isolated words, which is crucial for applications such as pronunciation training, speech the... |
| kukk-etal-2025-biaswe | 2025 | BiaSWE: An Expert Annotated Dataset for Misogyny Detection in Swedish | In this study, we introduce the process for creating BiaSWE, an expert-annotated dataset tailored for misogyny detection in the Swedish language. To address the cultural and linguistic specificity of misogyny in Swedish, we collaborated with experts from the social sciences and humanities. Our interdisciplinary team de... |
| kunilovskaya-etal-2025-predictability | 2025 | Predictability of Microsyntactic Units across Slavic Languages: A translation-based Study | The paper presents the results of a free translation experiment, which was set up to explore Slavic cross-language intelligibility. In the experiment, native speakers of Russian were asked to read a sentence in one of the five Slavic languages and return a Russian translation of a highlighted item. The experiment is fo... |
| kunz-2025-train | 2025 | Train More Parameters But Mind Their Placement: Insights into Language Adaptation with PEFT | Smaller LLMs still face significant challenges even in medium-resourced languages, particularly when it comes to language-specific knowledge – a problem not easily resolved with machine-translated data. In this case study on Icelandic, we aim to enhance the generation performance of an LLM by specialising it using u... |
| kurfali-etal-2025-swesat | 2025 | SweSAT-1.0: The Swedish University Entrance Exam as a Benchmark for Large Language Models | This paper introduces SweSAT-1.0, a new benchmark dataset created from the Swedish university entrance exam (Högskoleprovet) to assess large language models in Swedish. The current version of the benchmark includes 867 questions across six different tasks, including reading comprehension, mathematical problem solving, an... |
| kuulmets-etal-2025-well | 2025 | How Well do LLMs know Finno-Ugric Languages? A Systematic Assessment | We present a systematic evaluation of multilingual capabilities of open large language models (LLMs), specifically focusing on five Finno-Ugric (FiU) languages. Our investigation covers multiple prompting strategies across several benchmarks and reveals that Llama-2 7B and Llama-2 13B perform weakly on most FiU languag... |
| lag-etal-2025-mapping | 2025 | Mapping Faroese in the Multilingual Representation Space: Insights for ASR Model Optimization | ASR development for low-resource languages like Faroese faces significant challenges due to the scarcity of large, diverse datasets. While fine-tuning multilingual models using related languages is a common practice, there is no standardized method for selecting these auxiliary languages, leading to a computationally e... |
| lokmane-etal-2025-towards | 2025 | Towards a Derivational Semantics Resource for Latvian | In this paper we describe the implementation of the first structured resource of semantic derivational links for Latvian, basing it on the largest online dictionary Tēzaurs.lv and linking it to the Latvian WordNet. We separate two kinds of derivational links: semantic derivation links between senses and morpholog... |
| luukkonen-etal-2025-poro | 2025 | Poro 34B and the Blessing of Multilinguality | The pretraining of state-of-the-art large language models now requires trillions of words of text, which is orders of magnitude more than available for the vast majority of languages. While including text in more than one language is an obvious way to acquire more pretraining data, multilinguality is often seen as a cu... |
| magnifico-barbu-2025-summarization | 2025 | Can summarization approximate simplification? A gold standard comparison | This study explores the overlap between text summarization and simplification outputs. While summarization evaluation methods are streamlined, simplification lacks cohesion, prompting the question: how closely can abstractive summarization resemble gold-standard simplification? We address this by applying two BART-base... |
| mannisto-etal-2025-comparative | 2025 | A Comparative Study of PEFT Methods for Python Code Generation | Fine-tuning language models incurs high costs in training, inference and storage. Parameter-efficient fine-tuning (PEFT) methods have emerged as a more cost-effective alternative to full fine-tuning. However, limited work has compared different PEFT approaches for tasks like code generation. In this study, we examine t... |
| mikhailov-etal-2025-collection | 2025 | A Collection of Question Answering Datasets for Norwegian | This paper introduces a new suite of question answering datasets for Norwegian: NorOpenBookQA, NorCommonSenseQA, NorTruthfulQA, and NRK-Quiz-QA. The data covers a wide range of skills and knowledge domains, including world knowledge, commonsense reasoning, truthfulness, and knowledge about Norway. Covering both of the ... |
| nieminen-etal-2025-incorporating | 2025 | Incorporating Target Fuzzy Matches into Neural Fuzzy Repair | Neural fuzzy repair (NFR) is a simple implementation of retrieval-augmented translation (RAT), based on data augmentation. In NFR, a translation database is searched for translation examples where the source sentence is similar to the sentence being translated, and the target side of the example is concatenated with th... |
| nivre-2025-constructions | 2025 | Constructions and Strategies in Universal Dependencies | Is the framework of Universal Dependencies (UD) compatible with findings from linguistic typology? One way to find out is to investigate whether UD can adequately represent constructions of the world's languages, as described in William Croft's recent book Morphosyntax. This paper discusses how such an investigation co... |
| nuutinen-etal-2025-finnish | 2025 | Finnish SQuAD: A Simple Approach to Machine Translation of Span Annotations | We apply a simple method to machine translate datasets with span-level annotation using the DeepL MT service and its ability to translate formatted documents. Using this method, we produce a Finnish version of the SQuAD2.0 question answering dataset and train QA retriever models on this new dataset. We evaluate the qua... |
| oji-kunz-2025-tune | 2025 | How to Tune a Multilingual Encoder Model for Germanic Languages: A Study of PEFT, Full Fine-Tuning, and Language Adapters | This paper investigates the optimal use of the multilingual encoder model mDeBERTa for tasks in three Germanic languages – German, Swedish, and Icelandic – representing varying levels of presence and likely data quality in mDeBERTa's pre-training data. We compare full fine-tuning with the parameter-efficient fine-... |
| parsons-etal-2025-match | 2025 | Match 'em: Multi-Tiered Alignment for Error Analysis in ASR | We introduce "Match 'em": a new framework for aligning output from automatic speech recognition (ASR) with reference transcriptions. This allows a more detailed analysis of errors produced by end-to-end ASR systems compared to word error rate (WER). Match ... |
| parsons-etal-2025-adding | 2025 | Adding Metadata to Existing Parliamentary Speech Corpus | Parliamentary proceedings are convenient data sources for creating corpora for speech technology. Given their public nature, there is an abundance of extra information about the speakers that can be legally and ethically harvested to enrich this kind of corpora. This paper describes the methods we have used to add speake... |
| pashchenko-etal-2025-paragraph | 2025 | Paragraph-Level Machine Translation for Low-Resource Finno-Ugric Languages | We develop paragraph-level machine translation for four low-resource Finno-Ugric languages: Proper Karelian, Livvi, Ludian, and Veps. The approach is based on sentence-level pre-trained translation models, which are fine-tuned with paragraph-parallel data. This allows the resulting model to develop a native ability to ... |
| pedersen-etal-2025-evaluating | 2025 | Evaluating LLM-Generated Explanations of Metaphors – A Culture-Sensitive Study of Danish | In this study, we examine how well Danish culture-specific metaphors are explained by two of the best performing language models for Danish, namely ChatGPT and Llama. For comparison, the explanations are measured against how well cross-lingual (or 'universal') metaphors are explained by the models; referring here ... |
| ploeger-etal-2025-tokenization | 2025 | Tokenization on Trial: The Case of Kalaallisut–Danish Legal Machine Translation | The strengths of subword tokenization have been widely demonstrated when applied to higher-resourced, morphologically simple languages. However, it is not self-evident that these results transfer to lower-resourced, morphologically complex languages. In this work, we investigate the influence of different subword segme... |
| poelman-lhoneux-2025-roles | 2025 | The Roles of English in Evaluating Multilingual Language Models | Multilingual natural language processing is getting increased attention, with numerous models, benchmarks, and methods being released for many languages. English is often used in multilingual evaluation to prompt language models (LMs), mainly to overcome the lack of instruction tuning data in other languages. In this p... |
| politov-etal-2025-revisiting | 2025 | Revisiting Projection-based Data Transfer for Cross-Lingual Named Entity Recognition in Low-Resource Languages | Cross-lingual Named Entity Recognition (NER) leverages knowledge transfer between languages to identify and classify named entities, making it particularly useful for low-resource languages. We show that the data-based cross-lingual transfer method is an effective technique for cross-lingual NER and can outperform mult... |
| reguera-gomez-etal-2025-empathy | 2025 | Empathy vs Neutrality: Designing and Evaluating a Natural Chatbot for the Healthcare Domain | As lifestyle-related diseases rise due to unhealthy habits such as smoking, poor diet, lack of exercise, and alcohol consumption, the role of Conversational AI in healthcare is increasingly significant. This paper provides an empirical study on the design and evaluation of a natural and intuitive healthcare chatbot, sp... |
| richter-etal-2025-assessed | 2025 | Assessed and Annotated Vowel Lengths in Spoken Icelandic Sentences for L1 and L2 Speakers: A Resource for Pronunciation Training | We introduce a dataset of time-aligned phonetic transcriptions focusing on vowel length (quantity) in Icelandic. Ultimately, this aims to support computer assisted pronunciation training (CAPT) software, to automatically assess length and possible errors in Icelandic learners' pronunciations. The dataset contains a ran... |
| riess-jorgensen-2025-brage | 2025 | The BRAGE Benchmark: Evaluating Zero-shot Learning Capabilities of Large Language Models for Norwegian Customer Service Dialogues | This study explores the capabilities of open-weight Large Language Models in a zero-shot learning setting, testing their ability to classify the content of customer service dialogues in Norwegian from a single instruction, named the BRAGE benchmark. By comparing results against widely used downstream tasks such as ques... |
| ronningstad-etal-2025-mixed | 2025 | Mixed Feelings: Cross-Domain Sentiment Classification of Patient Feedback | Sentiment analysis of patient feedback from the public health domain can aid decision makers in evaluating the provided services. The current paper focuses on free-text comments in patient surveys about general practitioners and psychiatric healthcare, annotated with four sentence-level polarity classes – positive, neg... |
| rosa-etal-2025-impact | 2025 | The Impact of Copyrighted Material on Large Language Models: A Norwegian Perspective | The use of copyrighted materials in training language models raises critical legal and ethical questions. This paper presents a framework for and the results of empirically assessing the impact of publisher-controlled copyrighted corpora on the performance of generative large language models (LLMs) for Norwegian. When ... |
| saattrup-nielsen-etal-2025-encoder | 2025 | Encoder vs Decoder: Comparative Analysis of Encoder and Decoder Language Models on Multilingual NLU Tasks | This paper explores the performance of encoder and decoder language models on multilingual Natural Language Understanding (NLU) tasks, with a broad focus on Germanic languages. Building upon the ScandEval benchmark, initially restricted to evaluating encoder models, we extend the evaluation framework to include decoder... |
| samuel-etal-2025-small | 2025 | Small Languages, Big Models: A Study of Continual Training on Languages of Norway | Training large language models requires vast amounts of data, posing a challenge for less widely spoken languages like Norwegian and even more so for truly low-resource languages like Northern Sámi. To address this issue, we present a novel three-stage continual training approach that substantially improves the dow... |
| scalvini-etal-2025-rethinking | 2025 | Rethinking Low-Resource MT: The Surprising Effectiveness of Fine-Tuned Multilingual Models in the LLM Age | This study challenges the current paradigm shift in machine translation, where large language models (LLMs) are gaining prominence over traditional neural machine translation models, with a focus on English-to-Faroese translation. We compare the performance of various models, including fine-tuned multilingual models, L... |
| scalvini-etal-2025-prompt | 2025 | Prompt Engineering Enhances Faroese MT, but Only Humans Can Tell | This study evaluates GPT-4's English-to-Faroese translation capabilities, comparing it with multilingual models on FLORES-200 and Sprotin datasets. We propose a prompt optimization strategy using Semantic Textual Similarity (STS) to improve translation quality. Human evaluation confirms the effectiveness of STS-based f... |
| scherrer-kuparinen-2025-interactive | 2025 | Interactive maps for corpus-based dialectology | Traditional data collection methods in dialectology rely on structured surveys, whose results can be easily presented on printed or digital maps. But in recent years, corpora of transcribed dialect speech have become a precious alternative data source for data-driven linguistic analysis. For example, topic models can b... |
| schuster-etal-2025-profiling | 2025 | Profiling Bias in LLMs: Stereotype Dimensions in Contextual Word Embeddings | Large language models (LLMs) are the foundation of the current successes of artificial intelligence (AI); however, they are unavoidably biased. To effectively communicate the risks and encourage mitigation efforts, these models need adequate and intuitive descriptions of their discriminatory properties, appropriate for ... |
| shastry-etal-2025-entailment | 2025 | Entailment Progressions: A Robust Approach to Evaluating Reasoning Within Larger Discourse | Textual entailment, or the ability to deduce whether a proposed hypothesis is logically supported by a given premise, has historically been applied to the evaluation of language modelling efficiency in tasks like question answering and text summarization. However, we hypothesize that these zero-shot entailment evaluati... |
| souza-etal-2025-generative | 2025 | Generative AI for Technical Writing: Comparing Human and LLM Assessments of Generated Content | Large language models (LLMs) have recently gained significant attention for their capabilities in natural language processing (NLP), particularly generative artificial intelligence (AI). LLMs can also be useful tools for software documentation technical writers. We present an assessment of technical documentation conte... |
| steingrimsson-etal-2025-mc | 2025 | MC-19: A Corpus of 19th Century Icelandic Texts | We present MC-19, a new Icelandic historical corpus containing texts from the period 1800–1920. We describe approaches for enhancing a corpus of historical texts, by preparing the texts so that they can be processed using state-of-the-art NLP tools. We train encoder-decoder models to reduce the number of OCR errors whi... |
| stenlund-etal-2025-surface | 2025 | Surface-Level Morphological Segmentation of Low-resource Inuktitut Using Pre-trained Large Language Models | Segmenting languages based on morpheme boundaries instead of relying on language-independent segmenting algorithms like Byte-Pair Encoding (BPE) has been shown to benefit downstream Natural Language Processing (NLP) task performance. This can however be tricky for polysynthetic languages like Inuktitut due to a high morphem... |
| szawerna-etal-2025-devils | 2025 | The Devil's in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling | In this paper, we experiment with the effect of different levels of detailedness or granularity – understood as i) the number of classes, and ii) the classes' semantic depth in the sense of hypernym and hyponym relations – of the annotation of Personally Identifiable Information (PII) on automatic detection and l... |
| tannander-edlund-2025-braxen | 2025 | Braxen 1.0 | With this paper, we release a Swedish pronunciation lexicon resource, Braxen 1.0, which is the result of almost 20 years of development carried out at the Swedish Agency for Accessible Media (MTM). The lexicon originated with a basic word list, but has continuously been expanded with new entries, mainly acquired from unive... |
| terenziani-2025-temporal | 2025 | Temporal Relation Classification: An XAI Perspective | Temporal annotations are used to identify and mark up temporal information, offering insight into how it is expressed through linguistic properties in text. This study investigates various discriminative pre-trained language models of differing sizes on a temporal relation classification task. We define valid reason... |
| touileb-etal-2025-benchmarking | 2025 | Benchmarking Abstractive Summarisation: A Dataset of Human-authored Summaries of Norwegian News Articles | We introduce a dataset of high-quality human-authored summaries of news articles in Norwegian. The dataset is intended for benchmarking of the abstractive summarisation capabilities of generative language models. Each document in the dataset is provided with three different candidate gold-standard summaries written by ... |
| vaaben-bornerup-hardmeier-2025-efficient | 2025 | Efficient Elicitation of Fictitious Nursing Notes from Volunteer Healthcare Professionals | Reliable automatic solutions to extract structured information from free-text nursing notes could bring important efficiency gains in healthcare, but their development is hampered by the sensitivity and limited availability of example data. We describe a method for eliciting fictitious nursing documentation and associa... |
| vahtola-etal-2025-analyzing | 2025 | Analyzing the Effect of Linguistic Instructions on Paraphrase Generation | Recent work has demonstrated that large language models can often generate fluent and linguistically correct text, adhering to given instructions. However, to what extent can they execute complex instructions requiring knowledge of fundamental linguistic concepts and elaborate semantic reasoning? Our study connects an ... |
| vakili-etal-2025-sweclineval | 2025 | SweClinEval: A Benchmark for Swedish Clinical Natural Language Processing | The lack of benchmarks in certain domains and for certain languages makes it difficult to track progress regarding the state-of-the-art of NLP in those areas, potentially impeding progress in important, specialized domains. Here, we introduce the first Swedish benchmark for clinical NLP: SweClinEval. The first ... |
| vakirtzian-etal-2025-dialectal | 2025 | Dialectal treebanks and their relation with the standard variety: The case of East Cretan and Standard Modern Greek | We report on the development of the first treebank and parser for Eastern Cretan in the framework of Universal Dependencies (UD). Eastern Cretan is a living but under-resourced dialect of Modern Greek. We have worked on the transcription of oral material and relied on active annotation and knowledge transfer from GUD, ... |
| vejlgaard-holm-etal-2025-danoliteracy | 2025 | Danoliteracy of Generative Large Language Models | The language technology moonshot moment of Generative Large Language Models (GLLMs) was not limited to English: These models brought a surge of technological applications, investments, and hype to low-resource languages as well. However, the capabilities of these models in languages such as Danish were, until recently,... |
| you-etal-2025-noreventgen | 2025 | NorEventGen: generative event extraction from Norwegian news | In this work, we approach event extraction from Norwegian news text using a generation-based approach which formulates the task as text-to-structure generation. We present experiments assessing the effect of different modeling configurations and provide an analysis of the model predictions and typical system errors. Fi... |
| zhang-etal-2025-snakmodel | 2025 | SnakModel: Lessons Learned from Training an Open Danish Large Language Model | We present SnakModel, a Danish large language model (LLM) based on Llama2-7B, which we continuously pre-train on 13.6B Danish words, and further tune on 3.7M Danish instructions. As best practices for creating LLMs for smaller language communities have yet to be established, we examine the effects of early modeling and... |
| zosa-etal-2025-got | 2025 | Got Compute, but No Data: Lessons From Post-training a Finnish LLM | As LLMs gain more popularity as chatbots and general assistants, methods have been developed to enable LLMs to follow instructions and align with human preferences. These methods have found success in the field, but their effectiveness has not been demonstrated outside of high-resource languages. In this work, we discu... |
| glazkova-zakharova-2025-data | 2025 | From Data to Grassroots Initiatives: Leveraging Transformer-Based Models for Detecting Green Practices in Social Media | Green practices are everyday activities that support a sustainable relationship between people and the environment. Detecting these practices in social media helps track their prevalence and develop recommendations to promote eco-friendly actions. This study compares machine learning methods for identifying mentions of... |
| peura-etal-2025-perspectives | 2025 | Perspectives on Forests and Forestry in Finnish Online Discussions - A Topic Modeling Approach to Suomi24 | This paper explores how forests and forest industry are perceived on the largest online discussion forum in Finland, Suomi24 ('Finland24'). Using 30,636 posts published in 2014–2020, we investigate what kind of topics and perspectives towards forest management can be found. We use BERTopic as our topi... |
| dsouza-etal-2025-mining | 2025 | Mining for Species, Locations, Habitats, and Ecosystems from Scientific Papers in Invasion Biology: A Large-Scale Exploratory Study with Large Language Models | This study explores the use of large language models (LLMs), specifically GPT-4o, to extract key ecological entities – species, locations, habitats, and ecosystems – from invasion biology literature. This information is critical for understanding species spread, predicting future invasions, and informing conservati... |
| volkanovska-2025-large | 2025 | Large Language Models as Annotators of Named Entities in Climate Change and Biodiversity: A Preliminary Study | This paper examines whether few-shot techniques for Named Entity Recognition (NER) utilising existing large language models (LLMs) as their backbone can be used to reliably annotate named entities (NEs) in scientific texts on climate change and biodiversity. A series of experiments aim to assess whether LLMs can be int... |
| bosco-etal-2025-communicating | 2025 | Communicating urgency to prevent environmental damage: insights from a linguistic analysis of the WWF24 multilingual corpus | Contemporary environmental discourse focuses on effectively communicating ecological vulnerability to raise public awareness and encourage positive actions. Hence there is a need for studies to support accurate and adequate discourse production, both by humans and computers. Two main challenges need to be tackled. On t... |
| beckles-heidke-2025-thematic | 2025 | Thematic Categorization on Pineapple Production in Costa Rica: An Exploratory Analysis through Topic Modeling | Costa Rica is one of the largest producers and exporters of pineapple in the world. This status has encouraged multinational companies to use plantations in this Central American country for experimentation and the cultivation of new varieties, such as the Pinkglow pineapple. However, pineapple monoculture has signific... |
| castle-moreno-schneider-2025-entity | 2025 | Entity Linking using LLMs for Automated Product Carbon Footprint Estimation | Growing concerns about climate change and sustainability are driving manufacturers to take significant steps toward reducing their carbon footprints. For these manufacturers, a first step towards this goal is to identify the environmental impact of the individual components of their products. We propose a system levera... |
haider-etal-2025-quantification | 2,025 | Quantification of Biodiversity from Historical Survey Text with LLM-based Best-Worst-Scaling | In this study, we evaluate methods to determine the frequency of species via quantity estimation from historical survey text. To that end, we formulate classification tasks and finally show that this problem can be adequately framed as a regression task using Best-Worst Scaling (BWS) with Large Language Models (LLMs). ... |
barz-etal-2025-analyzing | 2,025 | Analyzing the Online Communication of Environmental Movement Organizations: NLP Approaches to Topics, Sentiment, and Emotions | This project employs state-of-the-art Natural Language Processing (NLP) techniques to analyze the online communication of international Environmental Movement Organizations (EMOs). First, we introduce our overall EMO dataset and describe it through topic modeling. Second, we evaluate current sentiment and emotion class... |
longo-longo-2025-ai | 2,025 | No AI on a Dead Planet: Sentiment and Emotion Analysis Across Reddit Communities on AI and the Environment | This paper investigates how different online communities perceive and discuss the environmental impact of AI through sentiment analysis and emotion detection. We analyze Reddit discussion from r/artificial and r/climatechange, using pre-trained models fine-tuned on social media data. Our analysis reveals distinct patte... |
grasso-etal-2025-towards | 2,025 | Towards Addressing Anthropocentric Bias in Large Language Models | The widespread use of Large Language Models (LLMs), particularly among non-expert users, has raised ethical concerns about the propagation of harmful biases. While much research has addressed social biases, few works, if any, have examined anthropocentric bias in Natural Language Processing (NLP) technology. Anthropoce... |
brinner-zarriess-2025-efficient | 2,025 | Efficient Scientific Full Text Classification: The Case of EICAT Impact Assessments | This study explores strategies for efficiently classifying scientific full texts using both small, BERT-based models and local large language models like Llama-3.1 8B. We focus on developing methods for selecting subsets of input sentences to reduce input size while simultaneously enhancing classification performance. ... |
heiman-2025-accuracy | 2,025 | The Accuracy, Robustness, and Readability of LLM-Generated Sustainability-Related Word Definitions | A common language with shared standard definitions is essential for effective climate conversations. However, there is concern that LLMs may misrepresent and/or diversify climate-related terms. We compare 305 official IPCC glossary definitions with those generated by OpenAI's GPT-4o-mini and investigate their adherence...
fang-etal-2025-comparative | 2,025 | A Comparative Analysis of Word Segmentation, Part-of-Speech Tagging, and Named Entity Recognition for Historical Chinese Sources, 1900-1950 | This paper compares large language models (LLMs) and traditional natural language processing (NLP) tools for performing word segmentation, part-of-speech (POS) tagging, and named entity recognition (NER) on Chinese texts from 1900 to 1950. Historical Chinese documents pose challenges for text analysis due to their logo... |
henriksson-etal-2025-analyzing | 2,025 | Analyzing register variation in web texts through automatic segmentation | This study introduces a novel method for analyzing register variation in web texts through classification-based register segmentation. While traditional text-linguistic register analysis treats web documents as single units, we present a recursive binary segmentation approach that automatically identifies register shif... |
dinu-etal-2025-analyzing | 2,025 | Analyzing Large Language Models' pastiche ability: a case study on a 20th century Romanian author | This study evaluated the ability of several Large Language Models (LLMs) to pastiche the literary style of the Romanian 20th century author Mateiu Caragiale, by continuing one of his novels left unfinished upon his death. We assembled a database of novels consisting of six texts by Mateiu Caragiale, including his unfin... |
miyagawa-2025-rag | 2,025 | RAG-Enhanced Neural Machine Translation of Ancient Egyptian Text: A Case Study of THOTH AI | This paper demonstrates how Retrieval-Augmented Generation (RAG) significantly improves translation accuracy for Middle Egyptian, a historically rich but low-resource language. We integrate a vectorized Coptic-Egyptian lexicon and morphological database into a specialized tool called THOTH AI. By supplying domain-speci... |
rueter-partanen-2025-restructuring | 2,025 | Restructuring and visualising dialect dictionary data: Report on Erzya and Moksha materials | There are a number of Uralic dialect dictionaries based on fieldwork documentation of individual minority languages from the Pre-Soviet Era. The first of these published by the Finno-Ugrian Society features the Mordvin languages, Erzya and Moksha. In this article, we describe the possibility of reusing XML dialect dicti...
balci-etal-2025-podcast | 2,025 | Podcast Outcasts: Understanding Rumble's Podcast Dynamics | The rising popularity of podcasts as an emerging medium opens new avenues for digital humanities research, particularly when examining video-based media on alternative platforms. We present a novel data analysis pipeline for analyzing over 13K podcast videos (526 days of video content) from Rumble and YouTube that inte...
jacobsen-kristensen-mclachlan-2025-read | 2,025 | I only read it for the plot! Maturity Ratings Affect Fanfiction Style and Community Engagement | We consider the textual profiles of different fanfiction maturity ratings, how they vary across fan groups, and how this relates to reader engagement metrics. Previous studies have shown that fanfiction writing is motivated by a combination of admiration for and frustration with the fan object. These findings emerge wh... |
retkowski-etal-2025-ai | 2,025 | The AI Co-Ethnographer: How Far Can Automation Take Qualitative Research? | Qualitative research often involves labor-intensive processes that are difficult to scale while preserving analytical depth. This paper introduces The AI Co-Ethnographer (AICoE), a novel end-to-end pipeline developed for qualitative research and designed to move beyond the limitations of simply automating code assignme... |
shmidman-etal-2025-irony | 2,025 | Irony Detection in Hebrew Documents: A Novel Dataset and an Evaluation of Neural Classification Methods | This paper focuses on the use of single words in quotation marks in Hebrew, which may or may not be an indication of irony. Because no annotated dataset yet exists for such cases, we annotate a new dataset consisting of over 4000 cases of words within quotation marks from Hebrew newspapers. On the basis of this dataset... |
alperin-etal-2025-masks | 2,025 | Masks and Mimicry: Strategic Obfuscation and Impersonation Attacks on Authorship Verification | The increasing use of Artificial Intelligence (AI) technologies, such as Large Language Models (LLMs), has led to nontrivial improvements in various tasks, including accurate authorship identification of documents. However, while LLMs improve such defense techniques, they also simultaneously provide a vehicle for malicious act...
stepankova-rosa-2025-song | 2,025 | Song Lyrics Adaptations: Computational Interpretation of the Pentathlon Principle | Songs are an integral part of human culture, and they often resonate the most when we can sing them in our native language. However, translating song lyrics presents a unique challenge: maintaining singability, naturalness, and semantic fidelity. In this work, we computationally interpret Low's Pentathlon Principle of ...
nehrdich-etal-2025-mitra | 2,025 | MITRA-zh-eval: Using a Buddhist Chinese Language Evaluation Dataset to Assess Machine Translation and Evaluation Metrics | With the advent of large language models, machine translation (MT) has become a widely used, but little understood, tool for accessing historical and multilingual texts. While models like GPT, Claude, and Deepseek increasingly enable translation of low-resource and ancient languages, critical questions remain about the... |
bizzoni-etal-2025-effects | 2,025 | Effects of Publicity and Complexity in Reader Polarization | We investigate how Goodreads rating distributions reflect variations in audience reception across literary works. By examining a large-scale dataset of novels, we analyze whether metrics such as the entropy or standard deviation of rating distributions correlate with textual features – including perplexity, nominal ...
bhandarkar-etal-2025-psytex | 2,025 | PsyTEx: A Knowledge-Guided Approach to Refining Text for Psychological Analysis | LLMs are increasingly applied for tasks requiring deep interpretive abilities and psychological insights, such as identity profiling, mental health diagnostics, personalized content curation, and human resource management. However, their performance in these tasks remains inconsistent, as these characteristics are not ... |
arnold-etal-2025-advances | 2,025 | Advances and Challenges in the Automatic Identification of Indirect Quotations in Scholarly Texts and Literary Works | Literary scholars commonly refer to the interpreted literary work using various types of quotations. Two main categories are direct and indirect quotations. In this work we focus on the automatic identification of two subtypes of indirect quotations: paraphrases and summaries. Our contributions are twofold. First, we p... |
li-etal-2025-assessing | 2,025 | Assessing Crowdsourced Annotations with LLMs: Linguistic Certainty as a Proxy for Trustworthiness | Human-annotated data is fundamental for training machine learning models, yet crowdsourced annotations often contain noise and bias. In this paper, we investigate the feasibility of employing large language models (LLMs), specifically GPT-4, as evaluators of crowdsourced annotations using a zero-shot prompting strategy... |
ingason-mechler-2025-evolution | 2,025 | The evolution of relative clauses in the IcePaHC treebank | We examine how the elements that introduce relative clauses, namely relative complementizers and relative pronouns, evolve over the history of Icelandic using the phrase structure analysis of the IcePaHC treebank. The rate of these elements changes over time and, in the case of relative pronouns, is subject to effects ... |
hamalainen-2025-psychology | 2,025 | On Psychology of AI -- Does Primacy Effect Affect ChatGPT and Other LLMs? | We study the primacy effect in three commercial LLMs: ChatGPT, Gemini and Claude. We do this by repurposing the famous experiment Asch (1946) conducted using human subjects. The experiment is simple, given two candidates with equal descriptions which one is preferred if one description has positive adjectives first bef... |
toro-isaza-kopp-2025-literary | 2,025 | The Literary Canons of Large-Language Models: An Exploration of the Frequency of Novel and Author Generations Across Gender, Race and Ethnicity, and Nationality | Large language models (LLMs) are an emerging site for computational literary and cultural analysis. While such research has focused on applying LLMs to the analysis of literary text passages, the probabilistic mechanism used by these models for text generation lends them to also understanding literary and cultural tren... |
rehbein-etal-2025-moral | 2,025 | Moral reckoning: How reliable are dictionary-based methods for examining morality in text? | Due to their availability and ease of use, dictionary-based measures of moral values are a popular tool for text-based analyses of morality that examine human attitudes and behaviour across populations and cultures. In this paper, we revisit the construct validity of different dictionary-based measures of morality in t... |
backer-hyman-2025-bootstrapping | 2,025 | Bootstrapping AI: Interdisciplinary Approaches to Assessing OCR Quality in English-Language Historical Documents | New LLM-based OCR and post-OCR correction methods promise to transform computational historical research, yet their efficacy remains contested. We compare multiple correction approaches, including methods for "bootstrapping" fine-tuning with LLM-generated data, and measure their eff...
chatzikyriakidis-natsina-2025-poetry | 2,025 | Poetry in RAGs: Modern Greek interwar poetry generation using RAG and contrastive training | In this paper, we discuss Modern Greek poetry generation in the style of lesser known Greek poets of the interwar period. The paper proposes the use of Retrieval-Augmented Generation (RAG) to automatically generate poetry using Large Language Models (LLMs). A corpus of Greek interwar poetry is used and prompts exemplif... |
teng-ohman-2025-using | 2,025 | Using Multimodal Models for Informative Classification of Ambiguous Tweets in Crisis Response | Social media platforms like X provide real-time information during crises but often include noisy, ambiguous data, complicating analysis. This study examines the effectiveness of multimodal models, particularly a cross-attention-based approach, in classifying tweets about the California wildfires as "...
messner-lippincott-2025-transferring | 2,025 | Transferring Extreme Subword Style Using Ngram Model-Based Logit Scaling | We present an ngram model-based logit scaling technique that effectively transfers extreme subword stylistic variation to large language models at inference time. We demonstrate its efficacy by tracking the perplexity of generated text with respect to the ngram interpolated and original versions of an evaluation model.... |
piper-wu-2025-evaluating | 2,025 | Evaluating Large Language Models for Narrative Topic Labeling | This paper evaluates the effectiveness of large language models (LLMs) for labeling topics in narrative texts, comparing performance across fiction and news genres. Building on prior studies in factual documents, we extend the evaluation to narrative contexts where story content is central. Using a ranked voting system... |
mohamed-eida-habash-2025-beyond | 2,025 | Beyond Cairo: Sa`idi Egyptian Arabic Corpus Construction and Analysis | Egyptian Arabic (EA) NLP resources have mainly focused on Cairene Egyptian Arabic (CEA), leaving sub-dialects like Sa`idi Egyptian Arabic (SEA) underrepresented. This paper introduces the first SEA corpus – an open-source, 4-million-word literary dataset of a dialect spoken by ~30 million Egyptians. ...