Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 2 new columns ({'id', 'source'})

This happened while the json dataset builder was generating data using

/tmp/hf-datasets-cache/medium/datasets/62377710116975-config-parquet-and-info-ziqinghuang-uncheatable-7641f8f9/hub/datasets--ziqinghuang--uncheatable/snapshots/18a066490333e0732f9f843d64451c0ade84b7be/arxiv_computer_science_20250901to20250914.jsonl.gz

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
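On the data side, the mismatch can be removed before upload by projecting every record onto the shared column set (the alternative of declaring separate configurations is covered by the linked docs). A minimal sketch; the file contents below are invented for illustration:

```python
import json

def normalize_records(records, keep=("text",)):
    """Project every record onto a fixed column set so all data files
    expose identical columns to the json dataset builder."""
    return [{k: r.get(k) for k in keep} for r in records]

# Hypothetical contents of two files in such a dataset: the second file
# carries extra 'id' and 'source' columns that trigger the cast error.
file_a = [json.loads('{"text": "abstract one"}')]
file_b = [json.loads('{"id": "2509.00001", "text": "abstract two", "source": "arxiv"}')]

rows = normalize_records(file_a) + normalize_records(file_b)
```

Because the builder infers the schema `{'text': string}` from the first file it reads, later files that add `id` and `source` fail the cast; normalizing (or, conversely, widening every file to the superset of columns) makes the schema consistent.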
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 644, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              id: string
              text: string
              source: string
              to
              {'text': Value('string')}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1456, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1055, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 894, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 970, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1833, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 2 new columns ({'id', 'source'})
              
              This happened while the json dataset builder was generating data using
              
              /tmp/hf-datasets-cache/medium/datasets/62377710116975-config-parquet-and-info-ziqinghuang-uncheatable-7641f8f9/hub/datasets--ziqinghuang--uncheatable/snapshots/18a066490333e0732f9f843d64451c0ade84b7be/arxiv_computer_science_20250901to20250914.jsonl.gz
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)


text (string)
Similarity-based Outlier Detection for Noisy Object Re-Identification Using Beta Mixtures § Abstract Object re-identification (Re-ID) methods are highly sensitive to label noise, which typically leads to significant performance degradation. We address this challenge by reframing Re-ID as a supervised image similarity task and adopting a Siamese network architecture trained to capture discriminative pairwise relationships. Central to our approach is a novel statistical outlier detection (OD) framework, termed Beta-SOD (Beta mixture Similarity-based Outlier Detection), which models the distribution of cosine similarities between embedding pairs using a two-component Beta distribution mixture model. We establish a novel identifiability result for mixtures of two Beta distributions, ensuring that our learning task is well-posed. The proposed OD step complements the Re-ID architecture, which combines binary cross-entropy, contrastive, and cosine embedding losses to jointly optimize feature-level similarity learning. We demonstrate the effectiveness of Beta-SOD in de-noising and Re-ID tasks for person Re-ID, on the CUHK03 and Market-1501 datasets, and for vehicle Re-ID, on the VeRi-776 dataset. Our method shows superior performance compared to state-of-the-art methods across various noise levels (10-30%), demonstrating both robustness and broad applicability in noisy Re-ID scenarios. The implementation of Beta-SOD is available at: <https://github.com/waqar3411/Beta-SOD>. Evan Murphy (ORCID 0009-0007-7121-2982) and Vladimir A. Krylov (ORCID 0000-0002-9734-5974), Senior Member, IEEE. The authors are with the School of Mathematical Sciences, Dublin City University, Dublin, Ireland. Index terms: Object Re-Identification, image similarity, statistical filtering, Beta distribution mixtures, identifiability. § INTRODUCTION Object re-identification (Re-ID) is a fundamental task in computer vision that seeks to match instances of the same object across different cameras or viewpoints.
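The two-component Beta mixture at the heart of Beta-SOD can be illustrated with a small EM fit. This is an illustrative sketch, not the authors' estimator: it uses a moment-matching M-step rather than exact maximum likelihood, and the synthetic "cosine similarities" are invented for the example.

```python
import math
import random

def beta_pdf(x, a, b):
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_norm)

def fit_beta_mixture(xs, iters=200):
    """EM for a two-component Beta mixture with a moment-matching M-step."""
    params = [(2.0, 5.0), (5.0, 2.0)]   # (a, b): one low-mean, one high-mean start
    weights = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in xs:
            p = [w * beta_pdf(x, a, b) for w, (a, b) in zip(weights, params)]
            s = sum(p)
            resp.append([pi / s for pi in p])
        # M-step: weighted mean/variance -> Beta parameters by moment matching
        params, weights = [], []
        for k in range(2):
            rk = [r[k] for r in resp]
            n = sum(rk)
            m = sum(r * x for r, x in zip(rk, xs)) / n
            v = sum(r * (x - m) ** 2 for r, x in zip(rk, xs)) / n
            common = m * (1 - m) / max(v, 1e-9) - 1
            params.append((max(m * common, 1e-3), max((1 - m) * common, 1e-3)))
            weights.append(n / len(xs))
    return params, weights

random.seed(0)
# synthetic "cosine similarities": clean pairs skew high, mislabeled pairs skew low
sims = [random.betavariate(8, 2) for _ in range(300)] + \
       [random.betavariate(2, 8) for _ in range(100)]
params, weights = fit_beta_mixture(sims)
```

Once fitted, outlier detection amounts to flagging pairs whose responsibility under the low-similarity component exceeds a threshold; the paper's identifiability result is what guarantees the two components are recoverable at all.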
Person Re-ID has been extensively studied, tackling pose variation, occlusion, illumination, and background clutter. Equally, vehicle Re-ID has gained prominence in intelligent transportation systems by confronting inter-class similarity, intra-class variability, and occlusion challenges. Both subfields fundamentally aim to learn discriminative features that generalize across varied conditions. A further complication arises in practical systems from noisy labels: in object Re-ID, even small annotation errors due to mislabelled identities or detector misalignment can significantly distort learned representation boundaries. Indeed, recent work demonstrates that learning robust person Re-ID models under label noise poses a significant challenge, especially when each identity has limited samples, and naïve training can result in large degradation in accuracy and generalisability. To address the negative impact of label noise on deep Re-ID models, a variety of strategies have been developed in recent years. One line of work focuses on robust loss functions, since conventional objectives such as cross-entropy (CE) or mean squared error (MSE) are highly sensitive to mislabeled samples. A second line of research assigns adaptive importance weights to training examples, down-weighting the contribution of noisy labels while emphasizing reliable ones. Another family of approaches explicitly identifies mislabeled samples and either corrects them or removes them from training, thereby purifying the dataset. Collectively, these techniques aim to mitigate the effects of corrupted annotations, and improve the stability and generalization of Re-ID systems under realistic conditions, with loss-function design playing a particularly central role. Among these strategies, the choice of loss function is especially critical, as it directly determines how the model learns from both clean and noisy samples. 
Standard CE loss, while common, is highly sensitive to label noise because of its strong penalties on misclassifications, often leading Convolutional Neural Networks (CNNs) to overfit corrupted labels. Similarly, contrastive loss (CL), commonly applied in Re-ID tasks to enforce distance-based similarity constraints, is vulnerable to noisy annotations since mislabeled pairs are penalized even when the underlying match is correct. Alternatives such as cosine-embedding loss replace Euclidean distance with angular similarity, offering improved robustness in high-dimensional spaces, yet they too remain affected by label noise due to their dependence on angular separation. To overcome these limitations, several robust loss formulations have been proposed, including generalized cross-entropy, bootstrapped loss, and noise-corrected contrastive objectives, which aim to reduce sensitivity to corrupted labels while preserving discriminative feature learning. Nonetheless, loss-based remedies address the complexities of noisy Re-ID data only partially, motivating complementary approaches that filter or adjust corr
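The losses named above can be written out directly. A minimal sketch with toy vectors (not any paper's implementation; the margins are illustrative defaults):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def bce_loss(p, same):
    # binary cross-entropy on a predicted match probability p in (0, 1)
    return -math.log(p) if same else -math.log(1 - p)

def contrastive_loss(u, v, same, margin=1.0):
    # Euclidean contrastive loss: pull labeled matches together,
    # push labeled non-matches at least `margin` apart
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return d ** 2 if same else max(0.0, margin - d) ** 2

def cosine_embedding_loss(u, v, same, margin=0.0):
    # angular counterpart: 1 - cos(u, v) for matches, hinge on cos otherwise
    s = cosine(u, v)
    return 1.0 - s if same else max(0.0, s - margin)
```

On orthogonal toy embeddings u = [1, 0], v = [0, 1] mislabeled as a match, contrastive_loss(u, v, True) ≈ 2.0 and cosine_embedding_loss(u, v, True) = 1.0: the mislabeled pair is penalized heavily even though the underlying non-match is correct, which is exactly the sensitivity to noisy annotations the text describes.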
Modality-Agnostic Input Channels Enable Segmentation of Brain lesions in Multimodal MRI with Sequences Unavailable During Training § Abstract Segmentation models are important tools for the detection and analysis of lesions in brain MRI. Depending on the type of brain pathology that is imaged, MRI scanners can acquire multiple, different image modalities (contrasts). Most segmentation models for multimodal brain MRI are restricted to fixed modalities and cannot effectively process new ones at inference. Some models generalize to unseen modalities but may lose discriminative modality-specific information. This work aims to develop a model that can perform inference on data that contain image modalities unseen during training, previously seen modalities, and heterogeneous combinations of both, thus allowing a user to utilize any available imaging modalities. We demonstrate this is possible with a simple, thus practical alteration to the U-net architecture, by integrating a modality-agnostic input channel or pathway, alongside modality-specific input channels. To train this modality-agnostic component, we develop an image augmentation scheme that synthesizes artificial MRI modalities. Augmentations differentially alter the appearance of pathological and healthy brain tissue to create artificial contrasts between them while maintaining realistic anatomical integrity. We evaluate the method using 8 MRI databases that include 5 types of pathologies (stroke, tumours, traumatic brain injury, multiple sclerosis and white matter hyperintensities) and 8 modalities (T1, T1+contrast, T2, PD, SWI, DWI, ADC and FLAIR). The results demonstrate that the approach preserves the ability to effectively process MRI modalities encountered during training, while being able to process new, unseen modalities to improve its segmentation. 
Project code: https://github.com/Anthony-P-Addison/AGN-MOD-SEG. Running head: Modality-Agnostic Input Channels, A. P. Addison et al. Anthony P. Addison (ORCID 0009-0008-4938-6123), Felix Wagner (1) (ORCID 0009-0004-6683-171X), Wentian Xu (1) (ORCID 0009-0009-2440-007X), Natalie Voets (2) (ORCID 0000-0001-8078-4471), Konstantinos Kamnitsas (1) (ORCID 0000-0003-3281-6509). (1) Department of Engineering Science, University of Oxford, Oxford, UK; (2) Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK. Contact: anthony.addison@eng.ox.ac.uk. § INTRODUCTION Magnetic resonance imaging (MRI) is an invaluable tool for the diagnosis and analysis of lesions. MRI scanners can be configured with different pulse sequences to acquire varying contrasts between tissues. A contrast setting defines an MRI "modality". Different MRI modalities provide enhanced visualization of various brain tissues and lesions. Thus multimodal MRI uses a set of modalities for comprehensive brain analysis. Depending on the pathology studied and the clinical practice in the specific imaging center, a different set of MRI modalities is acquired for the patient. Neural networks are effective models for the segmentation of brain lesions. These models are typically trained for one type of lesion at a time, using a predefined set of MRI modalities. Therefore, they cannot process a different set of modalities, or a new modality that has not been seen in the training data. Given that different sets of modalities are acquired at different imaging centers, even for the study of the same type of lesion, we need more flexible tools for multimodal brain MRI segmentation. Therefore, the objective of this work is to develop a multimodal model that can segment brain lesions using any MRI modalities available to the user, even types that have not been encountered in the training data. A significant amount of work has been done to develop methods that can handle missing modalities at inference time. However, these methods cannot deal with new modalities at inference time.
Domain adaptation methods adapt a model trained on a source data domain to effectively generalize across a new target data domain. This approach was initially demonstrated using adversarial networks to adapt to a target set of MRI modalities. Fine-tuning has also been applied to adapt to new brain MRI datasets. The drawback of these methods is the requirement to pre-collect data from the target domain to retrain (adapt) model weights, which can be time-consuming, resource-intensive, and often impractical. Instead, here, our aim is to develop models that better generalize to new modalities directly. Domain Generalization (DG) aims to train a model that generalizes to unseen domains without further adaptation, such as through contrastive learning or non-linear intensity augmentations. In principle, such DG methods learn generic, modality-agnostic representations, but may compromise model ability to extract modality-specific, discriminative representations, useful in multimodal MRI processing. Our work alleviates this by integrating modality-agnostic filters, trained via augmentations appropriate for modality generalization, into a model that preserves modality-specific filters.
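One plausible way to picture the input design described above is as channel routing. This is a hypothetical sketch (flattened 1-D "images" and an invented routing rule), not the paper's architecture:

```python
def stack_channels(scans, known_order):
    """Build the network input: each known modality fills its dedicated
    channel (zeros if absent); any modality not in known_order is averaged
    into a single shared modality-agnostic channel."""
    width = len(next(iter(scans.values())))
    channels = [scans.get(m, [0.0] * width) for m in known_order]
    unseen = [img for m, img in scans.items() if m not in known_order]
    if unseen:
        agnostic = [sum(vals) / len(unseen) for vals in zip(*unseen)]
    else:
        agnostic = [0.0] * width
    return channels + [agnostic]

# Inference with a modality (SWI) that was never in the training set:
chs = stack_channels({"T1": [1.0, 2.0], "SWI": [4.0, 6.0]}, ["T1", "FLAIR"])
```

The point of the design is that modality-specific channels keep their discriminative filters for seen modalities, while the agnostic channel (trained on synthesized contrasts) gives unseen modalities a path into the network.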
§ Abstract Generative AI (GenAI) is reshaping work, but adoption remains largely individual and experimental rather than integrated into collaborative routines. Whether GenAI can move from individual use to collaborative work is a critical question for future organizations. Journalism offers a compelling site to examine this shift: individual journalists have already been disrupted by GenAI tools; yet newswork is inherently collaborative, relying on shared routines and coordinated workflows. We conducted 27 interviews with newsroom managers, editors, and front-line journalists in China. We found that journalists frequently used GenAI to support daily tasks, but value alignment was safeguarded mainly through individual discretion. At the organizational level, GenAI use remained disconnected from team workflows, hindered by structural barriers and cultural reluctance to share practices. These findings underscore the gap between individual and collective adoption, pointing to the need to account for organizational structures, cultural norms, and workflow integration when designing GenAI for collaborative work. Can GenAI Move from Individual Use to Collaborative Work?
Experiences, Challenges, and Opportunities of Integrating GenAI into Collaborative Newsroom Routines. CCS concepts: Human-centered computing → Computer supported cooperative work; Human-centered computing → Empirical studies in collaborative and social computing; Human-centered computing → Empirical studies in HCI. § INTRODUCTION Organizations today face a profound question: How should today’s organizations adopt generative artificial intelligence (GenAI), and how could GenAI become a routine part of collaborative work in future organizations? In 2025, Stanford HAI and Google DeepMind invited researchers worldwide to explore future collaborative work within organizations, emphasizing that “AI is transforming how we work, make decisions, and collaborate. But while our tools are evolving fast, our organizational models are overdue for reinvention.” Despite its transformative potential, GenAI has entered most workplaces slowly and cautiously. Early adoption often takes the form of individual experiments and tentative trials, rather than sweeping integration led by organizations. Across industries, organizations are grappling with a familiar tension in technology adoption: the promise of innovation collides with the uncertainty of organizational change, as leaders and practitioners question how GenAI will reshape collaborative routines and align, or misalign, with teamwork norms. In this study, we investigate how GenAI might move beyond fragmented individual use to become an integral part of organizational collaboration, and what challenges and opportunities this transition creates for reconfiguring collaborative work around GenAI in organizations.
In this paper, we use the newsroom as our primary empirical site to examine what collaborative work with GenAI entails in practice. News production is rarely the effort of a single individual: front-line journalists gather information, draft stories, and exchange leads with peers; their work is then reviewed, edited, and sometimes rewritten by editors; and senior managers provide strategic oversight, shape policies, and liaise with external stakeholders. Collaboration in newsrooms therefore spans both inter-role and intra-role interactions, anchored in shared routines and collective values such as editorial responsibility and mutual trust. We distinguish between three key roles around collaborative newswork in our study: front-line journalists, who are primarily responsible for story development and on-the-ground reporting; editors, who oversee content refinement, ensure editorial consistency, and coordinate team workflows; and senior managers, the newsroom leaders responsible for organizational decision-making and long-term strategy. While journalism scholarship often uses the umbrella term journalists to refer broadly to various news practitioners, we adopt this convention as well. In our analysis of how GenAI reshapes collaborative practices and team dynamics, we therefore attend to collaboration within roles as well as across roles. While these roles structure collaboration in conventional newsroom routines, the rise of GenAI introduces new uncertainties about how such collaboration should unfold. Recently, on the one hand, many individual journalists are experimenting with GenAI in tentative, ad hoc ways, often relying on personal judgment to ensure that its use remains consistent with professional norms. On the other hand, major news organizations have started to articulate
§ Abstract Large language models can influence users through conversation, creating new forms of dark patterns that differ from traditional UX dark patterns. We define LLM dark patterns as manipulative or deceptive behaviors enacted in dialogue. Drawing on prior work and AI incident reports, we outline a diverse set of categories with real-world examples. Using them, we conducted a scenario-based study where participants (N=34) compared manipulative and neutral LLM responses. Our results reveal that recognition of LLM dark patterns often hinged on conversational cues such as exaggerated agreement, biased framing, or privacy intrusions, but these behaviors were also sometimes normalized as ordinary assistance. Users’ perceptions of these dark patterns shaped how they responded to them. Responsibility for these behaviors was also attributed in different ways, with participants assigning it to companies and developers, the model itself, or to users. We conclude with implications for design, advocacy, and governance to safeguard user autonomy. The Siren Song of LLMs: How Users Perceive and Respond to Dark Patterns in Large Language Models. CCS concepts: Human-centered computing → Empirical studies in HCI. § USER STUDY SCENARIOS Here we provide the remaining nine scenarios used in our human studies. [Figure: six of the eleven scenarios used in our user study, each illustrating a distinct category of LLM dark pattern: Interaction Padding, Excessive Flattery, Simulated Emotional & Sexual Intimacy, Ideological Steering, Unprompted Intimacy Probing, and Behavioral Profiling via Dialogue.]
[Figure: three of the eleven scenarios used in our user study, each corresponding to a distinct category of LLM dark pattern: Simulated Authority, Opaque Training Data Sources, and Opaque Reasoning Process.]
When FinTech Meets Privacy: Securing Financial LLMs with Differential Private Fine-Tuning § Abstract The integration of Large Language Models (LLMs) into financial technology (FinTech) has revolutionized the analysis and processing of complex financial data, driving advancements in real-time decision-making and analytics. With the growing trend of deploying AI models on edge devices for financial applications, ensuring the privacy of sensitive financial data has become a significant challenge. To address this, we propose DPFinLLM, a privacy-enhanced, lightweight LLM specifically designed for on-device financial applications. DPFinLLM combines a robust differential privacy mechanism with a streamlined architecture inspired by state-of-the-art models, enabling secure and efficient processing of financial data. The proposed DPFinLLM can not only safeguard user data from privacy breaches but also ensure high performance across diverse financial tasks. Extensive experiments on multiple financial sentiment datasets validate the effectiveness of DPFinLLM, demonstrating its ability to achieve performance comparable to fully fine-tuned models, even under strict privacy constraints. Hoyeung Leung (1), Xiaoyi Wang (2), Jia Wei (3), Honghui Xu (3,4). (1) Georgia Institute of Technology, Atlanta, GA, USA; (2) Sichuan University of Media and Communications, Chengdu, Sichuan, China; (3) Kennesaw State University, Marietta, GA, USA; (4) Corresponding author, email: hxu10@kennesaw.edu. Index terms: FinTech, Differential Privacy, Financial LLM. § INTRODUCTION The proliferation of LLMs has revolutionized natural language understanding and generation, driving significant advancements in the FinTech sector. These models excel in processing complex financial data, enabling applications such as sentiment analysis, risk management, fraud detection, and credit scoring.
By providing actionable insights and facilitating real-time decision-making, LLMs have become indispensable tools in modern financial services, empowering FinTech solutions to deliver smarter, faster, and more secure financial operations. Recently, there has been a growing trend toward deploying AI models on edge devices for financial applications. In light of this, on-device financial LLMs will offer several advantages, including real-time data processing, reduced dependency on cloud services, and enhanced user data privacy—critical components in the evolving FinTech landscape. However, this trend also introduces significant challenges. On-device models must achieve a delicate balance between computational efficiency, privacy protection, and task performance, making their design and training particularly complex. Addressing these challenges is essential to advancing the integration of LLMs into secure, efficient, and innovative FinTech ecosystems. The sensitive nature of financial data amplifies the challenges associated with deploying on-device financial LLMs. Membership inference attacks and model inversion attacks on these AI models pose significant risks, enabling unauthorized access to private financial information. While existing differential privacy techniques have been explored across three key phases, input dataset preparation, model training, and model output generation, their application to on-device financial LLMs remains under-researched. Bridging this gap is crucial to ensuring the secure and effective deployment of financial LLMs on edge devices, safeguarding sensitive data while maintaining financial LLMs' performance. To address these challenges, we introduce DPFinLLM, a novel on-device financial large language model that integrates a lightweight architectural design with a robust differential privacy mechanism. 
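The differential-privacy mechanism referenced here is typically applied at the gradient level during training. A DP-SGD-style sketch; this is an assumption about the general technique, not DPFinLLM's published recipe, and the function name and parameters are invented:

```python
import math
import random

def dp_aggregate(per_example_grads, clip_norm=1.0, noise_mult=1.0):
    """DP-SGD-style update: clip each per-example gradient to clip_norm,
    sum, add Gaussian noise scaled to the clipping bound, then average."""
    dim = len(per_example_grads[0])
    total = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i, x in enumerate(g):
            total[i] += x * scale      # clipped contribution of one example
    sigma = noise_mult * clip_norm     # noise calibrated to the sensitivity
    noisy = [t + random.gauss(0.0, sigma) for t in total]
    n = len(per_example_grads)
    return [x / n for x in noisy]
```

A privacy accountant would translate the noise multiplier into an (ε, δ) guarantee; in a LoRA setup one would presumably apply this only to the adapter gradients, though the exact combination used in DPFinLLM may differ.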
DPFinLLM employs a privacy-enhanced training pipeline to safeguard sensitive financial data while maintaining high performance across various financial tasks. Drawing inspiration from state-of-the-art models like Llama2 and ChatGLM2, DPFinLLM features a streamlined architecture optimized for edge devices. By incorporating Low-Rank Adaptation (LoRA) for fine-tuning, the model achieves computational efficiency and supports task-specific optimization with minimal resource requirements. The key contributions of this paper are summarized as follows: * We propose DPFinLLM, a privacy-enhanced on-device financial LLM that integrates differential privacy techniques with a lightweight architectural design tailored for edge devices. * A robust differential privacy mechanism is incorporated into the fine-tuning process to protect sensitive financial data from privacy breaches. * Comprehensive experiments on multiple financial sentiment datasets validate the effectiveness of DPFinLLM, demonstrating performance comparable to baseline models even under strict privacy constraints. The remainder of this paper is structured as follows: Section reviews existing research on financial LLMs and privacy-preserving techniques. Section details the architectural design and privacy-preserving training framewo
In-Context Learning Enhanced Credibility Transformer § Abstract The starting point of our network architecture is the Credibility Transformer which extends the classical Transformer architecture by a credibility mechanism to improve model learning and predictive performance. This Credibility Transformer learns credibilitized CLS tokens that serve as learned representations of the original input features. In this paper we present a new paradigm that augments this architecture by an in-context learning mechanism, i.e., we increase the information set by a context batch consisting of similar instances. This allows the model to enhance the CLS token representations of the instances by additional in-context information and fine-tuning. We empirically verify that this in-context learning enhances predictive accuracy by adapting to similar risk patterns. Moreover, this in-context learning also allows the model to generalize to new instances which, e.g., have feature levels in the categorical covariates that were not present when the model was trained – for a relevant example, think of a new vehicle model which has just been developed by a car manufacturer. Keywords. Transformer, Credibility Transformer, In-Context Learning, Attention Layer, Foundation Model, Insurance Pricing. § INTRODUCTION A fundamental challenge in actuarial modeling lies in balancing predictive performance, explainability and reliable model estimation. This is especially true for risks for which one does not have a long observation history. Classical credibility theory provides an elegant framework for addressing this challenge through an optimal linear combination of instance-individual and collective claims experience. In parallel, the machine learning community has developed sophisticated approaches for learning from limited data, culminating in the emergence of in-context learning (ICL) capabilities in large-scale Transformer models.
The discovery that large-scale Transformers can adapt to new tasks through demonstration examples revolutionized our understanding of few-shot learning. That is, based on off-line training, a large-scale Transformer architecture can perform new tasks if sufficient context for these new tasks is provided; in particular, these tasks are performed without retraining the Transformer architecture for the new situation. In some sense, this resembles the capability of performing transfer learning. Recent theoretical work has established that this phenomenon can be understood as implicit gradient descent, Bayesian inference, and ICL on tabular data. Recent work introduced the Credibility Transformer, demonstrating how classical credibility concepts can be embedded within modern Transformer architectures, helping to improve the predictive performance of Transformers on tabular input data. Their approach leverages the CLS token mechanism to implement a learnable prior that is combined with feature-specific information through an attention-weighted averaging – similarly to linear credibility – achieving state-of-the-art performance on insurance datasets while maintaining explainability. A main benefit of this credibility mechanism, which acts similarly to drop-out during model training, is the stabilization of the training algorithm, leading to delayed early stopping points and better feature extraction. However, existing approaches, including the Credibility Transformer, evaluate each risk in isolation during inference, neglecting potentially valuable information from similar observations within the portfolio. This limitation becomes particularly relevant in practical scenarios where insurers must price policies for risk profiles with limited historical experience while having access to recent claims data from similar risks; just think of expanding an existing insurance product to new geographical regions.
The ability to leverage this contextual information dynamically, without retraining the model, may significantly enhance predictive accuracy and adapt to risk patterns of similar experience. §.§ Literature review and background §.§.§ Classical credibility theory Credibility theory provides the theoretical foundation for combining individual experience with collective information in actuarial applications. Seminal work established the mathematical framework for an optimal linear prediction taking into account both sources of information. Generally, combining different sources of information involves intractable Bayesian posterior computations; to cope with this intractability, a best-linear approximation was proposed that minimizes the mean squared error (MSE). This linear approximation takes the form μ = α Y + (1 - α) μ_0, where Y reflects the individual experience, μ_0 is the collective information, and α ∈ [0,1] is the credibility weight that weighs these two components. The credibility weight takes the form α = w/(w + κ), where w > 0 is a case weight indicating the amount of infor
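The linear credibility formula above is simple enough to compute directly. A minimal sketch (the function and variable names are mine, not the literature's notation):

```python
def credibility_estimate(y_individual, mu_collective, w, kappa):
    """Linear credibility: mu = alpha * Y + (1 - alpha) * mu_0,
    with credibility weight alpha = w / (w + kappa)."""
    alpha = w / (w + kappa)
    return alpha * y_individual + (1 - alpha) * mu_collective

# More individual exposure (larger case weight w) pulls the estimate toward Y:
low_w = credibility_estimate(100.0, 80.0, w=5.0, kappa=10.0)    # alpha = 1/3
high_w = credibility_estimate(100.0, 80.0, w=30.0, kappa=10.0)  # alpha = 0.75
```

As w grows relative to κ, α approaches 1 and the estimate trusts the individual experience; with no individual data (w = 0) the estimate collapses to the collective mean μ_0.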
Cross-Layer Attention Probing for Fine-Grained Hallucination Detection § Abstract With the large-scale adoption of Large Language Models (LLMs) in various applications, there is a growing reliability concern due to their tendency to generate inaccurate text, i.e. hallucinations. In this work, we propose Cross-Layer Attention Probing (CLAP), a novel activation probing technique for hallucination detection, which processes the LLM activations across the entire residual stream as a joint sequence. Our empirical evaluations using five LLMs and three tasks show that CLAP improves hallucination detection compared to baselines on both greedy decoded responses as well as responses sampled at higher temperatures, thus enabling fine-grained detection, i.e. the ability to disambiguate hallucinations and non-hallucinations among different sampled responses to a given prompt. This allows us to propose a detect-then-mitigate strategy using CLAP to reduce hallucinations and improve LLM reliability compared to direct mitigation approaches. Finally, we show that CLAP maintains high reliability even when applied out-of-distribution. Copyright 2025 for this paper by its authors; use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). TRUST-AI: The European Workshop on Trustworthy AI, organized as part of the European Conference on Artificial Intelligence (ECAI 2025), October 2025, Bologna, Italy. Malavika Suresh (1, corresponding author, m.suresh@rgu.ac.uk), Rahaf Aljundi (2, rahaf.al.jundi@toyota-europe.com), Ikechukwu Nkisi-Orji (1, i.nkisi-orji@rgu.ac.uk), Nirmalie Wiratunga (1, n.wiratunga@rgu.ac.uk). (1) Robert Gordon University, Aberdeen, United Kingdom; (2) Toyota Motor Europe, Brussels, Belgium.
hallucination detection activation probing large language models § INTRODUCTION Large Language Models (LLMs) have become increasingly accessible and scalable for commercial use, largely due to the API-based access offered by several LLM platform providers. From AI-generated summaries in search engines to chatbots in various health and business sector applications, LLM generated text is being widely consumed by a large proportion of the population. Such widespread adoption increases the risk of spreading misinformation and causing harm to users through factually incorrect LLM-generated text, i.e. hallucinations. Improving the trustworthiness by detecting and mitigating hallucinations is therefore an important research objective. Current approaches for tackling LLM hallucinations fall under three broad categories: black-box, grey-box and open-box methods. Among open-box methods, some recent works have established the potential of building binary hallucination detectors using raw LLM activations as input, termed activation probing. Other works have shown that hallucinations can be mitigated by directly editing activations or output probabilities at generation time. However, the effect of activation-editing on other model capabilities is not well understood. While prior work on activation probing has focused on individual layers, in this work we introduce Cross-Layer Attention Probing (CLAP), a fundamentally different approach that utilises the full residual stream (i.e. activations from all LLM layers) to probe model behaviour more comprehensively. We hypothesise that the contribution of activations at different layers to hallucination detection varies for different tasks. To extract the most relevant information across layers, our method constructs a sequence of tokens by considering the activations at each LLM layer as an input token, and employs an attention mechanism over the sequence input. 
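The cross-layer construction described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: it treats each layer's residual-stream activation as one token of a length-L sequence and pools it with a single learned attention query (the single-query pooling, shapes, and random weights are all assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, d_model = 8, 16   # illustrative sizes, not a real LLM's

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cross_layer_probe(layer_acts, w_query, w_out):
    """layer_acts: (n_layers, d_model), one residual-stream vector per layer."""
    scores = layer_acts @ w_query        # one attention score per layer
    attn = softmax(scores)               # learned weighting over layers
    pooled = attn @ layer_acts           # (d_model,) attention-weighted mix
    logit = pooled @ w_out               # binary hallucination logit
    return logit, attn

acts = rng.normal(size=(n_layers, d_model))
w_q = rng.normal(size=d_model)           # stand-in for trained probe weights
w_o = rng.normal(size=d_model)
logit, attn = cross_layer_probe(acts, w_q, w_o)
assert attn.shape == (n_layers,) and abs(attn.sum() - 1.0) < 1e-9
```

The attention weights make the probe's layer usage inspectable: after training, `attn` would indicate which layers the detector relies on for a given task.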
The design of our proposed probing technique is motivated by prior studies investigating the role of different LLM layers in language generation and hallucinations. We model hallucination detection as a supervised classification problem, which is supported by recent work that shows that automatic hallucination detection is not possible without both positive and negative examples. First, across five LLMs and three tasks (two factual question-answering tasks and one chain-of-thought reasoning task), we show that our method improves over uncertainty baselines and activation probing methods that consider only individual layers. Next, we build on the observation that different responses sampled for a given prompt can vary, with some being hallucinated and others not. We leverage the responses in the sampled space to augment the training data. We find that our proposed method, by learning to attend to different layers, can leverage this fine-grained supervision signal better to provide improved fine-grained detection performance compared to baselines. We further explore the integration of our method with hallucination mitigation pipelines, such as DoLa. Noting that mitigation methods can adversely affect originally non-hallucinated samples, we combine CLAP with DoLa. Our results demonstr
FINITE SCALAR QUANTIZATION ENABLES REDUNDANT AND TRANSMISSION-ROBUST NEURAL AUDIO COMPRESSION AT LOW BIT-RATES § Abstract Neural Audio Codecs (NACs) have been increasingly adopted in speech processing tasks due to their excellent rate-distortion performance and compatibility with Large Language Models (LLMs) as discrete feature representations for audio generation. While most existing codecs rely on Residual Vector Quantization (RVQ), Finite Scalar Quantization (FSQ) has recently emerged as a compelling alternative that simplifies training and natively supports single codebooks. We introduce NeuCodec, an FSQ-based NAC, and show that FSQ bakes redundancy into its codes, producing encodings that are robust to transmission through noisy channels. First, through an encoder distillation experiment, we show that two different encoders can learn to encode identical audio into vastly different code sequences whilst maintaining comparable reconstruction quality with the same quantizer and decoder. Second, we demonstrate that FSQ has vastly superior bit-level perturbation robustness by comparing the performance of RVQ and FSQ codecs when simulating the transmission of code sequences through a noisy channel. Keywords: Audio Compression, Neural Compression, Neural Audio Codec, Residual Vector Quantization, Finite Scalar Quantization. § INTRODUCTION Recently, Neural Audio Codecs (NACs) have gained widespread usage in speech processing, due to their ability to compress speech into ultra-low bitrate discrete code sequences whilst maintaining high perceptual quality when reconstructing these sequences back into waveforms.
The autoencoding task used to train NACs embeds a compressed latent representation of speech features into discrete sequences of codes, which are useful for training autoregressive transformers to complete downstream audio tasks such as Text-to-Speech (TTS), Automatic Speech Recognition (ASR) and Full Duplex Speech Modeling; they can also be used as a domain-specific tokenized vocabulary that Large Language Models (LLMs) can be adapted to use for audio generation. Conventionally, the most widely used NACs have utilized Residual Vector Quantization (RVQ), where at each encoder output timestep, the encoded feature representation is quantized by a top-level `coarse' codebook, and additional codebooks quantize the residual error from each prior quantization operation. Although effective, RVQ presents training challenges, as propagating gradients to the codeword vectors to align them with the unquantized encoder outputs necessitates the use of auxiliary loss functions. This creates a delicate optimization problem that often leads to codebook collapse where only a subset of codewords is used. Additionally, RVQ also requires a comparatively complicated downstream modeling setup, as the sequence length is expanded by the number of quantized residuals; mechanisms to model the hierarchical nature of RVQ codes commonly rely on two separate transformers that operate globally and locally. Finite Scalar Quantization (FSQ), a method that uses a simple fixed-grid for partitioning the codebook, constructs a single codebook by quantizing each output vector dimension, treating each dimension as an implicit codebook, rather than quantizing an entire latent vector as a whole. Using FSQ results in almost complete codebook utilization, requires no auxiliary losses to train and affords simpler downstream architectures due to the usage of a single codebook, rather than multiple recursively dependent codes needing to be predicted per timestep. 
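The per-dimension quantization described above can be illustrated in a few lines. This is a generic FSQ sketch under assumed level counts, not NeuCodec's actual configuration: each latent dimension is bounded and rounded independently to a fixed grid, so the implicit codebook is the product of the per-dimension levels.

```python
import numpy as np

def fsq_quantize(z, levels):
    """z: (d,) latent vector; levels: number of grid levels per dimension."""
    z = np.tanh(z)                        # bound each dimension to (-1, 1)
    half = (np.asarray(levels) - 1) / 2.0
    codes = np.round(z * half) + half     # non-negative integer code per dim
    z_q = codes / half - 1.0              # dequantized grid value in [-1, 1]
    return codes.astype(int), z_q

codes, z_q = fsq_quantize(np.array([0.2, -1.5, 3.0]), levels=[5, 5, 3])
print(codes)   # one small integer per dimension
# Implicit codebook size = 5 * 5 * 3 = 75 codes, with no learned codewords,
# no auxiliary commitment losses, and trivially full codebook utilization.
```

Because each dimension is quantized independently, flipping one transmitted code perturbs only that dimension of the latent, which is one intuition for the bit-level robustness studied in the paper.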
Through experimentation with our codec, NeuCodec, we show that FSQ-based codecs also exhibit an additional perturbation robustness property in their code sequences. First, we introduce NeuCodec, our FSQ-based codec model. Second, via an encoder distillation experiment with NeuCodec, we show that two encoders can learn to encode the same audio into very different code sequences given a fixed quantizer and decoder, yet the audio can be reconstructed to similar perceptual fidelity from either sequence; analyzing the differences between the representations suggests the learned encoding is localized and has redundancy baked in. Third, via a perturbation experiment where we simulate transmission of codes from various FSQ and RVQ codecs through a noisy channel, we show that FSQ-based codecs exhibit better performance under reasonably large levels of perturbation. We offer explanations for this phenomenon and speculate on future applications of FSQ-based codecs in light of this property. § BACKGROUND RVQ discretizes an embedding space by first performing Vector Quantization over a finite codebook, after which discretization errors (i.e. the difference between the latent vector and its nearest-neighbor codeword embedding) are obtained and discretized again, a process that continues for a predetermined number of codebooks. This means that embeddings can be accurately represented through a hierar
LEARNING-BASED PLANNING FOR IMPROVING SCIENCE RETURN OF EARTH OBSERVATION SATELLITES § Abstract Earth observing satellites are powerful tools for collecting scientific information about our planet; however, they have limitations: they cannot easily deviate from their orbital trajectories, their sensors have a limited field of view, and pointing and operating these sensors can take a large amount of the spacecraft's resources. It is important for these satellites to optimize the data they collect and include only the most important or informative measurements. Dynamic targeting is an emerging concept in which satellite resources and data from a lookahead instrument are used to intelligently reconfigure and point a primary instrument. Simulation studies have shown that dynamic targeting increases the amount of scientific information gathered versus conventional sampling strategies. In this work, we present two different learning-based approaches to dynamic targeting, using reinforcement and imitation learning, respectively. These learning methods build on a dynamic programming solution to plan a sequence of sampling locations. We evaluate our approaches against existing heuristic methods for dynamic targeting, showing the benefits of using learning for this application. Imitation learning performs on average 10.0% better than the best heuristic method, while reinforcement learning performs on average 13.7% better. We also show that both learning methods can be trained effectively with relatively small amounts of data. Copyright 2024. All rights reserved. § INTRODUCTION [Figure: An Earth observation satellite with a primary sensor and a lookahead sensor that provides information about the environment on the path ahead. The goal of this work is to determine where to point the primary instrument to measure the highest-value scientific targets along the path.]
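The dynamic-programming idea the learning methods build on can be sketched compactly. This is a hedged toy, not the paper's formulation: at each along-track step the planner picks a cross-track pointing cell, maximizing total science value subject to a maximum slew per step; the grid, values, and slew limit are invented for illustration.

```python
def plan_pointing(value, max_slew):
    """value[t][c]: science value of pointing at cross-track cell c at step t.
    Backward DP over value-to-go; returns the best achievable total value."""
    T, C = len(value), len(value[0])
    best = list(value[-1])                     # best[c]: value-to-go from cell c
    for t in range(T - 2, -1, -1):
        new = []
        for c in range(C):
            # Only cells within max_slew of c are reachable at the next step.
            reachable = best[max(0, c - max_slew): c + max_slew + 1]
            new.append(value[t][c] + max(reachable))
        best = new
    return max(best)

values = [[0, 3, 1],    # rows: along-track steps, columns: cross-track cells
          [2, 0, 4],
          [1, 5, 0]]
print(plan_pointing(values, max_slew=1))   # best slew-feasible sampling plan
```

The heuristic baselines mentioned in the text would pick cells greedily per step; the DP instead accounts for how each pointing choice constrains what is reachable later.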
Earth observing satellites serve an important purpose in acquiring scientific data, with applications in various fields such as geology, meteorology, and climate science. However, these satellites have limitations in terms of data collection. They are confined to predetermined orbital paths, possess sensors with restricted fields of view, and necessitate a significant portion of the spacecraft's energy for sensor operation and positioning. Therefore, it is imperative for these satellites to optimize their data collection by focusing on the most informative measurements. The primary objective of this work is to develop strategies for determining the most efficient way for Earth observation satellites to sample their surroundings while considering practical constraints, particularly power consumption. We concentrate on the concept of dynamic targeting, i.e. developing intelligent methods for orienting satellite instruments to enhance their scientific output. Prior research has indicated that employing dynamic targeting methods results in a higher volume of informative scientific data compared to uniform or random sampling techniques. Previous approaches to this challenge have used various methodologies such as greedy algorithms, local search techniques, constraint programming, and dynamic programming. In this work, we introduce a novel approach to dynamic targeting that leverages two learning frameworks to address this problem, reinforcement and imitation learning. § BACKGROUND §.§ Related Work Satellites equipped with agile (pointable) instruments offer a more efficient approach to data collection compared to those featuring static “pushbroom" sensors. The Eagle Eye project has developed planning and scheduling models for observations of pointable 2D satellite instruments, and Lemaître et al. focus on scheduling for Agile Earth Observing Satellites (AEOS) which have three degrees of freedom for image acquisition. 
However, selecting observations for highly-agile instruments is an ongoing challenge; various techniques have been proposed to address this problem, including greedy algorithms, constraint programming approaches, dynamic programming, and local search methods. Existing works on agile instruments generally do not leverage past observations. Liao and Yang present a strategy for scheduling the order of imaging operations for FormoSat-2, a low-earth-orbit remote sensing satellite that previously imaged Taiwan. This approach takes into account current and upcoming weather conditions to provide a plan, which is periodically rescheduled using a rolling horizon scheme. The Autonomous Sciencecraft (ASE) was used to analyze acquired imagery and schedule future observations on the Earth Observing One spacecraft for over a decade. ASE operated on the orbital timescale (roughly 90 minutes), whereas our method and others can respond within minutes or faster. A prototype of a heuristic-based spacecraft pointing scheduler is presented by Chien and Troesch, which operates on the order of one overflight (about 8 minutes). The German FireBIRD autonomy mission concept involv
Difficulty-Aware Agent Orchestration in LLM-Powered Workflows § Abstract Large Language Model (LLM)-based agentic systems have shown strong capabilities across various tasks. However, existing multi-agent frameworks often rely on static or task-level workflows, which either over-process simple queries or underperform on complex ones, while also neglecting the efficiency-performance trade-offs across heterogeneous LLMs. To address these limitations, we propose Difficulty-Aware Agentic Orchestration (DAAO), a dynamic framework that adapts workflow depth, operator selection, and LLM assignment based on the difficulty of each input query. DAAO comprises three interdependent modules: a variational autoencoder (VAE) for difficulty estimation, a modular operator allocator, and a cost- and performance-aware LLM router. By leveraging heterogeneous LLMs and dynamically tailoring workflows, DAAO enables fine-grained, query-specific reasoning strategies. DAAO outperforms prior multi-agent systems in both accuracy and inference efficiency across six benchmarks. We will release our code and implementation details upon publication. [Figure: The overall framework of our proposed DAAO.] § INTRODUCTION Large Language Model (LLM)-based agents have exhibited remarkable capabilities across a wide spectrum of tasks, including question answering, data analysis, decision-making, code generation and web navigation. Building upon the success of single agents, recent advancements reveal that organizing multiple LLM-based agents into structured agentic workflows can significantly enhance task performance. In such workflows, agents can interact either cooperatively or competitively depending on the task context. These multi-agent systems can overcome the cognitive and functional limitations of individual models, thereby exhibiting collective intelligence similar to human collaboration in a society of agents. In recent years, the research community has focused on automating multi-agent system design.
For instance, DSPy and EvoPrompting automate prompt optimization, GPTSwarm optimizes inter-agent communication, and EvoAgent self-evolves agent profiles. However, these systems are often constrained by limited search spaces and rigid representation paradigms, resulting in marginal performance gains and limited adaptability to diverse task requirements. Subsequently, ADAS and AFlow employ code as the workflow representation, facilitating robust and flexible workflow searches through different paradigms, with ADAS utilizing heuristic search and AFlow adopting Monte Carlo tree search. MaAS proposes an agentic supernet to generate a query-specific multi-agent system for each user query. Despite their success, existing automation pipelines often lack complex adaptivity and LLM heterogeneity. Task-level workflows are typically built as uniform multi-agent systems for entire task categories, achieving strong metrics like accuracy and pass@k but relying on heavy pipelines with excessive LLM calls and tool usage. This design over-processes simple queries, wasting resources and overlooking factors like token cost and latency. Query-level workflows introduce input-specific adaptation, but their granularity is often insufficient, leading to suboptimal or oversimplified workflows for difficult inputs. These limitations motivate a difficulty-adaptive framework that dynamically balances complexity and cost. Furthermore, most workflows rely on a single large LLM (e.g., GPT-4o), ignoring recent findings that different LLMs exhibit complementary capabilities. Smaller models can outperform larger ones on specific tasks while significantly reducing cost. Thus, there is increasing support for agentic workflows that integrate diverse LLMs of varying sizes, leveraging their specialization to improve performance and efficiency.
To address the challenges of adapting reasoning strategies to queries with varying difficulty and domain characteristics, we propose Difficulty-Aware Agentic Orchestration (DAAO). DAAO is a dynamic framework that estimates the difficulty of each incoming query and composes optimized workflows by selecting suitable agentic operators—such as Chain-of-Thought (CoT), Multi-Agent Debate, and ReAct —and assigning them to appropriate LLMs. By tailoring both workflow depth and model assignment based on query characteristics, DAAO balances reasoning effectiveness and computational cost. Furthermore, the system refines its predictions over time through feedback, enabling continual adaptation. Technically, DAAO decomposes the orchestration process into three interdependent modules. It begins with a query difficulty estimator, implemented as a variational autoencoder (VAE), which encodes each input query into a latent representation reflecting its difficulty. This difficulty signal then guides the agentic operator allocator, which determines the depth of the workflow and selects suitable reasoning operators based on both the query and its inferred complexity.
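The routing behavior described above can be caricatured with a tiny dispatcher. Everything here is an assumption for illustration (the thresholds, operator names, and model tiers are invented, and DAAO's actual allocator is learned, not a hand-written rule):

```python
def orchestrate(difficulty: float):
    """Map an estimated difficulty in [0, 1] to (operator, model tier).
    Hypothetical thresholds standing in for DAAO's learned allocator/router."""
    if difficulty < 0.3:
        return ("direct-answer", "small-llm")        # cheap path for easy queries
    if difficulty < 0.7:
        return ("chain-of-thought", "mid-llm")       # moderate reasoning depth
    return ("multi-agent-debate", "large-llm")       # deep workflow when hard

print(orchestrate(0.1))
print(orchestrate(0.9))
```

The point of the sketch is the shape of the decision: both the reasoning operator and the LLM assignment vary with estimated difficulty, so easy queries never pay for a heavy multi-agent pipeline.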
Compressing CNN models for resource-constrained systems by channel and layer pruning § Abstract Convolutional Neural Networks (CNNs) have achieved significant breakthroughs in various fields. However, these advancements have led to a substantial increase in the complexity and size of these networks. This poses a challenge when deploying large and complex networks on edge devices. Consequently, model compression has emerged as a research field aimed at reducing the size and complexity of CNNs. One prominent technique in model compression is model pruning. This paper will present a new technique of pruning that combines both channel and layer pruning in what is called a "hybrid pruning framework". Inspired by EfficientNet, a renowned CNN architecture known for scaling up networks from both channel and layer perspectives, this hybrid approach applies the same principles but in reverse, where it scales down the network through pruning. Experiments on the hybrid approach demonstrated a notable decrease in the overall complexity of the model, with only a minimal reduction in accuracy compared to the baseline model. This complexity reduction translates into reduced latency when deploying the pruned models on an NVIDIA JETSON TX2 embedded AI device. A. Sadaqa (ahmedsad@stud.ntnu.no) and D. Liu (di.liu@ntnu.no), Norwegian University of Science and Technology, NO-7491 Trondheim, Norway. § INTRODUCTION §.§ Overview In recent years, Convolutional Neural Networks (CNNs) have achieved significant performance in various tasks, including Object Detection, Image Classification, Semantic Segmentation, and Image Captioning. These advancements require models to become more complex to capture intricate patterns in the data. However, large and complex models consume substantial memory and computation time.
For example, VGG16, with its 16 layers and over 130 million parameters, requires 96MB of memory storage per image for a single forward pass, and this requirement doubles when considering the backward pass. Another example is ResNet50, which requires 29 hours of training on ImageNet using an Nvidia Tesla P100. Large and complex models are typically used to predict an output using new data, a computational process known as DNN inference. Due to their considerable size and complexity, inference is usually performed in the cloud. In this scenario, data generated by the edge device is sent to the cloud for inference, and the result is sent back to the edge device. This process of transferring data back and forth increases latency, which can be problematic for real-time applications. To address this issue, research indicates that deploying models on edge devices for inference can reduce latency and result in faster inference times. Deploying large models on edge devices is not feasible due to their limited memory and computational capacity. Model Compression emerges as a field aimed at transforming large models into smaller ones that can achieve performance similar to the larger models. A common technique for model compression, and the one discussed in this paper, is model pruning. Model pruning aims to reduce the size and complexity of the model by removing redundant structures such as neurons, channels, or layers. By removing these structures, the number of parameters is reduced, resulting in a decrease in model complexity and size. In practice, channel and layer pruning stands out as the most prominent pruning techniques, and they will be the focus of this paper. §.§ Main Contributions This paper proposes a novel pruning framework that combines channel and layer pruning to achieve maximal compression while maintaining the model's predictive capacity. 
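As a concrete illustration of the channel-pruning building block, here is a minimal sketch using an L1-norm ranking criterion. The criterion, tensor shapes, and keep ratio are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def prune_channels(weight, keep_ratio):
    """weight: (out_ch, in_ch, kh, kw) conv kernel.
    Rank output channels by L1 norm and keep the strongest fraction."""
    norms = np.abs(weight).reshape(weight.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(weight.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(norms)[-n_keep:])   # indices of strongest channels
    return weight[keep], keep

# Toy 4-channel 3x3 conv layer; later channels have larger weights here.
w = np.arange(4 * 2 * 3 * 3, dtype=float).reshape(4, 2, 3, 3)
w_pruned, kept = prune_channels(w, keep_ratio=0.5)
print(w_pruned.shape, list(kept))
```

In a real pipeline, pruning channel c also requires slicing the corresponding input channel out of the next layer's kernel and usually a fine-tuning pass to recover accuracy; both are omitted in this sketch.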
This will be achieved by using existing channel pruning algorithms in addition to an effective layer pruning algorithm that will be proposed in the following sections. The novel approach is not the layer or channel pruning algorithm per se but rather the concept of applying them sequentially to reduce the network's width and depth. Furthermore, the paper argues that utilizing existing criteria from the literature and applying them in a particular way can yield better results, especially in terms of reducing complexity. The framework operates in two phases: it begins by executing the channel pruning algorithm, which iteratively removes redundant channels. Next, the pruned model is passed through the layer pruning algorithm, which prunes entire blocks of layers in a one-shot manner. This pruning framework reduces both the width (channels) and depth (layers) of the network. This framework is inspired by EfficientNet, a CNN architecture developed by Google for image classification. The key innovation of EfficientNet is its method of scaling up the network's depth and width. The depth of the network refers to the number of layers, while the width refers to the
Mechanism Design with Outliers and Predictions § Abstract We initiate the study of mechanism design with outliers, where the designer can discard z agents from the social cost objective. This setting is particularly relevant when some agents exhibit extreme or atypical preferences. As a natural case study, we consider facility location on the line: n strategic agents report their preferred locations, and a mechanism places a facility to minimize a social cost function. In our setting, the z agents farthest from the chosen facility are excluded from the social cost. While it may seem intuitive that discarding outliers improves efficiency, our results reveal that the opposite can hold. We derive tight bounds for deterministic strategyproof mechanisms under the two most-studied objectives: utilitarian and egalitarian social cost. Our results offer a comprehensive view of the impact of outliers. We first show that when z ≥ n/2, no strategyproof mechanism can achieve a bounded approximation for either objective. For egalitarian cost, selecting the (z + 1)-th order statistic is strategyproof and 2-approximate. In fact, we show that this is best possible by providing a matching lower bound. Notably, this lower bound of 2 persists even when the mechanism has access to a prediction of the optimal location, in stark contrast to the setting without outliers. For utilitarian cost, we show that strategyproof mechanisms cannot effectively exploit outliers, leading to the counterintuitive outcome that approximation guarantees worsen as the number of outliers increases. However, in this case, access to a prediction allows us to design a strategyproof mechanism achieving the best possible trade-off between consistency and robustness. 
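The (z+1)-th order statistic mechanism described above for the egalitarian objective is simple enough to sketch directly; the reports and the value of z below are a made-up example.

```python
def order_statistic_mechanism(reports, z):
    """Place the facility at the (z+1)-th smallest reported location.
    With z outliers excluded, this is strategyproof and 2-approximate
    for the egalitarian social cost on the line."""
    return sorted(reports)[z]   # 0-indexed: element z is the (z+1)-th smallest

reports = [0.0, 1.0, 4.0, 9.0, 10.0]
print(order_statistic_mechanism(reports, z=1))   # with one outlier discarded
```

Intuitively, an agent to the left of the chosen statistic can only move the facility further away by misreporting, and symmetrically on the right, which is the standard argument for why order statistics are strategyproof on the line.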
Finally, we also establish lower bounds for randomized mechanisms that are truthful in expectation. Argyrios Deligkas and Eduard Eiben (Royal Holloway, University of London, United Kingdom); Sophie Klumper and Guido Schäfer (Centrum Wiskunde & Informatica (CWI) and University of Amsterdam, The Netherlands); Artem Tsikiridis (Technical University of Munich, Germany). § INTRODUCTION You are the coordinator of the annual social event of your department and your task is to choose the venue. Of course, you could decide on your own without asking your colleagues, making you really unpopular within your department. Ideally, though, you would like to choose a venue that is aligned with the preferences of your colleagues—it is known that everyone wants the venue to be as close as possible to their place. However, you are facing two major issues you need to overcome. First, you do not want to be manipulated by your colleagues and you want to incentivize them to declare their true preferences. In addition, you know from last year that due to stubborn-Joe—who lives 3 hours away from the department—everyone else had to commute at least one hour to reach the chosen venue. You have decided that this year you will leave out this type of “outliers” in order to make a better decision for the majority of your colleagues. After all, not considering some outliers should simplify the problem, shouldn't it? Problems like the one described above fall into the category of truthful facility location problems, which have been extensively studied for more than 45 years. Furthermore, the seminal paper of established facility location problems as the paradigm for mechanism design without money. Since then, a wide variety of models, settings, and mechanisms have been studied; see, for example, the survey of for an overview of different models.
In this work, we revisit the two foundational models of truthful facility location, the one of and the one of, and study them under the presence of outliers (see ). Although outliers have been a popular consideration in algorithm design in general, to the best of our knowledge, they have not been considered in the context of mechanism design. In this paper, we take the first steps towards understanding the impact of outliers on a fundamental mechanism design problem. §.§ Our Contribution Our main contributions are as follows. * We introduce the notion of outliers in mechanism design problems. We envision that outliers can be meaningfully incorporated into any mechanism design problem, which opens up a whole new domain of mechanism design problems with outliers. Studying the impact of outliers is not only theoretically appealing but also practically relevant, especially in applications involving agents with extreme or atypical preferences. * We use single facility location on the real line as a first, natural test case for our setting of mechanism design with outliers. We derive tight bounds for deterministic strategyproof mechanisms for the two most-studied objectives, i.e., utilitarian and egalitarian social cost. We provide a complete picture of the impact of outliers and our results reveal some counter-intuitive phenomena. We also extend our analysis to randomiz
Diffusion-Guided Multi-Arm Motion Planning § Abstract Multi-arm motion planning is fundamental for enabling arms to complete complex long-horizon tasks in shared spaces efficiently, but current methods struggle with scalability due to exponential state-space growth and reliance on large training datasets for learned models. Inspired by Multi-Agent Path Finding (MAPF), which decomposes planning into single-agent problems coupled with collision resolution, we propose a novel diffusion-guided multi-arm planner (DG-MAP) that enhances the scalability of learning-based models while reducing their reliance on massive multi-arm datasets. Recognizing that collisions are primarily pairwise, we train two conditional diffusion models: one to generate feasible single-arm trajectories, and a second to model the dual-arm dynamics required for effective pairwise collision resolution. By integrating these specialized generative models within a MAPF-inspired structured decomposition, our planner efficiently scales to larger numbers of arms. Evaluations against alternative learning-based methods across various team sizes demonstrate our method's effectiveness and practical applicability. Project website: <https://diff-mapf-mers.csail.mit.edu> § INTRODUCTION The ability of multiple arms to effectively coordinate in shared spaces is crucial for solving complex tasks beyond the capability of individual arms. Humans naturally perform many such sophisticated tasks by leveraging social interaction, shared information, and distributed responsibilities. Enabling robotic systems, particularly those with multiple manipulators operating in close proximity, to achieve similar efficient, collision-free coordination remains a central challenge. This involves handling high-dimensional joint configuration spaces inherent to multi-arm systems.
Traditional sampling-based motion planners (SMPs), such as RRT and PRM, face significant challenges when applied to multi-arm systems due to the curse of dimensionality in the joint space. Alternatively, optimization-based planners can efficiently refine trajectories but are susceptible to local minima and often require good initial seeds. Recently, learning-based methods have emerged that can guide such planners with a primary focus on enhancing single-arm planning. Extending them to multi-arm setups would require expensive multi-arm training data and solving large, non-convex optimization problems. To address the multi-agent scalability more directly, other methods have integrated SMPs with Multi-Agent Path Finding (MAPF) decomposition techniques, constructing individual roadmaps for each arm and coordinating them using MAPF-like solvers. Nonetheless, scalability remains constrained, as the computational cost of generating initial SMP roadmaps becomes a major bottleneck, particularly as the number of arms increases. To address these scalability challenges, end-to-end learning-based approaches have gained increasing attention. In particular, Multi-Agent Reinforcement Learning (MARL) methods have emerged, training decentralized policies using expert demonstrations from SMPs such as BiRRT. While they have been shown to be scalable, MARL policies often depend heavily on diverse training data, and those trained primarily on simpler interactions involving only two arms frequently fail to generalize to more complex, larger team scenarios. An alternative line of work leverages diffusion models, which offer a promising combination of generative flexibility and explicit constraint handling. For instance, approaches such as Multi-robot Multi-model planning Diffusion (MMD) integrate single-robot diffusion models within MAPF frameworks to enable collision-free navigation. 
Nonetheless, encoding complex multi-agent geometric constraints, particularly for articulated manipulators, within diffusion models remains a substantial challenge. Furthermore, current end-to-end multi-agent diffusion policies require centralized training with full-team data, limiting their scalability to unseen team compositions and motivating the need for novel, multi-arm-specific approaches. [Figure: Multi-Arm Motion Planning. Timestamped snapshots compare an end-to-end learning method (top) with our DG-MAP approach (bottom) on a multi-arm pick-and-place task. Both were trained using only lower-order interaction data (single/dual-arm trajectories). The end-to-end method fails due to collision with another arm in the shared workspace, while DG-MAP successfully completes the task by leveraging specialized diffusion models combined with MAPF-inspired structured decomposition.] Achieving effective multi-arm coordination requires planners that are scalable to large teams, adaptable to dynamic layouts, cooperative in goal achievement, and capable of closed-loop operation for collision avoidance, as highlighted by Ha et al. and illustrated in Figure. Motivated by these challenges, we propose a novel closed-loop multi-arm motion planning framework called DG-MAP that
Improving LLM Safety and Helpfulness using SFT and DPO: A Study on OPT-350M § Abstract This research investigates the effectiveness of three alignment techniques, Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and a combined SFT+DPO approach, on improving the safety and helpfulness of the OPT-350M language model. Utilizing the Anthropic Helpful-Harmless RLHF dataset, we train and evaluate four models: the base OPT-350M, an SFT model, a DPO model, and a model trained with both SFT and DPO. We introduce three key evaluation metrics: Harmlessness Rate (HmR), Helpfulness Rate (HpR), and a Combined Alignment Score (CAS), all derived from reward model outputs. The results show that while SFT outperforms DPO, the combined SFT+DPO model outperforms all others across all metrics, demonstrating the complementary nature of these techniques. Our findings also highlight challenges posed by noisy data, limited GPU resources, and training constraints. This study offers a comprehensive view of how fine-tuning strategies affect model alignment and provides a foundation for more robust alignment pipelines in future work. § INTRODUCTION The advent of large language models (LLMs) has transformed natural language processing (NLP), powering applications from conversational agents to code generation and creative writing. Models like GPT-3, OPT, and PaLM have demonstrated remarkable capabilities in understanding and generating human-like text across a variety of tasks. Despite their impressive performance, these models often struggle with safety, alignment, and controllability. Left unchecked, they may generate content that is factually incorrect, biased, toxic, or even harmful. This has raised critical concerns about their deployment in real-world scenarios, especially when the models are used in unsupervised environments where humans might trust their outputs by default.
The safety and usability of language models are typically assessed in terms of two major factors: helpfulness and harmlessness. A helpful model is one that provides accurate, informative, and context-aware responses to user queries. A harmless model, on the other hand, avoids producing responses that are toxic, offensive, or otherwise damaging to users or society at large. While some early work approached these issues through simple rule-based filters or dataset curation, recent advances have focused on more robust methods, such as alignment through human feedback, reinforcement learning, and preference modeling [3], [8]. Among these, Reinforcement Learning from Human Feedback (RLHF) has been widely adopted as a standard technique for aligning LLMs with human preferences. However, RLHF often requires complex infrastructure, reward modeling, and high computational cost, making it difficult to apply broadly to smaller models or more constrained settings. To address the limitations of RLHF, Direct Preference Optimization (DPO) has emerged as a promising alternative [1]. DPO is a technique that directly optimizes a model using ranked human preferences, eliminating the need for an explicit reward model or reinforcement learning loop. Instead of requiring reward functions or sampling-based training, DPO treats the preference data as implicit supervision, adjusting the model parameters to increase the probability of preferred responses over non-preferred ones. This approach has shown significant promise in early research, with simpler implementation and competitive results compared to RLHF [1], [10]. In particular, DPO aligns well with practical constraints in academic and research settings, where compute and data labeling resources may be limited. This paper investigates the use of SFT and DPO, both separately and together, to improve both the helpfulness and harmlessness of a smaller LLM, OPT-350M, a compact yet representative model released by Meta AI [2].
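The DPO objective described above can be written compactly. The following minimal sketch computes the per-pair loss from [1] in plain Python; in practice this is computed over batches with automatic differentiation in a deep learning framework, and the function name and argument names here are ours:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for a single preference pair.

    logp_*     : log-probability of the response under the policy being trained
    ref_logp_* : log-probability under the frozen reference model
    beta       : temperature controlling deviation from the reference policy
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log sigmoid(margin): the loss shrinks as the policy prefers the chosen
    # response more strongly than the reference model does
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy and reference assign identical log-probabilities, the margin is zero and the loss equals log 2; raising the chosen response's log-probability under the policy decreases the loss, which is exactly the implicit supervision signal described above.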
Unlike larger models that require hundreds of gigabytes of GPU memory and are expensive to train or fine-tune, OPT-350M is computationally feasible for experimentation while still demonstrating typical LLM behaviors, including hallucination, bias, and unsafe generation under certain conditions. As the base model, it provides a realistic testbed for alignment techniques without the need for extreme scaling. For fine-tuning, we use the Anthropic Helpful and Harmless (HH-RLHF) dataset [3]. The dataset is specifically designed for alignment training, making it suitable for both supervised fine-tuning (SFT) and preference-based training such as DPO. This research project explores multiple configurations: evaluation of the base model, fine-tuning the base model with SFT alone, training with DPO directly, and applying SFT followed by DPO. This multi-phase design allows us to evaluate the effectiveness of DPO as both a standalone alignment technique and as a complement to traditional fine-tuning. To assess the effectiveness of each training method, we propose and employ three evaluation metrics for this research
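The exact definitions of HmR, HpR, and CAS are not given in this excerpt. As a rough illustration of how such rates can be derived from reward model outputs, the sketch below assumes a response counts as harmless (or helpful) when its reward score exceeds a threshold, and that CAS simply averages the two rates; the thresholding and averaging rules are our assumptions, not the paper's definitions:

```python
# Assumed definitions for illustration only: each response gets a harmlessness
# and a helpfulness score from separate reward models; a response "passes" when
# its score exceeds a threshold; CAS averages the two rates.

def harmlessness_rate(harm_scores, threshold=0.0):
    """HmR: fraction of responses scored above threshold by the harmlessness reward model."""
    return sum(s > threshold for s in harm_scores) / len(harm_scores)

def helpfulness_rate(help_scores, threshold=0.0):
    """HpR: fraction of responses scored above threshold by the helpfulness reward model."""
    return sum(s > threshold for s in help_scores) / len(help_scores)

def combined_alignment_score(harm_scores, help_scores, threshold=0.0):
    """CAS (assumed form): simple mean of HmR and HpR."""
    return 0.5 * (harmlessness_rate(harm_scores, threshold)
                  + helpfulness_rate(help_scores, threshold))
```

For example, with toy reward outputs `[0.8, -0.2, 0.5, 0.1]` for harmlessness and `[0.3, 0.6, -0.4, 0.9]` for helpfulness, HmR and HpR are both 0.75, giving a CAS of 0.75.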
LLM Ensemble for RAG: Role of Context Length in Zero-Shot Question Answering for BioASQ Challenge § Abstract Biomedical question answering (QA) poses significant challenges due to the need for precise interpretation of specialized knowledge drawn from a vast, complex, and rapidly evolving corpus. In this work, we explore how large language models (LLMs) can be used for information retrieval (IR), and how an ensemble of zero-shot models can accomplish state-of-the-art performance on a domain-specific Yes/No QA task. Evaluating our approach on the BioASQ challenge tasks, we show that ensembles can outperform individual LLMs and in some cases rival or surpass domain-tuned systems, all while preserving generalizability and avoiding the need for costly fine-tuning or labeled data. Our method aggregates outputs from multiple LLM variants, including models from Anthropic and Google, to synthesize more accurate and robust answers. Moreover, our investigation highlights a relationship between context length and performance: while expanded contexts are meant to provide valuable evidence, they simultaneously risk information dilution and model disorientation. These findings emphasize IR as a critical foundation in Retrieval-Augmented Generation (RAG) approaches for biomedical QA systems. Precise, focused retrieval remains essential for ensuring LLMs operate within relevant information boundaries when generating answers from retrieved documents. Our results establish that ensemble-based zero-shot approaches, when paired with effective RAG pipelines, constitute a practical and scalable alternative to domain-tuned systems for biomedical question answering.
2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CLEF 2025 Working Notes, 9–12 September 2025, Madrid, Spain. Notebook for the BioASQ Lab at CLEF 2025.
Dima Galat (ORCID: 0000-0003-3825-2142, dima.galat [@] student.uts.edu.au), University of Technology Sydney (UTS), Australia
Diego Molla-Aliod (ORCID: 0000-0003-4973-0963, diego.molla-aliod [@] mq.edu.au), Macquarie University, Australia
Keywords: large language model, biomedical question answering, information retrieval, natural language processing, BioASQ, CEUR-WS
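A minimal sketch of the kind of output aggregation described in the abstract above, assuming a simple majority vote over normalized Yes/No answers from the different model variants; the paper's actual aggregation rule may be more sophisticated, and the tie-break here is an arbitrary choice of ours:

```python
from collections import Counter

def ensemble_answer(answers):
    """Majority vote over Yes/No answers from multiple zero-shot LLMs.

    answers: list of strings such as ["Yes", "no", "YES"] collected from
    different model variants (e.g., Anthropic and Google models).
    Answers are case/whitespace-normalized before counting; ties are
    broken toward "no" (an assumption -- the paper's rule is not given).
    """
    votes = Counter(a.strip().lower() for a in answers)
    yes, no = votes.get("yes", 0), votes.get("no", 0)
    return "yes" if yes > no else "no"
```

For instance, `ensemble_answer(["Yes", "no", "YES"])` returns `"yes"`, while an even split falls back to `"no"`.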
Are Humans as Brittle as Large Language Models? § Abstract The output of large language models (LLMs) is unstable, due both to non-determinism of the decoding process and to prompt brittleness. While the intrinsic non-determinism of LLM generation may mimic existing uncertainty in human annotations through distributional shifts in outputs, it is largely assumed, yet unexplored, that the prompt brittleness effect is unique to LLMs. This raises the question: do human annotators show similar sensitivity to instruction changes? If so, should prompt brittleness in LLMs be considered problematic? One may alternatively hypothesize that prompt brittleness correctly reflects human annotation variance. To fill this research gap, we systematically compare the effects of prompt modifications on LLMs and identical instruction modifications for human annotators, focusing on the question of whether humans are similarly sensitive to prompt perturbations. To study this, we prompt both humans and LLMs for a set of text classification tasks conditioned on prompt variations. Our findings indicate that both humans and LLMs exhibit increased brittleness in response to specific types of prompt modifications, particularly those involving the substitution of alternative label sets or label formats. However, the distribution of human judgments is less affected by typographical errors and reversed label order than that of LLMs. § INTRODUCTION Large language models (LLMs) have shown impressive capabilities in automatic data annotation tasks. However, practical applications are often hindered by variability in model outputs, leading to inconsistent predictions. This variability encompasses both the inherent nondeterminism in probabilistic models and prompt brittleness, wherein minor changes in prompt phrasing lead to significant differences in outputs. Of course, variability is also observed in human annotations.
Annotators’ individual experiences, sociodemographic backgrounds, and moral values shape how they interpret and label content. Moreover, the uncertainty in human annotation behavior may increase as more annotators are included, posing challenges for explanation and identification. Traditionally, such disagreement among annotators has been treated as noise or bias, often resolved through techniques like majority voting to produce a single “gold” label. Recent research suggests that variability in human annotations should not be seen as a “problem” but as a reflection of diverse human interpretations. For example, in an emotion labeling task, the text `I ran into my partner when I got home at 2 a.m.' may be labeled as joy or surprise, depending on the subjective interpretation. As LLMs are trained to model human text, it is reasonable that their output distributions should somehow reflect the variability present in human annotations. However, while recent work has acknowledged the importance of human label variation, little attention has been paid to how this variation behaves across different instruction perturbations, particularly whether changes in annotator instructions affect the distribution of human responses analogously to LLMs' prompt brittleness. In this work, we draw a connection between human label variation and LLM predictions. Specifically, we investigate whether prompt brittleness is unique to LLMs, or if humans exhibit similar sensitivity to task instruction variations. We propose a systematic method that formalizes prompt perturbation and investigates the distributional effects of such perturbations on both LLMs and human annotators. This method supports our exploration of the following research questions: RQ1: How do distributional shifts of LLM outputs reveal prompt brittleness? RQ2: Are humans susceptible to prompt brittleness in ways comparable to LLMs? RQ3: Which types of prompt variants yield similar responses between humans and LLMs?
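To make the perturbation types concrete, the sketch below generates instruction variants of the kinds studied: label-order reversal, alternative label formats, and typographical errors. The specific perturbation functions and prompt template are illustrative assumptions, not the paper's exact protocol:

```python
import random

def perturb_prompt(instruction, labels, kind, seed=0):
    """Build an instruction variant for an annotation task (illustrative).

    kind: "reverse_labels"   -- present label options in reversed order
          "uppercase_labels" -- alternative label format (upper-cased)
          "typo"             -- swap two adjacent characters in the instruction
    """
    rng = random.Random(seed)
    if kind == "reverse_labels":
        labels = list(reversed(labels))
    elif kind == "uppercase_labels":
        labels = [l.upper() for l in labels]
    elif kind == "typo":
        i = rng.randrange(len(instruction) - 1)
        chars = list(instruction)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        instruction = "".join(chars)
    return f"{instruction} Options: {', '.join(labels)}"
```

The same perturbed instruction can then be shown to both human annotators and LLMs, so any shift in the response distribution is attributable to the perturbation rather than the task.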
The experimental repository and supplementary materials are available at https://www.uni-bamberg.de/en/nlproc/projects/inprompt/. § RELATED WORK In the following, we discuss related work on variability in LLM outputs and human annotations. §.§ Variability in LLMs §.§.§ Prompt Brittleness LLMs exhibit high sensitivity to variations in a prompt, even when changes are minimal. This phenomenon is commonly referred to as prompt brittleness. Prior studies have examined a variety of prompt perturbations that can lead to substantial changes in model outputs. One prominent issue is position bias, where LLMs exhibit preferences for labels appearing in specific positions within a prompt. Altering the order of answer options can substantially affect model behavior. Similarly, the position of the set of possible answers as a whole affects the output. Lexical perturbations such as typos or synonym substitutions also affect the output. While some findings suggest that LLMs are relatively robust to typographical errors, other work highlights brittleness in response to p
KV Cache Eviction from the Information Loss Perspective § Abstract KV Cache is commonly used to accelerate LLM inference with long contexts, yet its high memory demand drives the need for cache compression. Existing compression methods, however, are largely heuristic and lack dynamic budget allocation. To address this limitation, we introduce a unified framework for cache compression by minimizing information loss in Transformer residual streams. Building on it, we analyze the layer attention output loss and derive a new metric to compare cache entries across heads, enabling layer-wise compression with dynamic head budgets. Additionally, by contrasting cross-layer information, we also achieve dynamic layer budgets. LAVa is the first unified strategy for cache eviction and dynamic budget allocation that, unlike prior methods, does not rely on training or the combination of multiple strategies. Experiments with benchmarks (LongBench, Needle-In-A-Haystack, Ruler, and InfiniteBench) demonstrate its superiority. Moreover, our experiments reveal a new insight: dynamic layer budgets are crucial for generation tasks (e.g., code completion), while dynamic head budgets play a key role in extraction tasks (e.g., extractive QA). As a fully dynamic compression method, LAVa consistently maintains top performance across task types. Our code is available at https://github.com/MGDDestiny/Lava.[1]This work was done during Shen's internship at Stepfun. [2]Corresponding author § INTRODUCTION Large language models (LLMs) have shown remarkable capability in handling long-text scenarios, enabling advancements in tasks such as question answering, code generation, and multi-turn dialogues. To further enhance external knowledge integration, state-of-the-art models like Claude 3.5, GPT-4, and Qwen2.5 Max have extended their context lengths beyond 128K tokens. However, supporting such long contexts comes with increased computational challenges. 
One common approach to accelerating LLM inference is caching Key and Value vectors (KV Cache), but its high memory demand necessitates efficient cache compression techniques. While existing compression methods have shown promise, they are largely heuristic, relying on statistical measures such as accumulated attention scores. These metrics are derived from empirical observations rather than a theoretical foundation. Additionally, although dynamic head allocation and dynamic layer allocation have been explored, no method, to our knowledge, fully adapts both head and layer budgets.
Figure: Information flow in decoder-only LLMs. The decoding process can be seen as operating on the current residual stream. Each residual stream (red lines) corresponds to one token, and is considered as a communication channel. Attention heads copy information from past residual streams to the current one (green lines).
To address this gap, we propose a unified framework for cache compression, which is formulated through the lens of minimizing information loss in Transformer residual streams (see Figure, and Sec. 3). Many existing methods can be formulated within our framework. Specifically, context compression methods aim to minimize global information loss at the logits layer. KV Cache compression methods primarily focus on local information loss at the head or layer levels. Our framework provides a principled approach to designing new algorithms. This paper introduces a novel method based on Layer Attention Output Loss, which measures the impact of compression on the information retained in each layer after multi-head attention. The layer-wise loss function provides a balanced perspective on both local information within layers and global information flow across layers. Within each layer, the loss function guides the design of a scoring mechanism to assess token importance across heads, allowing for simultaneous head budget allocation and cache eviction.
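The idea of comparing cache entries across heads within a layer can be sketched as follows. Here the per-entry scores are given as inputs, standing in for the metric LAVa derives from the layer attention output loss; keeping the top entries layer-wide, rather than enforcing a fixed per-head quota, is what makes the head budgets dynamic:

```python
def evict_layer(scores, layer_budget):
    """Layer-wise KV cache eviction with dynamic per-head budgets (sketch).

    scores[h][t] approximates the information lost if token t's KV entry is
    evicted from head h (the actual scoring metric is the paper's; here the
    scores are just numbers).  Ranking all (head, token) entries together and
    keeping the layer_budget highest-scoring ones lets each head's retained
    budget adapt to its content.  Returns the set of kept (head, token) pairs.
    """
    entries = [(s, h, t)
               for h, head_scores in enumerate(scores)
               for t, s in enumerate(head_scores)]
    entries.sort(reverse=True)          # highest estimated loss first
    return {(h, t) for _, h, t in entries[:layer_budget]}
```

With `scores = [[0.9, 0.1], [0.5, 0.8]]` and a layer budget of 2, head 0 keeps one entry and head 1 keeps one entry, but with `[[0.9, 0.7], [0.5, 0.2]]` both kept entries would come from head 0: the budget follows the scores rather than being split evenly.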
Across layers, it enables dynamic layer budget allocation by comparing information between layers. Our method is theoretically grounded, and significantly simpler than CAKE, the only training-free method with dynamic layer budgets. Extensive experiments were conducted using various LLM series on the LongBench and Needle in a Haystack benchmarks. The results consistently demonstrate LAVa's strong ability to preserve the model's long-text comprehension under various memory constraints. Additionally, compared to a full cache implementation of FlashAttention-2, LAVa significantly reduces memory consumption while simultaneously reducing latency (9× faster decoding for 128K-token sequences). Our empirical findings highlight that dynamic layer budgets are essential for generation tasks, while dynamic head budgets are crucial for text extraction tasks. Achieving dynamic budget allocation at both the head and layer levels is key to optimizing performance across different tasks. Our Contributions: 1) We introduce a principled framework for KV Cache eviction by analyzing the information
Large Language Model Hacking: § Abstract Large language models (LLMs) are rapidly transforming social science research by enabling the automation of labor-intensive tasks like data annotation and text analysis. However, LLM outputs vary significantly depending on the implementation choices made by researchers (e.g., model selection, prompting strategy, or temperature settings). Such variation can introduce systematic biases and random errors, which propagate to downstream analyses and cause Type I (false positive), Type II (false negative), Type S (wrong sign for significant effect), or Type M (correct but exaggerated effect) errors. We call this LLM hacking. We quantify the risk of LLM hacking by replicating 37 data annotation tasks from 21 published social science research studies with 18 different models. Analyzing 13 million LLM labels, we test 2,361 realistic hypotheses to measure how plausible researcher choices affect statistical conclusions. We find incorrect conclusions based on LLM-annotated data in approximately one in three hypotheses for state-of-the-art (SOTA) models, and in half the hypotheses for small language models. While our findings show that higher task performance and better general model capabilities reduce LLM hacking risk, even highly accurate models do not completely eliminate it. The risk of LLM hacking decreases as effect sizes increase, indicating the need for more rigorous verification of findings near significance thresholds. Our extensive analysis of LLM hacking mitigation techniques emphasizes the importance of human annotations in reducing false positive findings and improving model selection. Surprisingly, common regression estimator correction techniques are largely ineffective in reducing LLM hacking risk, as they heavily trade off Type I vs. Type II errors. Beyond accidental errors, we find that intentional LLM hacking is unacceptably simple. 
With a few LLMs and just a handful of prompt paraphrases, anything can be presented as statistically significant. Overall, our findings advocate for a fundamental shift in LLM-assisted research practices, from viewing LLMs as convenient black-box annotators to seeing them as complex instruments that require rigorous validation. Based on our findings, we publish a list of practical recommendations to limit accidental and deliberate LLM hacking for various common tasks.
Figure: We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks. For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations. Then, we collect 13 million LLM annotations across plausible LLM configurations. These annotations feed into 1.4 million regressions testing the hypotheses. For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions. Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking, i.e., incorrect conclusions due to annotation errors. Across all experiments, LLM hacking occurs in 31-50% of cases even with highly capable models. Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.
§ INTRODUCTION Large language models (LLMs) can easily be instructed to process unstructured data for virtually any analytical purpose. It is thus tempting for researchers to outsource time-consuming tasks like data annotation or text analysis to these systems. For data-driven analyses in particular, the ability to extract insights from vast amounts of unstructured text through automatic annotation enables research at unprecedented scales. We tend to forget that high-performing models paired with careful prompt engineering can still produce bad science.
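The Type I/II/S/M error taxonomy from the abstract can be made concrete with a small classifier over a regression outcome. The 2x exaggeration threshold for Type M errors is an assumption for illustration, not the paper's operationalization:

```python
def classify_error(true_effect, est_effect, p_value, alpha=0.05):
    """Classify an LLM-hacking outcome using the Type I/II/S/M taxonomy.

    true_effect : ground-truth effect size (0 means no true effect)
    est_effect  : effect estimated from the regression on LLM annotations
    p_value     : p-value of that estimate
    Returns None when the statistical conclusion matches the ground truth.
    The 2x magnitude cutoff for "exaggerated" (Type M) is our assumption.
    """
    significant = p_value < alpha
    if true_effect == 0:
        return "Type I" if significant else None   # false positive
    if not significant:
        return "Type II"                           # false negative
    if (est_effect > 0) != (true_effect > 0):
        return "Type S"                            # significant, wrong sign
    if abs(est_effect) > 2 * abs(true_effect):
        return "Type M"                            # right sign, exaggerated
    return None
```

Running this over the estimates produced by many plausible LLM configurations for the same hypothesis is one way to tabulate how often annotation errors flip a conclusion.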
The integration of LLMs into scientific research workflows represents one of the most significant recent methodological shifts in the social sciences and other scientific disciplines. Unfortunately, beneath this excitement for using LLMs in research lies a fundamental threat to scientific validity that has remained largely unexamined. Traditionally, computational social scientists used trained student assistants or domain experts to annotate unstructured data to feed their quantitative downstream analyses of interest. As the volume of available text data grew exponentially, researchers increasingly turned to computational methods that could scale analysis, either through unsupervised techniques or by training supervised machine learning models on human-labeled data. In recent years, a large part of the community has enthusiastically embraced the use of LLMs for automated scientific analyses, with many reporting that LLMs can match or even exceed human performance on various annotation tasks. Yet this same literature almost entirely overlooks how annotation errors propagate through statistical an
Augment to Segment: Tackling Pixel-Level Imbalance in Wheat Disease and Pest Segmentation § Abstract Accurate segmentation of foliar diseases and insect damage in wheat is crucial for effective crop management and disease control. However, the insect damage typically occupies only a tiny fraction of annotated pixels. This extreme pixel-level imbalance poses a significant challenge to the segmentation performance, which can result in overfitting to common classes and insufficient learning of rare classes, thereby impairing overall performance. In this paper, we propose a Random Projected Copy-and-Paste (RPCP) augmentation technique to address the pixel imbalance problem. Specifically, we extract rare insect-damage patches from annotated training images and apply random geometric transformations to simulate variations. The transformed patches are then pasted in appropriate regions while avoiding overlaps with lesions or existing damaged regions. In addition, we apply a random projection filter to the pasted regions, refining local features and ensuring a natural blend with the new background. Experiments show that our method substantially improves segmentation performance on the insect damage class, while maintaining or even slightly enhancing accuracy on other categories. Our results highlight the effectiveness of targeted augmentation in mitigating extreme pixel imbalance, offering a straightforward yet effective solution for agricultural segmentation problems.
Augment to Segment. T. Wei et al.
Xin Yu (1), Zhi Chen (2), Scott Chapman (1), Zi Huang (1)
(1) The University of Queensland, Brisbane, Australia, {tianqi.wei,xin.yu,scott.chapman,helen.huang}@uq.edu.au
(2) University of Southern Queensland, Toowoomba, Australia, Zhi.Chen@unisq.edu.au
§ INTRODUCTION Wheat is one of the most widely cultivated crops and a key source of dietary calories worldwide.
However, its yield and grain quality are often threatened by a variety of diseases and insect pests, leading to substantial economic losses and posing serious challenges to global food security. Early and accurate detection of these threats is essential for effective crop protection, timely intervention, and sustainable management practices. Recent advances in deep learning have enabled automated perception and analysis in agricultural vision tasks, providing an efficient and scalable alternative to traditional manual inspection. Among various computer vision approaches, semantic segmentation has emerged as a powerful tool for pixel-wise recognition of disease and pest symptoms, enabling fine-grained characterization of lesion morphology and spatial distribution. Despite its potential, applying semantic segmentation to wheat foliar disease datasets remains challenging due to large intra-class variation and severe class imbalance. Visual symptoms can differ significantly in size and appearance, complicating pixel-wise recognition. Rare classes, such as insect damage, often occupy only a tiny fraction of annotated pixels. This extreme pixel-level imbalance results in biased optimization, leading models to overfit on dominant classes and neglect rare classes. Even state-of-the-art models such as SegFormer achieve high accuracy on common classes like healthy tissue and STB lesions, yet their performance on insect damage regions remains substantially lower. To address these challenges, we propose a targeted data augmentation strategy called Random Projected Copy-and-Paste (RPCP), which explicitly increases the representation of rare classes for training. RPCP first identifies insect-damage regions in annotated images and crops them according to their masks. The patches undergo geometric transformations, including random rotation and scaling, to generate variations.
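A simplified sketch of the copy-and-paste steps, using a random flip in place of the full set of geometric transforms and omitting the random projection filter; the array shapes, class indices, and retry count are assumptions for illustration:

```python
import numpy as np

def rpcp_paste(image, mask, patch, patch_mask, rng=None):
    """Simplified Random Projected Copy-and-Paste step (illustrative).

    image: HxWx3 float array; mask: HxW int array of class labels
    (assumed here: 0 = healthy, 1 = STB lesion, 2 = insect damage).
    patch / patch_mask: an insect-damage region cropped from another
    annotated image.  A random horizontal flip stands in for the paper's
    rotation/scaling transforms, and the random-projection blending filter
    is omitted.  Returns True if a non-overlapping location was found.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    if rng.random() < 0.5:                          # random geometric transform
        patch, patch_mask = patch[:, ::-1], patch_mask[:, ::-1]
    ph, pw = patch_mask.shape
    H, W = mask.shape
    for _ in range(20):                             # rejection-sample a location
        y = rng.integers(0, H - ph + 1)
        x = rng.integers(0, W - pw + 1)
        region = mask[y:y + ph, x:x + pw]
        if not np.any(region[patch_mask > 0]):      # avoid lesions/existing damage
            image[y:y + ph, x:x + pw][patch_mask > 0] = patch[patch_mask > 0]
            mask[y:y + ph, x:x + pw][patch_mask > 0] = 2
            return True
    return False                                    # no valid placement found
```

Because the augmentation operates purely on images and masks, it can be dropped into any segmentation training pipeline without touching the model architecture, which is the property the paper emphasizes.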
The transformed patches are then pasted into the appropriate regions of other training images while avoiding overlaps with existing damaged regions and lesions. After placement, a random projection filter refines local textures, blending the pasted regions naturally into the background and reducing visual artifacts. This straightforward yet effective pipeline mitigates extreme pixel-level imbalance by enriching the rare-class pixels. Moreover, it can integrate seamlessly into training workflows of any existing models without altering model architectures. We conduct extensive experiments on a public wheat foliar disease segmentation dataset containing 3 classes: healthy leaves, Septoria tritici blotch (STB) lesions, and insect damage. The dataset exhibits a long-tailed pixel distribution, with insect damage accounting for only a small fraction of annotated pixels. To assess the generality of our approach, we evaluate multiple representative segmentation models, comparing each with and without RPCP. The results show that the RPCP augmentation consistently improves the performance of the rare class without degrading that of common classes. To summarise, our main contributions are as follows: * We propose Random Projected Copy-and-Pa
PAnDA § Abstract Metric Differential Privacy (mDP) extends the local differential privacy (LDP) framework to metric spaces, enabling more nuanced privacy protection for data such as geo-locations. However, existing mDP optimization methods, particularly those based on linear programming (LP), face scalability challenges due to the quadratic growth in decision variables. In this paper, we propose Perturbation via Anchor-based Distributed Approximation (PAnDA), a scalable two-phase framework for optimizing mDP. To reduce computational overhead, PAnDA allows each user to select a small set of anchor records, enabling the server to solve a compact linear program over a reduced domain. We introduce three anchor selection strategies, exponential decay (PAnDA-e), power-law decay (PAnDA-p), and logistic decay (PAnDA-l), and establish theoretical guarantees under a relaxed privacy notion called probabilistic mDP (PmDP). Experiments on real-world geo-location datasets demonstrate that PAnDA scales to secret domains with up to 5,000 records, twice the size handled by prior LP-based methods, while providing theoretical guarantees for both privacy and utility. § INTRODUCTION Local Differential Privacy (LDP) has emerged as a preferred paradigm for privacy-preserving data collection, especially in scenarios where users do not fully trust centralized aggregators. By ensuring that each user's reported data is statistically indistinguishable across all possible inputs, LDP offers strong and provable privacy guarantees. However, LDP requires a uniform level of indistinguishability across all input pairs, which limits its applicability in contexts that demand more nuanced privacy control. For instance, in location-based services (LBSs), the objective is often to obscure a user’s exact location within a certain geographic range.
Standard LDP fails to capture such contextual nuances, for example, it does not differentiate between a user being within 1 kilometer or 100 kilometers of a reference point, even though these cases may warrant vastly different levels of perturbation. Moreover, the indiscriminate protection of all input pairs as equally sensitive often results in excessive noise, severely degrading the utility of the perturbed data for downstream tasks. To address these limitations, metric Differential Privacy (mDP) was introduced as a generalization of LDP that enables more nuanced levels of indistinguishability between inputs. Instead of applying a uniform privacy guarantee, mDP incorporates a distance metric to modulate the strength of protection: inputs that are close under the metric must remain highly indistinguishable, while those that are farther apart may be more readily distinguished. This distance-aware relaxation enables privacy mechanisms to inject less noise, thereby improving utility while still offering meaningful privacy guarantees. This enhancement broadens the flexibility and applicability of LDP across various data domains, including geo-location perturbation in LBSs and text perturbation in natural language processing (NLP). Related work. Compared to traditional LDP, optimizing for mDP introduces additional complexity due to its non-uniform privacy constraints and the diverse utility loss under perturbation. Specifically, mDP requires varying privacy guarantees between any two records, and utility loss can depend heavily not only on the magnitude but also the direction of the perturbation. While predefined noise distribution mechanisms such as the Laplace mechanism and the exponential mechanism (EM) can satisfy mDP, they typically generate noise based on the perturbation magnitude. This approach often overlooks directional variations in utility loss across the output space, leading to suboptimal utility performance. 
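As a concrete baseline, the exponential mechanism mentioned above can be implemented over a discrete domain; with a true metric, the triangle inequality yields the eps*d(x1,x2) indistinguishability bound of mDP. This is a minimal sketch with our own function names, and its comment notes why magnitude-only noise is utility-suboptimal:

```python
import math

def em_mdp(x, domain, dist, eps):
    """Exponential mechanism over a discrete domain, satisfying eps-mDP.

    Assigns Pr[y | x] proportional to exp(-eps * dist(x, y) / 2).  For a true
    metric dist, the triangle inequality bounds the numerator ratio and the
    normalizer ratio by exp(eps * dist(x1, x2) / 2) each, giving the combined
    exp(eps * dist(x1, x2)) bound required by mDP.  Note the probabilities
    depend only on the *magnitude* of the perturbation, not its direction,
    which is exactly the utility limitation LP-based mechanisms address.
    Returns the output distribution as a list aligned with `domain`.
    """
    weights = [math.exp(-eps * dist(x, y) / 2) for y in domain]
    total = sum(weights)
    return [w / total for w in weights]
```

The mDP guarantee can be checked numerically: for any two secrets x1, x2 and every output y, the ratio of output probabilities must not exceed exp(eps * dist(x1, x2)).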
To better account for utility sensitivity, recent research on mDP has increasingly focused on optimization-based mechanisms, particularly those using linear programming (LP). These approaches discretize both the input (secret) domain 𝒳 and the output (perturbed) domain 𝒴 into finite sets, allowing for explicit modeling of utility loss for each possible perturbation. The perturbation mechanism is then optimized by solving an LP that minimizes the expected utility loss while satisfying mDP constraints for all neighboring record pairs. This formulation requires optimizing a probability distribution over perturbed outputs for each real input, resulting in |𝒳||𝒴| decision variables. This poses a major computational bottleneck: for instance, optimizing the perturbation distributions for thousands of discrete locations in a small geographic region can lead to millions of LP variables, resulting in prohibitively high computational overhead.
Figure: Scalability comparison of related works (CCS 2014, ICDM 2016, WWW 2017, NDSS 2017, ICDCS 2019, CIKM 2020, TMC 2022, SIGSPATIAL 2022, UAI 2022, EDBT 2023, EDBT 2024, IJCAI 2024) and ours.
Due to
Sharing is Caring: Efficient LM Post-Training with Collective RL Experience Sharing § Abstract Post-training language models (LMs) with reinforcement learning (RL) can enhance their complex reasoning capabilities without supervised fine-tuning, as demonstrated by DeepSeek-R1-Zero <cit.>. However, effectively utilizing RL for LMs requires significant parallelization to scale up inference, which introduces non-trivial technical challenges (e.g., latency, memory, and reliability) alongside ever-growing financial costs. We present Swarm sAmpling Policy Optimization (SAPO), a fully decentralized and asynchronous RL post-training algorithm. SAPO is designed for decentralized networks of heterogeneous compute nodes, where each node manages its own policy model(s) while “sharing” rollouts with others in the network; no explicit assumptions about latency, model homogeneity, or hardware are required, and nodes can operate in silos if desired. As a result, the algorithm avoids common bottlenecks in scaling RL post-training while also allowing (and even encouraging) new possibilities. By sampling rollouts “shared” across the network, it enables “Aha moments” to propagate, thereby bootstrapping the learning process. In this paper we show that SAPO achieved cumulative reward gains of up to 94% in controlled experiments. We also share insights from tests on a network with thousands of nodes contributed by Gensyn community members running the algorithm on diverse hardware and models during an open-source demo. ^†Primary contributors. Authors are listed in alphabetical order. gabriel@gensyn.ai & semih@gensyn.ai Acknowledgments: We thank all Gensyn community members who have contributed to our testnet. Your support makes it possible for us to iterate and experiment at unprecedented scales – we hope you will continue to support us as we work together to democratize AI, do science in the open, and strive to build a future we all deserve.
§ INTRODUCTION Improving the capabilities of language models (LMs) after pre-training has become a central goal in AI research. Reinforcement learning (RL) has emerged as a powerful tool for this purpose, allowing models to improve through trial and error rather than relying solely on supervised data. Recent efforts to scale RL for LMs have largely focused on distributed systems that orchestrate large GPU clusters that need to keep policy weights synchronized during training. Although effective, these approaches are expensive, introduce communication bottlenecks, and often require carefully engineered infrastructure to remain stable and efficient. To address these challenges, we introduce Swarm sAmpling Policy Optimization (SAPO), a distributed RL algorithm built for decentralized networks of heterogeneous compute nodes. In this setup, which we call a swarm, each node trains its own policy (or model) while sharing decoded rollouts (e.g., in plain text), enabling lightweight exchange of experience. This simple mechanism makes the framework independent of model architecture, learning algorithm, and hardware, allowing heterogeneous contributors to participate without synchronization overhead. As a natural consequence, the system behaves like a multi-agent setup, where diverse models and abundant data enhance exploration and improve generalization. In controlled experiments, we observed that SAPO delivers higher sample efficiency and stronger task performance, improving cumulative rewards by up to 94% while sidestepping the costs, bottlenecks, and fragility of conventional distributed RL methods. While SAPO can be applied in any RL setting, including the fine-tuning of large LMs (LLMs), in this work we focus on small LMs (SLMs), i.e., LMs with fewer than 10B parameters. We chose SLMs because swarms are most often implemented on local or edge devices, which typically run smaller models rather than large ones.
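The paper gives no pseudocode at this point, but the rollout-exchange loop just described can be sketched minimally as follows. The `Node` class, naming scheme, and sampling counts are hypothetical stand-ins; only decoded text is exchanged, which is the key property.

```python
import random

class Node:
    """One swarm node: owns its policy and shares decoded rollouts as text."""
    def __init__(self, name):
        self.name = name
        self.local_rollouts = []

    def generate_rollouts(self, n):
        # Stand-in for sampling completions from this node's own policy.
        self.local_rollouts = [f"{self.name}-rollout-{i}" for i in range(n)]
        return self.local_rollouts

    def build_training_set(self, swarm_pool, n_local, n_shared):
        # Mix own rollouts with rollouts "shared" by other nodes; only
        # decoded text crosses the network, so no weight synchronization,
        # model homogeneity, or latency guarantee is needed.
        shared = [r for r in swarm_pool if not r.startswith(self.name)]
        return (random.sample(self.local_rollouts, n_local)
                + random.sample(shared, min(n_shared, len(shared))))

nodes = [Node(f"node{i}") for i in range(3)]
pool = [r for node in nodes for r in node.generate_rollouts(4)]
batch = nodes[0].build_training_set(pool, n_local=2, n_shared=2)
```

Each node would then score `batch` with its own reward function and run an ordinary policy-gradient update locally, so nodes can also operate entirely in isolation by setting `n_shared=0`.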
A concrete example is Gensyn’s RLSwarm, which allows thousands of heterogeneous SLMs running locally on consumer-grade hardware (e.g., MacBooks) to interact and train collectively. Thanks to contributions from thousands of Gensyn community members, we conducted an open-source demo of SAPO that produced the empirical insights reported in this paper. Overall, the demo showed that collective training with shared experience makes SLMs learn much faster. § RELATED WORK Reinforcement Learning for LM Fine-Tuning RL has become a central technique for fine-tuning LMs, aligning their behavior with human preferences, and enabling improvements in factual accuracy, code generation, and reasoning beyond what supervised learning alone can achieve. Unlike supervised methods, RL optimizes models through trial and error, with RL from human feedback (RLHF) and RL with verifiable rewards (RLVR) emerging as the dominant paradigms. RLHF trains a reward model on human preference data, while RLVR leverages rule-based, programmatically verifiable reward functions. In both cases, the resulting rewards are used for fine-tuning LMs via policy-gradient algorithms such as Proximal
RL Fine-Tuning Heals OOD Forgetting in SFT § Abstract The two-stage fine-tuning paradigm of Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL) has empirically shown better reasoning performance than one-stage SFT for the post-training of Large Language Models (LLMs). However, the evolution and mechanism behind the synergy of SFT and RL are still under-explored and inconclusive. To investigate this issue, we dissect the Out-Of-Distribution (OOD) and In-Distribution (ID) reasoning performance of LLaMA-3.2-11B and Qwen-2.5-7B at different checkpoints of the fine-tuning (full-parameter, rather than LoRA) process, and conduct fine-grained analysis. We find the well-known claim "SFT memorizes, RL generalizes" is over-simplified, and discover that: (1) OOD performance peaks at the early stage of SFT and then declines (OOD forgetting); the best SFT checkpoint cannot be identified from the training/test loss; (2) the subsequent RL stage does not generate fundamentally better OOD capability; instead, it plays an OOD restoration role, recovering the reasoning ability lost during SFT; (3) the recovery ability has limits: if SFT runs too briefly or too long, RL cannot recover the lost OOD ability; (4) to uncover the underlying mechanisms behind the forgetting and restoration process, we employ SVD analysis on parameter matrices, manually edit them, and observe their impact on model performance. Unlike the common belief that shifts in model capacity mainly result from changes in singular values, we find that the singular values are actually quite stable throughout fine-tuning. Instead, the OOD behavior strongly correlates with the rotation of singular vectors.
In a nutshell, SFT performs hard alignment of the crucial parameter directions to the target tasks, leading to rapid and greedy adjustment, but also quick forgetting; RL then conditionally re-aligns singular vectors softly and slowly towards a more robust configuration, healing the forgetting and learning the downstream tasks simultaneously. Our findings re-identify the roles of SFT and RL in two-stage fine-tuning and identify the rotation of singular vectors as the key mechanism. Code is available at <https://github.com/xiaodanguoguo/RL_Heals_SFT>. § INTRODUCTION Supervised Fine-Tuning (SFT) is the most widely used method for the post-training of Large Language Models (LLMs). Recent work demonstrates that Reinforcement Learning (RL) fine-tuning, especially when applied after SFT, can achieve much better performance on complex reasoning tasks, such as symbolic math reasoning, code generation, embodied tasks, and video prediction. Such a two-stage fine-tuning paradigm has rapidly become popular because of its advantages over one-stage SFT. Numerous studies have explored how RL helps SFT in post-training: a growing body of work argues that SFT tends to memorize or overfit the training distribution, whereas RL yields better out-of-distribution (OOD) generalization; others emphasize that KL-regularized RL counteracts SFT's drift from the base model, and that rule-based or structure-aware RL can significantly strengthen reasoning. Prior work has also noted that SFT pulls the policy of a model away from its base initialization and that specific RL recipes can boost reasoning. These empirical findings partially sketch the high-level picture of two-stage fine-tuning; however, the understanding of the synergy of SFT and RL is still inconclusive. In addition, the evolution of OOD performance during two-stage fine-tuning also lacks a deeper investigation.
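The rotation of singular vectors between checkpoints can be quantified with the standard principal-angles construction; the sketch below assumes nothing about the paper's exact protocol (matrix shapes, `k`, and the perturbation are illustrative).

```python
import numpy as np

def principal_angles(W0, W1, k=5):
    """Measure rotation of the top-k left singular vectors between two
    checkpoints of the same parameter matrix via principal angles."""
    U0, s0, _ = np.linalg.svd(W0, full_matrices=False)
    U1, s1, _ = np.linalg.svd(W1, full_matrices=False)
    # Singular values of U0[:, :k]^T U1[:, :k] are the cosines of the
    # principal angles between the two top-k subspaces.
    cosines = np.linalg.svd(U0[:, :k].T @ U1[:, :k], compute_uv=False)
    angles = np.arccos(np.clip(cosines, -1.0, 1.0))
    return angles, s0[:k], s1[:k]

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
angles_same, _, _ = principal_angles(W, W)       # identical -> zero rotation
angles_diff, sv0, sv1 = principal_angles(W, W + 0.5 * rng.normal(size=(64, 64)))
```

Comparing `angles_diff` (subspace rotation) against the drift of `sv0` versus `sv1` (singular-value stability) is one way to operationalize the claim that fine-tuning rotates directions while leaving singular values largely unchanged.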
To fill these gaps, we perform full-parameter SFT and RL fine-tuning and analyze the Out-Of-Distribution (OOD) and In-Distribution (ID) reasoning behaviors of two popular open-source models: LLaMA-3.2-11B and Qwen-2.5-7B. Specifically, we continuously track their ID and OOD performance at different checkpoints on the GeneralPoints card-game benchmark [See additional results on a navigation task in the Appendix.], a controlled test of arithmetic reasoning and generalization capacity. This controlled environment allows us to monitor and disentangle the evolution of model performance and investigate the roles of SFT and RL in the whole process. During fine-tuning, we observed that: (1) OOD reasoning performance peaks rapidly in the very early stage of SFT and then degrades slowly as SFT continues. Such OOD forgetting is hard to detect with traditional overfitting-detection methods, as the learning curves for ID training/test loss continue to decline. (2) RL is not black magic for reasoning. It recovers the OOD ability forgotten during SFT rather than surpassing SFT's peak performance, and the recovery is only effective within a certain range of SFT checkpoints. To uncover the underlying factors that strongly affect the fine-tuned models, we analyze the Singular-Value Decomposition (SVD) of parameter matrices and conduct ablation studies on their influence on model performance. Unlike some recent st
Semi-interval Comparison Constraints in Query Containment and Their Impact on Certain Answer Computation § Abstract We consider conjunctive queries with arithmetic comparisons (CQAC) and investigate the computational complexity of the problem: Given two CQAC queries, Q and Q', is Q' contained in Q? We know that, for CQAC queries, the problem of testing containment is Π_2^p-complete. However, there are broad classes of queries with semi-interval arithmetic comparisons in the containing query that render the problem solvable in NP. In all cases examined the contained query is allowed to be any CQAC. Interestingly, we also prove that there are simple cases where the problem remains Π_2^p-complete. We also investigate the complexity of computing certain answers in the framework of answering CQAC queries with semi-interval comparisons using any CQAC views. We prove that maximally contained rewritings in the language of unions of CQACs always compute exactly all certain answers. We find cases where we can compute certain answers in polynomial time using maximally contained rewritings. afrati@gmail.com, National Technical University of Athens; mgdamig@gmail.com, Ionian University, Corfu. Keywords: query containment, query rewriting, computing certain answers, conjunctive queries with arithmetic comparisons, complexity of query containment, maximally contained rewritings. § INTRODUCTION A conjunctive query with arithmetic comparisons (CQAC) is a select-project-join query in SQL. Query containment and query equivalence play a prominent role in efficient query processing, e.g., for minimizing the number of joins in a query. Query equivalence can be reduced to a query containment problem. Data integration is often put into the framework of answering queries using views via contained rewritings that are found based on properties of query containment. Recently, the problem of determinacy has attracted the interest of researchers, and query containment tests offer tools for its investigation.
For conjunctive queries (CQs), the query containment problem is shown to be NP-complete. Membership in NP is proven via a containment mapping from the variables of one query to the variables of the other which preserves the relation symbols, i.e., it is a homomorphism. For conjunctive queries with arithmetic comparisons the query containment problem is Π^p_2-complete. We denote conjunctive queries with arithmetic comparisons (CQAC) by Q=Q_0+β, where Q_0 denotes the relational subgoals and β the arithmetic comparison subgoals. The containment test now uses all containment mappings μ_1,…, μ_k from the variables of one query to the variables of the other. The containment test decides whether Q_2⊑ Q_1 by checking whether the following containment entailment is true: ϕ: β_2 ⇒μ_1(β_1) ∨⋯∨μ_k(β_1).[we assume the queries are normalized; see definition shortly.] Previous work has considered CQACs with only left semi-interval arithmetic comparisons or only right semi-interval ACs. A left semi-interval (LSI) AC is an AC of the form var ≤ const or var < const. The LSI AC with ≤ is called closed (CLSI) and the LSI AC with < is called open (OLSI). It was noticed that if only LSI (RSI, respectively) ACs are used, then one containment mapping suffices to prove query containment. Further work considers certain broader classes of CQACs with LSI (RSI, respectively) ACs and shows that a single containment mapping suffices to prove query containment (in this case, we say that the homomorphism property holds). In a more elaborate approach that works on the containment entailment, it is proven that for certain cases of queries with both LSI and RSI ACs, we can check containment in non-deterministic polynomial time, although we may need more than one containment mapping to make the containment entailment true.
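The containment entailment ϕ can be sanity-checked by brute force on a small finite domain, which also illustrates why a single containment mapping may not suffice. This is only an illustration: the real decision procedures reason over dense orders and all variable orderings, not a fixed finite domain, and all names below are invented.

```python
from itertools import product

def entailment_holds(variables, beta2, mapped_betas, domain=range(4)):
    """Brute-force check of  beta2 => mu_1(beta1) or ... or mu_k(beta1)
    over a small finite domain (a sanity check, not the full test)."""
    for values in product(domain, repeat=len(variables)):
        env = dict(zip(variables, values))
        # A counterexample assignment satisfies beta2 but no disjunct.
        if beta2(env) and not any(b(env) for b in mapped_betas):
            return False
    return True

# Example over integers 0..3: beta2 = (x <= 1); two containment mappings
# of beta1 yield the disjuncts (x <= 0) and (x >= 1).
variables = ["x"]
beta2 = lambda e: e["x"] <= 1
mapped = [lambda e: e["x"] <= 0, lambda e: e["x"] >= 1]
# Neither disjunct alone covers beta2 (x = 1 escapes the first one),
# but their disjunction does -- two mappings are needed together.
```

This mirrors the point above that the entailment may only become true as a disjunction over several containment mappings.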
Complexity of query containment:
Contained query | Containing query | norm | Complexity | Reference
OSI and ≠ | OSI | n/a | Π^p_2-complete | Theorem
LSI | OLSI, constant | n/a | Π^p_2-complete | Theorem
CLSI, ≠ | OLSI | n/a | Π^p_2-complete | Theorem
≠ | ≠ | n/a | Π^p_2-complete | Theorem
any | one AC | no | NP | Theorem
any | CLSI | no | NP/HP | Theorem
any^c | LSI | no | NP/HP | Theorem
SI | CLSI 1CRSI | yes | NP |
any closed ACs | CLSI 1CRSI | no | NP |
any | CLSI 1CRSI | no | NP | Theorem
any^c | CLSI 1ORSI | no | NP | Theorem
any^c | OLSI 1ORSI | no | NP | Theorem
any^c | OLSI 1CRSI | no | NP | Theorem
In data integration, in the local-as-view approach, views are maintained over the sources in order to be used to answer queries. Query answering is usually made possible via a rewriting that expresses the query in terms of the views. Since the views do not provide all the information that the base relations that form the query would require, we are looking into computing certain answers, i.e., all the answers that we are certain would be in the answers of the query given a specific view instance. For conjunctive queries, it is shown that maximally contained rewritings (MCR) in the language of unions of conjunctive queries compute all certain answers in polynomial time. For CQAC queries and views, this may not alw
ASL360: AI-Enabled Adaptive Streaming of Layered 360^∘ Video over UAV-assisted Wireless Networks § Abstract We propose ASL360, an adaptive deep reinforcement learning-based scheduler for on-demand video streaming to mobile VR users in next-generation wireless networks. We aim to maximize the overall Quality of Experience (QoE) of the users served over a UAV-assisted 5G wireless network. Our system model comprises a macro base station (MBS) and a UAV-mounted base station, which both deploy mm-Wave transmission to the users. The video is encoded into dependent layers and segmented tiles, allowing a user to schedule downloads of each layer's segments. Furthermore, each user utilizes multiple buffers to store the corresponding video layer's segments. We model the scheduling decision as a Constrained Markov Decision Process (CMDP), where the agent selects Base or Enhancement layers to maximize the QoE, and use a policy gradient-based method (PPO) to find the optimal policy. Additionally, we implement a dynamic adjustment mechanism for cost components, allowing the system to adaptively balance and prioritize the video quality, buffer occupancy, and quality change based on real-time network and streaming session conditions. We demonstrate that ASL360 significantly improves the QoE, achieving approximately 2 dB higher average video quality, 80% lower average rebuffering time, and 57% lower video quality variation, relative to competitive baseline methods. Our results show the effectiveness of our layered and adaptive approach in enhancing the QoE in immersive video streaming applications, particularly in dynamic and challenging network environments. , Jacob Chakareski (College of Computing, New Jersey Institute of Technology); Nicholas Mastronarde (Dept. of Electrical Engineering, University of Buffalo). This work has been supported in part by NSF awards CNS-2032033, CNS-2106150, and CNS-2346528, and in part by NIH award R01EY030470. § INTRODUCTION Recent advances in wireless networks and virtual reality (VR) have significantly increased the demand for efficient video streaming. These services introduce unique challenges in resource allocation and network management. Cellular networks often struggle to meet these stringent requirements, particularly under dynamic network conditions, which motivates the integration of Unmanned Aerial Vehicles (UAVs) to augment existing network infrastructure with flexible, high-capacity communications. UAV-assisted heterogeneous networks have emerged as a promising solution for supporting bandwidth-intensive services. These systems combine conventional macro base stations (MBSs) with UAV-mounted base stations employing mm-Wave communication links to enhance data rates and optimize the scheduling algorithm, thus significantly improving the VR users' (VUs') QoE. To optimize the video streaming experience, various techniques have been proposed. For instance, viewport-dependent streaming methods such as tiled streaming based on Dynamic Adaptive Streaming over HTTP (DASH) divide panoramic video into spatial and temporal tiles. Moreover, video content can be divided into scalable layers that can be efficiently and separately scheduled to increase the flexibility and performance of the system. The frames are encoded as the Base Layer (BL), which does not depend on any other layer and provides a basic quality, and the Enhancement Layers (ELs), which reference the BL and provide improved video quality. Prior studies have investigated methods of optimizing video quality, considering practical factors like rebuffering or stall events. However, existing solutions often overlook the dynamics of wireless communication and video streaming requirements.
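The dynamic adjustment of cost components described in the abstract can be sketched generically. The reward form, weight names, and buffer thresholds below are illustrative assumptions, not the paper's actual design.

```python
def qoe_reward(quality_db, rebuffer_s, quality_change_db, w):
    """Weighted QoE reward: reward video quality, penalize stalls and
    quality oscillation (the weighted-sum form is illustrative)."""
    return (w["quality"] * quality_db
            - w["rebuffer"] * rebuffer_s
            - w["smooth"] * abs(quality_change_db))

def adapt_weights(w, buffer_s, low=2.0, high=8.0, step=0.1):
    """Dynamic adjustment: when the buffer runs low, emphasize avoiding
    rebuffering; when it is comfortably full, emphasize quality."""
    w = dict(w)
    if buffer_s < low:
        w["rebuffer"] += step
    elif buffer_s > high:
        w["quality"] += step
    return w

w = {"quality": 1.0, "rebuffer": 4.0, "smooth": 1.0}
w_low = adapt_weights(w, buffer_s=1.0)    # starving buffer -> larger stall penalty
w_high = adapt_weights(w, buffer_s=10.0)  # full buffer -> push quality
```

A PPO agent would receive `qoe_reward` at each step while the weights drift with session state, which is one simple way to realize the adaptive balancing of quality, buffer occupancy, and quality change.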
Consequently, data-driven approaches like Deep Reinforcement Learning (DRL) have shown significant promise in addressing such dynamic optimization problems due to their ability to learn optimal policies directly from complex environments without explicitly modeling the environmental dynamics (see, e.g., ). [Figure: Our MPEG-DASH-based video streaming system.] Motivated by these observations, we propose a DRL-based scheduling method specifically tailored for UAV-assisted wireless networks delivering layered video. Our proposed approach dynamically adjusts its decision-making strategy based on real-time network conditions, buffer status, and video-related QoE metrics. The contributions of our work include layered video streaming system modeling and analysis, dynamic adaptation of reward weights to capture video-streaming-specific requirements over time, comprehensive modeling of user-centric QoE, and an evaluation of our approach against established baselines. [This paper has been accepted for presentation at IEEE GLOBECOM 2025.] § SYSTEM MODEL In this section, we introduce a UAV-assisted wireless network to serve layered video to VUs. The network comprises a macro base station (MBS) and a UAV-mounted base station. Let ℬ denote the set of base stations and 𝒱 represent the set of VU
§ Abstract As Large Language Models (LLMs) are increasingly adopted as automated judges in benchmarking and reward modeling, ensuring their reliability, efficiency, and robustness has become critical. In this work, we present a systematic comparison of “thinking” and “non-thinking” LLMs in the LLM-as-a-judge paradigm using open-source Qwen 3 models of relatively small sizes (0.6B, 1.7B, and 4B parameters). We evaluate both accuracy and computational efficiency (FLOPs) on RewardBench tasks, and further examine augmentation strategies for non-thinking models, including in-context learning, rubric-guided judging, reference-based evaluation, and n-best aggregation. Our results show that, despite these enhancements, non-thinking models generally fall short of their thinking counterparts: thinking models achieve approximately 10 percentage points higher accuracy with little overhead (under 2x), in contrast to augmentation strategies like few-shot learning, which deliver modest gains at a higher cost (>8x). Bias and robustness analyses further demonstrate that thinking models maintain significantly greater consistency under a variety of bias conditions such as positional, bandwagon, identity, diversity, and random biases (∼6% higher on average). We further extend our experiments to the multilingual setting, and our results confirm that explicit reasoning extends its benefits beyond English. Overall, our work provides systematic evidence that explicit reasoning offers clear advantages in the LLM-as-a-judge paradigm, not only in accuracy and efficiency but also in robustness [ Equal Contribution * The work does not relate to authors' position at Amazon]. § INTRODUCTION Large Language Models (LLMs) are increasingly being adopted as automated judges in benchmarking, evaluation, and reward modeling, collectively known as the LLM-as-a-judge paradigm.
By providing scalable, adaptable, and reproducible assessments of generated responses, these models have become central to modern evaluation pipelines. However, the reliability of these judgments depends not only on model scale but also on how the model internally reasons about the candidates to be evaluated. In particular, “thinking” models (those that generate explicit intermediate reasoning traces before producing a verdict) have emerged as a promising approach for enhancing evaluation fidelity. Despite this growing interest, a systematic comparison of “thinking” and “non-thinking” models in the LLM-as-a-judge setting remains underexplored, including critical questions about accuracy, efficiency, and robustness trade-offs between the two paradigms. For instance, while non-thinking models can be augmented with in-context examples, rubrics, or reference-based judging, it is unclear whether these strategies suffice to close the gap with reasoning-enabled models. Moreover, the behavior of these two paradigms under bias-inducing conditions such as positional effects, bandwagon influence, or identity cues remains to be systematically studied. This is crucial, as these factors can undermine the reliability of automated evaluations. To address these gaps, we present a systematic study of Qwen 3 models of varying scales (0.6B, 1.7B, and 4B parameters) in the LLM-as-a-judge paradigm using the individual tasks of the RewardBench benchmark, namely `Chat', `Chat Hard', `Safety', and `Reasoning'. We compare thinking and non-thinking variants across multiple evaluation dimensions: accuracy, computational efficiency (measured in FLOPs), and robustness to a variety of biases. For non-thinking models, we further examine several augmentation strategies, including in-context learning with different numbers of examples, rubric-guided judging, reference-based evaluation, and n-best aggregation.
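Of the augmentation strategies just listed, n-best aggregation is the simplest to sketch: sample n verdicts from the judge and combine them. The majority-vote rule below is a hypothetical illustration; the paper's exact aggregation rule may differ.

```python
from collections import Counter

def n_best_judgment(judgments):
    """Aggregate n sampled verdicts from a judge model by majority vote;
    ties resolve to the first-seen verdict (an illustrative choice,
    since Counter.most_common preserves first-encounter order on ties)."""
    return Counter(judgments).most_common(1)[0][0]

# e.g., five sampled verdicts comparing candidate responses A and B
votes = ["A", "B", "A", "A", "B"]
winner = n_best_judgment(votes)
```

Note that each extra sample multiplies inference FLOPs, which is why such augmentations can cost more than enabling a thinking mode while gaining less accuracy.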
In addition, we extend our study to multilingual reward evaluation to test the generality of the observed trends beyond English. Our results reveal the following key findings: [Figure: Qwen-3 4B as a judge under thinking vs. non-thinking mode with various augmentations. While 7-shot in-context learning (ICL 7) yields modest accuracy gains (+4.5 pts) at high computational cost (8.16× FLOPs), thinking mode delivers larger improvements (+10.5 pts) with far lower computational overhead (1.82× FLOPs), highlighting its superior efficiency.] * Thinking models achieve higher accuracy than their non-thinking counterparts: Our experiments show that while prompting strategies can enhance non-thinking models, they remain significantly less effective and efficient than reasoning-enabled models. For example, 7-shot ICL is 4.5 times more computationally expensive than the thinking mode, yet delivers less than half the accuracy improvement (+4.5 points vs. +10.5 points), highlighting the superior accuracy-cost trade-off of explicit reasoning. * Thinking models are more robust to biases: Our robustness analysis shows that thinking models maintain
TANGO: T § Abstract Visual navigation in robotics traditionally relies on globally-consistent 3D maps or learned controllers, which can be computationally expensive and difficult to generalize across diverse environments. In this work, we present a novel RGB-only, object-level topometric navigation pipeline that enables zero-shot, long-horizon robot navigation without requiring 3D maps or pre-trained controllers. Our approach integrates global topological path planning with local metric trajectory control, allowing the robot to navigate towards object-level sub-goals while avoiding obstacles. We address key limitations of previous methods by continuously predicting the local trajectory using monocular depth and traversability estimation, and by incorporating an auto-switching mechanism that falls back to a baseline controller when necessary. The system operates using foundation models, ensuring open-set applicability without the need for domain-specific fine-tuning. We demonstrate the effectiveness of our method in both simulated environments and real-world tests, highlighting its robustness and deployability. Our approach outperforms existing state-of-the-art methods, offering a more adaptable and effective solution for visual navigation in open-set environments. The source code is made publicly available: <https://github.com/podgorki/TANGO>. § INTRODUCTION Visual navigation is a fundamental challenge in robotics, with significant implications for autonomous agents operating in real-world environments. Traditional approaches often rely on constructing precise, globally consistent geometric 3D maps, which can be computationally intensive and difficult to generalize across diverse settings. Alternatively, methods designed for navigating in previously unseen environments may not effectively leverage prior knowledge, limiting their efficiency and adaptability.
Inspired by human navigation abilities – where we can traverse environments by reasoning over previously observed images or objects without detailed 3D maps – visual topological navigation has emerged as a promising alternative. Recent research has predominantly focused on image-level topological maps, which, while straightforward, have limited representational capacity. They often lack semantic richness and are sensitive to viewpoint changes, hindering their applicability in dynamic and diverse environments. In contrast, object-level topological maps offer several advantages, including direct open-set natural language querying, semantic interpretability, and viewpoint-invariant visual recognition. These attributes are crucial for enabling open-world navigation that can be seamlessly deployed across different environments, tasks, and robotic platforms. However, integrating object-level topological information into navigation pipelines presents challenges, particularly in bridging global planning with local motion control while ensuring obstacle avoidance and traversability. [Figure: Our topometric navigation pipeline uniquely bridges a topological global path planner and metric local trajectory planning, without needing 3D maps or learnt controllers; this enables our method to effectively avoid obstacles (bottom row) even when no such objects were present in the mapping (teach) run.] In this work, we present a novel RGB-only, object-level, topometric navigation pipeline for zero-shot robot control, in contrast with recent learnt controllers. Specifically, we propose a unique integration of global path planning and local motion planning, where a robot metrically plans its motion to move towards topologically planned object-level sub-goals. The latter is achieved through a recent work, RoboHop, whose global path planner generates an object-level sub-goal cost mask for the robot's current observation (see Figure ).
While this sub-goal mask can guide a robot for where to head, it does not account for traversability or obstacle avoidance due to its purely topological nature. We address this limitation through our proposed topometric controller, where we explicitly predict traversable image segments, project them in Bird's Eye View (BEV) space using monocular metric depth, plan a trajectory to the farthest least-cost sub-goal, and continue this process until the long-horizon goal is reached. The contributions of this paper are as follows: * a novel topometric controller that uniquely bridges topological global path planner and metric local trajectory planning to enable long-horizon object-goal navigation without needing 3D maps or learnt controllers; * an RGB-only method to continuously predict local trajectory using single-view depth and traversability; * an auto-switch-control approach that switches from our proposed controller to a fallback controller by detecting the absence of visible traversable regions; and * a real-world demonstration (5 Hz) of our modular navigation pipeline built on top of `foundation models' such as Fast Segment Anything, Depth A
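The BEV projection step described above boils down to standard pinhole back-projection of traversable pixels using metric monocular depth. The intrinsics and pixel values below are made up for illustration; the paper's exact projection details may differ.

```python
import numpy as np

def pixels_to_bev(u, v, depth, fx, fy, cx, cy):
    """Back-project image pixels with metric depth into camera-frame
    Bird's Eye View coordinates (pinhole camera model)."""
    z = depth                      # forward distance along the optical axis
    x = (u - cx) * z / fx          # lateral offset from the optical axis
    # (v - cy) * z / fy would give height; BEV keeps only (x, z).
    return np.stack([x, z], axis=-1)

# Toy example with hypothetical intrinsics: a pixel at the principal
# point maps straight ahead of the camera.
fx = fy = 500.0
cx = cy = 320.0
bev = pixels_to_bev(np.array([320.0, 420.0]), np.array([400.0, 400.0]),
                    np.array([2.0, 2.0]), fx, fy, cx, cy)
```

Given such BEV points for the traversable segments, a local planner can score candidate trajectories in metric space toward the least-cost sub-goal.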
The Linear Reliability Channel § Abstract We introduce and analyze a discrete soft-decision channel called the linear reliability channel (LRC) in which the soft information is the rank ordering of the received symbol reliabilities. We prove that the LRC is an appropriate approximation to a general class of discrete modulation, continuous noise channels when the noise variance is high. The central feature of the LRC is that its combinatorial nature allows for an extensive mathematical analysis of the channel and its corresponding hard- and soft-decision maximum likelihood (ML) decoders. In particular, we establish explicit error exponents for ML decoding in the LRC when using random codes under both hard- and soft-decision decoding. This analysis allows for a direct, quantitative evaluation of the relative advantage of soft-decision decoding. The discrete geometry of the LRC is distinct from that of the BSC, which is characterized by the Hamming weight, offering a new perspective on code construction for soft-decision settings. Keywords: maximum likelihood decoding, error exponents, soft-decision decoding, channel coding. § INTRODUCTION Error correction decoding algorithms are broadly divisible into hard-decision and soft-decision decoders. Hard-decision decoders are algorithms that take as input only bits, whereas soft-decision decoders also make use of side information, referred to as soft information, quantifying the likelihood that each bit is correct. The standard form of soft information per bit is the log-likelihood ratio (LLR) of the hypotheses that the transmitted bit is 0 or 1 given the channel output. Ordered Reliability Bits Guessing Random Additive Noise Decoding (ORBGRAND) is a code-agnostic, soft-decision decoding algorithm that has recently been shown to be almost capacity-achieving for the real-valued additive white Gaussian noise channel and to be practically feasible via efficient hardware implementation, both in synthesis and silicon.
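The per-bit LLR soft information described above has a standard closed form for the common case of BPSK over AWGN, which can be sketched as follows (a textbook illustration, not part of this paper's channel model):

```python
import numpy as np

def bpsk_llr(y, sigma2):
    """Per-bit LLR for BPSK over AWGN (bit 0 -> +1, bit 1 -> -1):
    L(y) = log p(y|0)/p(y|1) = 2*y/sigma^2."""
    return 2.0 * y / sigma2

y = np.array([0.9, -0.2, 1.4])          # received channel outputs
llr = bpsk_llr(y, sigma2=1.0)
hard = (llr < 0).astype(int)             # hard decision: sign of the LLR
reliability = np.abs(llr)                # magnitude = soft information
order = np.argsort(reliability)          # least reliable bit first
```

A hard-decision decoder keeps only `hard`, while a soft-decision decoder such as ORBGRAND additionally exploits `reliability` (in sorted form), which is precisely the information the LRC retains as a rank ordering.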
Subsequent theoretical work has explored algorithmic modifications to approach the performance of ML soft-decision decoding while maintaining efficiency and has studied the achievable rate of ORBGRAND in more general settings. Motivated by these developments, this work formalizes the fundamental algorithmic insight of ORBGRAND, the approximation of the sorted magnitudes of the received LLRs by a linear function, into a channel model for which this linear behavior is exact. For this channel, which we call the linear reliability channel (LRC), ORBGRAND is a true maximum-likelihood (ML) soft-decision decoder. A key feature of the LRC is that it is a discrete soft-decision channel in which the soft information is combinatorial and sufficiently structured to allow for a complete mathematical analysis of maximum-likelihood decoding, both hard- and soft-decision. The behavior of the LRC is aligned with a general family of continuous-noise channels at low signal-to-noise ratios, where decoding performance is most relevant. In the LRC, the received bit reliabilities, i.e., the magnitudes of the LLRs of the received bits, form a linearly increasing sequence subjected to a random permutation. The soft information is, therefore, the permutation for a given channel use, and knowledge of that permutation suffices for exact ML decoding. Intrinsically connected with the LRC and its ML decoder is a statistic called the logistic weight, which is analogous to the Hamming weight in the context of the BSC. The noise level in the LRC is parameterized by the slope of the linear increase in reliabilities, and this slope plays an analogous role to the bit-flip probability in a BSC. When the slope is large, most bits are transmitted reliably, whereas a significant portion are unreliable when the slope is small. We derive closed-form, computable error exponents for both hard- and soft-decision ML decoding in the LRC.
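The logistic weight and the query order it induces can be sketched with a simplified enumeration. In the ORBGRAND literature the logistic weight of an error pattern is the sum of the reliability ranks of its flipped bits; the brute-force generator below is only for illustration (practical decoders use efficient integer-partition enumeration instead).

```python
from itertools import combinations

def logistic_weight(flipped_ranks):
    """Logistic weight of an error pattern: the sum of the reliability
    ranks (1 = least reliable) of the flipped bits."""
    return sum(flipped_ranks)

def patterns_by_logistic_weight(n, max_weight):
    """Enumerate error patterns in nondecreasing logistic weight, the
    query order of an ORBGRAND-style guessing decoder."""
    out = [()]                                    # weight 0: no flips
    for w in range(1, max_weight + 1):
        for k in range(1, n + 1):                 # k = number of flips
            for ranks in combinations(range(1, n + 1), k):
                if logistic_weight(ranks) == w:
                    out.append(ranks)
    return out

pats = patterns_by_logistic_weight(n=4, max_weight=4)
```

A guessing decoder would flip the bits named by each pattern (mapped through the reliability permutation) in this order and stop at the first codebook member, which is ML-optimal when reliabilities are exactly linear, as in the LRC.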
In order to do so, we leverage the mathematical framework of large deviations and guesswork, as introduced in. At a high level, we show that the guesswork process for the noise in the LRC satisfies a large deviation principle (LDP) in both the hard- and soft-decision settings. Having established these LDPs, we utilize the formulation of the channel coding theorem presented in, which results in explicit expressions for the error and success exponents, under the assumption that the code book is chosen uniformly at random. These exponents show that, in the large block length limit, soft-decision decoding strictly outperforms hard-decision decoding in the LRC. This analysis allows for a quantitative evaluation of the performance difference between hard- and soft-decision decoding at any code rate and any noise level. § OVERVIEW OF RESULTS We present here an outline of the sequel, summarizing the main results and offering intuitive interpretations of the more technical statements. defines the LRC and presents its key properties. We show in how the LRC can be viewed as an approximatio
LLM-Based Instance-Driven Heuristic Bias In the Context of a Biased Random Key Genetic Algorithm § Abstract Integrating Large Language Models (LLMs) within metaheuristics opens a novel path for solving complex combinatorial optimization problems. While most existing approaches leverage LLMs for code generation to create or refine specific heuristics, they often overlook the structural properties of individual problem instances. In this work, we introduce a novel framework that integrates LLMs with a Biased Random-Key Genetic Algorithm (BRKGA) to solve the NP-hard Longest Run Subsequence problem. Our approach extends the instance-driven heuristic bias paradigm by introducing a human-LLM collaborative process to co-design and implement a set of computationally efficient metrics. The LLM analyzes these instance-specific metrics to generate a tailored heuristic bias, which steers the BRKGA toward promising areas of the search space. We conduct a comprehensive experimental evaluation, including rigorous statistical tests, convergence and behavioral analyses, and targeted ablation studies, comparing our method against a standard BRKGA baseline across 1,050 generated instances of varying complexity. Results show that our top-performing hybrid, BRKGA+Llama-4-Maverick, achieves statistically significant improvements over the baseline, particularly on the most complex instances. Our findings confirm that leveraging an LLM to produce an a priori, instance-driven heuristic bias is a valuable approach for enhancing metaheuristics in complex optimization domains.§ INTRODUCTION The core challenge in combinatorial optimization is uncovering hidden patterns in complex, structured data. The emergence of Large Language Models (LLMs) such as GPT, Llama, and Gemini has transformed fields like natural language processing, code generation, and reasoning over structured or tabular data. 
Beyond these tasks, LLMs can detect complex patterns and latent structures, perform symbolic reasoning, and extract task-relevant features that are often hard for humans to identify quickly. It is this latent ability for abstract, data-driven reasoning on a large scale that remains largely unexplored for guiding sophisticated search algorithms. This pattern-recognition capability offers a significant opportunity to enhance metaheuristics (MHs). Although MHs are well-suited for tackling NP-hard combinatorial problems, they typically operate without prior knowledge of the search space. While classic Machine Learning has been used to improve MHs, such integrations often require sophisticated model development and extensive training time. LLMs can mitigate these challenges by leveraging their pre-trained reasoning abilities to provide heuristic guidance in a zero-shot or few-shot context, without the need for traditional training. So far, the primary use of LLMs in the context of MHs has focused on code generation, allowing MHs to evolve their own code or be created from scratch. In this work, we explore an alternative paradigm, originally developed in: using the LLM as an instance-driven heuristic bias. While a human expert can define relevant heuristic features, the LLM's role in this approach is to analyze the numerical matrix of these features for a specific instance—a pattern-recognition task often too complex for quick human assessment—to derive a quantitative, instance-specific search strategy. By converting this analysis into numerical parameters, the LLM guides an MH toward more promising regions of the search space. By doing so, our framework introduces a novel, a priori method for search guidance that complements established dynamic paradigms in evolutionary computation like hyper-heuristics. Our work strengthens this paradigm and expands it in several ways.
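As a rough illustration of how an instance-driven bias can enter a BRKGA, the sketch below implements a minimal biased random-key loop in which an optional per-gene bias vector (standing in for the LLM-derived heuristic bias) skews the initial population toward promising keys. The population parameters and the blending rule are assumptions of this sketch, not the paper's configuration.

```python
import random

def brkga(fitness, n, bias=None, pop=30, elite=0.2, mutants=0.1,
          rho=0.7, gens=50, seed=0):
    """Minimal BRKGA sketch (maximisation). `bias`, if given, is an
    instance-specific vector in [0,1]^n used to skew initial keys
    toward promising regions of the search space."""
    rng = random.Random(seed)

    def new_key(i):
        if bias is None:
            return rng.random()
        # blend uniform noise with the per-gene bias value (an assumption)
        return min(1.0, max(0.0, 0.5 * rng.random() + 0.5 * bias[i]))

    P = [[new_key(i) for i in range(n)] for _ in range(pop)]
    n_elite, n_mut = int(elite * pop), int(mutants * pop)
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        nxt = P[:n_elite]                                  # elitism
        nxt += [[rng.random() for _ in range(n)] for _ in range(n_mut)]
        while len(nxt) < pop:                              # biased crossover
            e = rng.choice(P[:n_elite])
            o = rng.choice(P[n_elite:])
            nxt.append([e[i] if rng.random() < rho else o[i]
                        for i in range(n)])
        P = nxt
    return max(P, key=fitness)
```

With a trivial "count keys above 0.5" fitness, a strong bias vector lets the search start essentially solved, which is the intended effect of the a priori guidance.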
§.§ Contribution and Paper Organization The main contributions are: * We validate and extend the instance-driven paradigm by successfully applying it to the Longest Run Subsequence (LRS) problem—an NP-hard, string-based combinatorial optimization problem from a domain distinct from the framework's original context—demonstrating its potential for application to other combinatorial optimization problems. * We introduce a co-design human–LLM framework for feature engineering. Instead of a purely expert-led process, our method uses the LLM to propose candidate metrics, while the human expert validates them for correctness and efficiency. This process yields instance-specific parameters that directly steer the search of a Biased Random-Key Genetic Algorithm (BRKGA). * We present the first comprehensive validation of this framework on a problem distinct from its original social-network context. Our evaluation encompasses rigorous statistical analyses, a qualitative investigation of search behavior to elucidate how the guidance operates, and targeted ablation studies to systematically assess the contribution of each core component. The paper unfolds as follows. Section reviews related work on LL
Representation Learning on Large Non-Bipartite Transaction Networks using GraphSAGE This preprint has not undergone peer review or any post-submission improvements or corrections. The Version of Record of this contribution is published in Graph-Based Representations in Pattern Recognition. GbRPR 2025. Lecture Notes in Computer Science, vol 15727. Springer, Cham., and is available online at https://doi.org/10.1007/978-3-031-94139-9_17 § Abstract Financial institutions increasingly require scalable tools to analyse complex transactional networks, yet traditional graph embedding methods struggle with dynamic, real-world banking data. This paper demonstrates the practical application of GraphSAGE, an inductive Graph Neural Network framework, to non-bipartite heterogeneous transaction networks within a banking context. Unlike transductive approaches, GraphSAGE scales well to large networks and can generalise to unseen nodes, which is critical for institutions working with temporally evolving transactional data. We construct a transaction network using anonymised customer and merchant transactions and train a GraphSAGE model to generate node embeddings. Our exploratory work on the embeddings reveals interpretable clusters aligned with geographic and demographic attributes. Additionally, we illustrate their utility in downstream classification tasks by applying them to a money mule detection model, where using these embeddings improves the prioritisation of high-risk accounts. Beyond fraud detection, our work highlights the adaptability of this framework to banking-scale networks, emphasising its inductive capability, scalability, and interpretability.
This study provides a blueprint for financial organisations to harness graph machine learning for actionable insights in transactional ecosystems. Mihir Tare NatWest AI Research NatWest Group London, UK mihir.tare@natwest.com Clemens Rattasits NatWest AI Research NatWest Group London, UK clemens.rattasits@natwest.com Yiming Wu NatWest AI Research NatWest Group London, UK yiming.wu@natwest.com Euan Wielewski NatWest AI Research NatWest Group Edinburgh, UK euan.wielewski@natwest.com Keywords: GraphSAGE, Graph embeddings, Graph neural networks, Transactional networks, Money mule detection. § INTRODUCTION Graph embedding methods have revolutionised the analysis of networks by providing a way to transform complex network information into low-dimensional vector representations. These embeddings capture the structural, relational, and, when available, feature properties of nodes, making them a powerful tool for applications like fraud detection on financial transaction networks. A taxonomy of graph embedding methods points to three main categories: matrix factorisation, random walk, and GNN (Graph Neural Network) based methods. Matrix factorisation approaches like LINE and HOPE focus on approximating adjacency matrices or similarity matrices to embed nodes. LINE preserves both first-order and second-order proximities, capturing direct connections and shared neighbourhoods, while HOPE extends this approach to high-order proximities, making it particularly effective for embedding directed graphs. These methods have shown promise in tasks such as link prediction and anomaly detection. However, these methods struggle with large dynamic networks (as observed in financial transaction networks) – they are transductive, meaning they cannot be used to perform inference on unseen nodes without retraining, and have a large computational cost due to the need for matrix decomposition.
DeepWalk, introduced by Perozzi et al., was a pioneering work that applied truncated random walks to generate node sequences, treating them as sentences in a text corpus and training node embeddings using the Word2Vec model. This method captures community structures effectively and has been applied in fraud detection problems where the relationships between nodes are important. Building on this, Node2Vec improved upon DeepWalk by introducing biased random walks. This allowed the method to interpolate between breadth-first (BFS) and depth-first (DFS) search strategies, providing flexibility to capture community and structural roles within the same embedding framework. Random walk-based approaches have since been studied further with the introduction of metapath2vec by Dong et al. extending the idea to heterogeneous networks. However, these methods require knowledge of the entire graph during training and are also inherently transductive, hence limiting their scalability to large-scale graphs. Simultaneously, GNNs have emerged as a robust alternative leveraging the power of deep learning to learn embeddings by aggregating information from a node's neighbours. Graph Convolutional Networks (GCNs), proposed by Kipf and Welling, introduced spectral convolutions for semi-supervised node classification. This method effectively leverages node features alongside graph topology. Similarly, Graph Attention
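The neighbourhood aggregation at the heart of GraphSAGE can be sketched in a few lines; the following shows one inductive layer with a mean aggregator, ReLU nonlinearity, and L2 output normalisation. It is a simplified stand-in with hypothetical weight matrices, not the trained model from this work.

```python
import numpy as np

def graphsage_mean_layer(H, adj, W_self, W_neigh):
    """One GraphSAGE layer with a mean aggregator (sketch).
    H: (num_nodes, d_in) node features; adj: dict node -> list of neighbours.
    The same learned weights apply to nodes unseen during training,
    which is what makes the framework inductive."""
    out = []
    for v in range(H.shape[0]):
        nbrs = adj.get(v, [])
        h_n = H[nbrs].mean(axis=0) if nbrs else np.zeros(H.shape[1])
        z = W_self @ H[v] + W_neigh @ h_n   # combine self and neighbourhood
        out.append(np.maximum(z, 0.0))      # ReLU
    Z = np.stack(out)
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    return Z / np.clip(norms, 1e-12, None)  # L2-normalise embeddings
```

Stacking such layers lets each node's embedding summarise its k-hop transactional neighbourhood, which is what the downstream money mule model consumes.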
§ Abstract Large Language Models (LLMs) have demonstrated substantial progress in biomedical and clinical applications, motivating rigorous evaluation of their ability to answer nuanced, evidence-based questions. We curate a multi-source benchmark drawing from Cochrane systematic reviews and clinical guidelines, including structured recommendations from the American Heart Association and narrative guidance used by insurers. Using GPT-4o-mini and GPT-5, we observe consistent performance patterns across sources and clinical domains: accuracy is highest on structured guideline recommendations (90%) and lower on narrative guideline and systematic review questions (60–70%). We also find a strong correlation between accuracy and the citation count of the underlying systematic reviews, where each doubling of citations is associated with roughly a 30% increase in the odds of a correct answer. Models show moderate ability to reason about evidence quality when contextual information is supplied. When we incorporate retrieval-augmented prompting, providing the gold-source abstract raises accuracy on previously incorrect items to 0.79; providing the top 3 PubMed abstracts (ranked by semantic relevance) improves accuracy to 0.23, while random abstracts reduce accuracy (0.10, within temperature variation). These effects are mirrored in GPT-4o-mini, underscoring that source clarity and targeted retrieval—not just model size—drive performance. Overall, our results highlight both the promise and current limitations of LLMs for evidence-based clinical question answering. Retrieval-augmented prompting emerges as a useful strategy to improve factual accuracy and alignment with source evidence, while stratified evaluation by specialty and question type remains essential to understand current knowledge access and to contextualize model performance.
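The retrieval-augmented setup described above can be approximated with a simple relevance ranking: score candidate abstracts against the question and prepend the top-k to the prompt. In the sketch below, bag-of-words cosine similarity stands in for the semantic embedding ranking used in the study, and all function names are illustrative.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_abstracts(question, abstracts, k=3):
    """Rank candidate abstracts by similarity to the question; a cheap
    stand-in for embedding-based semantic relevance ranking."""
    q = Counter(question.lower().split())
    return sorted(abstracts,
                  key=lambda a: cosine(q, Counter(a.lower().split())),
                  reverse=True)[:k]

def build_prompt(question, abstracts):
    """Prepend the retrieved context to the question, RAG-style."""
    ctx = "\n\n".join(top_k_abstracts(question, abstracts))
    return f"Context:\n{ctx}\n\nQuestion: {question}\nAnswer:"
```

The contrast in the results above (gold abstract vs. top-3 vs. random) corresponds to swapping what this retrieval step returns.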
Evaluating Large Language Models for Evidence-Based Clinical Question Answering cwang271@jh.edu Johns Hopkins University, USA Yiqun Chen yiqunc@jhu.edu Johns Hopkins University, USA Keywords: Large Language Models, Clinical Question Answering, Biomedical NLP, Evidence-Based Medicine, Benchmark Datasets *Data and Code Availability The raw materials are publicly available from their original repositories. We release our curated question–answer dataset, data processing scripts, and evaluation code at. § INTRODUCTION Large Language Models (LLMs) have demonstrated strong capabilities in open-domain and medical question answering and reasoning, but their performance in complex, evidence-based clinical domains remains an active area of exploration. While prior benchmarks have evaluated biomedical QA performance across various formats, most existing datasets are derived from well-established medical practice and standardized questions (e.g., MedQA from medical licensing exams). The transportability of these QA datasets to real-world clinical practice has recently been called into question. This has spurred growing interest in whether LLMs can accurately address clinical questions grounded in diverse sources of evidence, particularly in settings that require reasoning about evidence quality. In particular, many clinically relevant questions are difficult to characterize because the underlying evidence may be missing or contradictory (e.g., differing results from clinical trials vs. observational studies).
Moreover, such information is not readily available in standalone test formats (such as board exams), since the body of clinical evidence continuously evolves. A key avenue for capturing such evidence is through systematic reviews, which are widely regarded as the gold standard for evidence-based medicine. Systematic reviews begin by surveying the full body of available research, then apply inclusion and exclusion criteria to filter eligible studies, extract relevant data (with graded evidence levels and risk-of-bias assessments), and finally synthesize findings—often using meta-analysis—to summarize the quantitative evidence. Increasingly, researchers have sought to leverage these rich textual and quantitative narratives to build more clinically realistic QA datasets. However, existing QA datasets are primarily designed to serve as benchmarks for LLMs and often fail to examine deeper characteristics of the underlying evidence (e.g., whether the cited studies are well-established or frequently cited, or the subject matter of the QA pair) and how these characteristics affect model accuracy. Moreover, most evaluations are structured as benchmarks of so-called “zero-shot” abil
Miniature Microphone Array for Surface Wave Localization Co-primary authors Carnegie Mellon University USA siqiz2@andrew.cmu.edu [1] Carnegie Mellon University USA Tsinghua University China xiyuxinz@andrew.cmu.edu [1] Carnegie Mellon University USA Michigan State University USA vuduc2@msu.edu [1] Carnegie Mellon University USA Shanghai Jiao Tong University China riderdecade@sjtu.edu.cn Carnegie Mellon University USA Universidad Pontificia Comillas Spain clara.palacios.spain@gmail.com Carnegie Mellon University USA jiangyiz@andrew.cmu.edu Tsinghua University China yuntaowang@tsinghua.edu.cn Carnegie Mellon University USA mayankgoel@cmu.edu Carnegie Mellon University USA justinchan@cmu.edu

Figure: The system enables hands-free monitoring of micro-mechanical cardiac events across a range of hearables, including (a) an older adult sleeping at home with over-ear headphones, (b) a driver wearing wireless earbuds, and (c) a commuter on a train using bone-conduction earphones; circular markers on the waveforms indicate micro-cardiac events detected by our system. (d) Photograph of the smartphone app user interface, showing a user profile, the original hearable recording, and the reconstructed SCG and GCG signals, surrounded by four compatible hearables: over-ear headphones, wireless earbuds, wired earphones, and a bone-conduction headphone.

§ EVALUATION
§ Abstract Signature-based methods have recently gained significant traction in machine learning for sequential data. In particular, signature kernels have emerged as powerful discriminators and training losses for generative models on time-series, notably in quantitative finance. However, existing implementations do not scale to the dataset sizes and sequence lengths encountered in practice. We present , a high-performance Python library offering optimised implementations of signatures and signature kernels on CPU and GPU, fully compatible with PyTorch’s automatic differentiation. Beyond an efficient software stack for large-scale signature-based computation, we introduce a novel differentiation scheme for signature kernels that delivers accurate gradients at a fraction of the runtime of existing libraries. § INTRODUCTION Most forms of sequential data such as financial time series, on sufficiently fine time scales, can be represented as a continuous path x: [0,1] →ℝ^d. It was first shown by, and then explored in greater detail and generality in the context of rough path theory in, that any path may be faithfully represented, up to reparameterisation, by the collection of its iterated integrals known as the path-signature. The path-signature S(x) is formally defined as the solution to the tensor differential equation d y_t = y_t ⊗ d x_t in the free tensor algebra T((ℝ^d)) := ∏_n=0^∞ (ℝ^d)^⊗ n, and thereby can be interpreted as a non-commutative analogue of the exponential function. By applying Picard iteration, one recovers the familiar formulation of the path-signature as the sequence of iterated integrals S(x) = (1, ∫_0^1 dx_t, ∫_{0<t_1<t_2<1} dx_{t_1} ⊗ dx_{t_2}, …). Signature methods have found applications in a wide range of fields, including cybersecurity, information theory, and even quantum computing. Signatures have also served as a theoretical foundation for proving universality properties of neural differential equations and more recently of state-space models (SSMs).
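The truncated signature of a piecewise-linear path can be computed directly from this picture: each level-k coefficient is a k-fold tensor, the signature of a single segment is the truncated tensor exponential of its increment, and segments combine via the tensor-algebra product (Chen's identity). The sketch below, with a nested-list tensor representation chosen for illustration, is not the library's implementation.

```python
import numpy as np

def tensor_exp(delta, n):
    """Truncated tensor exponential of a segment increment:
    the level-k term is delta^{⊗k} / k!  (level 0 is the scalar 1)."""
    levels = [np.ones(())]
    term = np.ones(())
    for k in range(1, n + 1):
        term = np.multiply.outer(term, delta) / k
        levels.append(term)
    return levels

def chen_product(S, T, n):
    """Chen's identity: truncated tensor-algebra product of two signatures."""
    return [sum(np.multiply.outer(S[i], T[k - i]) for i in range(k + 1))
            for k in range(n + 1)]

def signature(path, n):
    """Truncated signature of a piecewise-linear path (rows = points)."""
    path = np.asarray(path, float)
    sig = tensor_exp(path[1] - path[0], n)
    for a, b in zip(path[1:-1], path[2:]):
        sig = chen_product(sig, tensor_exp(b - a, n), n)
    return sig
```

Reparameterisation invariance provides a quick check: two unit steps along a 1-D line must give the same signature as one step of length two, i.e. level-1 coefficient 2 and level-2 coefficient 2²/2 = 2.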
In generative modelling, signatures have enabled new approaches to synthesizing financial time series in a model-independent manner, been used as universal nonlinearities in Seq2Seq architectures, and provided representation spaces for training score-based diffusion models on time series. For a detailed and pedagogical introduction to the subject, the reader is referred to, while offers a survey of recent applications. In practice, paths are typically obtained via piecewise linear interpolation of discrete time series. Combining a non-trivial algebraic relation known as Chen's identity with the elementary fact that the signature of a linear segment is the tensor exponential of its increment yields a concise expression for the signature of a piecewise linear path x = x^1 * ⋯ * x^L as S(x) = exp(Δ x^1) ⊗⋯⊗exp(Δ x^L). This expression underlies the implementation of signature computations in standard Python libraries such as,, and. It enables an efficient evaluation of the signature with time complexity 𝒪(Ld^n), where n ∈ℕ is the truncation level. However, because the number of tensor coefficients grows as d^n, the method becomes computationally prohibitive for large n. A widely adopted solution to this curse of dimensionality is the use of signature kernels. These kernels take the form ⟨ S(x), S(y) ⟩, for suitable inner products ⟨·, ·⟩ on T((ℝ^d)), and can be computed efficiently without explicitly evaluating the feature map S(x). In particular, recent work has shown that the signature kernel satisfies a Goursat PDE. Signature kernels have since found applications in hypothesis testing, causality, kernel-based solvers for path-dependent PDEs in derivative pricing under rough volatility, kernel formulations of deep hedging, and have even been shown to arise as infinite-width limits of neural networks. They have also been used to train neural SDEs for time-series generation across diverse fields, including fluid dynamics, computational neuroscience, and quantitative finance.
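The Goursat PDE characterisation suggests a simple numerical scheme: for k(s,t) = ⟨S(x|_[0,s]), S(y|_[0,t])⟩ the PDE reads ∂²k/∂s∂t = ⟨ẋ_s, ẏ_t⟩ k with boundary k(0,·) = k(·,0) = 1, and it can be discretised on the grid induced by the two piecewise-linear paths. The sketch below uses a first-order finite-difference update; it illustrates the idea and is not the optimised scheme of the library.

```python
import numpy as np

def sig_kernel(x, y):
    """Signature kernel <S(x), S(y)> via a first-order finite-difference
    scheme for the Goursat PDE d²k/(ds dt) = <dx_s, dy_t> k (a sketch)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = np.diff(x, axis=0), np.diff(y, axis=0)
    K = np.ones((len(dx) + 1, len(dy) + 1))   # boundary k(0,.) = k(.,0) = 1
    for i in range(len(dx)):
        for j in range(len(dy)):
            inc = dx[i] @ dy[j]               # <dx_i, dy_j> on this cell
            K[i + 1, j + 1] = K[i + 1, j] + K[i, j + 1] + K[i, j] * (inc - 1.0)
    return K[-1, -1]
```

For two unit-increment straight lines in 1-D the exact value is ⟨exp(1), exp(1)⟩ = Σ_k 1/(k!)² = I_0(2) ≈ 2.2796, which the scheme approaches as the grid is refined.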
Despite these advances, current software implementations of signatures and signature kernels do not scale to large datasets of long time series. Moreover, when signature kernels are employed as loss functions, efficient and accurate backpropagation is crucial. Existing implementations compute derivatives using a second PDE; while natural, this approach often yields inaccurate gradients, especially for short time series, leading to unreliable model training. We introduce, a high-performance Python library that wraps C++ and CUDA code, offering optimised CPU- and GPU-amenable implementations of signatures and signature kernels, fully compatible with PyTorch’s automatic differentiation. is significantly faster than existing packages on both CPU and GPU thanks to algorithmic improvements, optimized memory access, and hardware-level parallelism via SIMD instructions. It also supports efficient on-the-fly application of lead–lag and time-augmentation transformations, providing a substantial speed-up in financial applications. All functions are fully backpropagatable through a PyTorch API. The library is op
Analytical Design and Development of a Modular and Intuitive Framework for Robotizing and Enhancing the Existing Endoscopic Procedures § Abstract Despite the widespread adoption of endoscopic devices for several cancer screening procedures, manual control of these devices remains challenging for clinicians, leading to several critical issues such as increased workload, fatigue, and distractions. To address these issues, in this paper, we introduce the design and development of an intuitive, modular, and easily installable mechatronic framework. This framework includes (i) a novel nested collet-chuck gripping mechanism that can readily be integrated and assembled with the existing endoscopic devices and control their bending degrees-of-freedom (DoFs); (ii) a feeder mechanism that can control the insertion/retraction DoF of a colonoscope, and (iii) a complementary and intuitive user interface that enables simultaneous control of all DoFs during the procedure. To analyze the design of the proposed mechanisms, we also introduce a mathematical modeling approach and a design space for optimal selection of the parameters involved in the design of the gripping and feeder mechanisms. Our simulation and experimental studies thoroughly demonstrate the performance of the proposed mathematical modeling and robotic framework., Yash Kulkarni^1, Jiaqi Xue^1, Naruhiko Ikoma^2, Member, IEEE, and Farshid Alambeigi^1, Member, IEEE *Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number R21CA280747.^1M. R. Javazm, Y. Kulkarni, J. Xue, and F. Alambeigi are with the Walker Department of Mechanical Engineering and the Texas Robotics at the University of Texas at Austin, Austin, TX, 78712, USA. Email: {mohammad.rafiee, kulkarni.yash08, jiaqixue}@utexas.edu, farshid.alambeigi@austin.utexas.edu.^2N.
Ikoma is with the Department of Surgical Oncology, Division of Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, 77030, USA. Email: nikoma@mdanderson.org. Modular Mechatronics System, Steerable Colonoscope/Endoscope, Colorectal Cancer Screening. § INTRODUCTION Since their invention in the late 1800s, endoscopic devices have evolved into versatile and multi-functional instruments, expanding beyond their original role of examining complex anatomies (e.g., the urethra and gastrointestinal (GI) tract) into a platform used for diverse medical interventions such as biopsy and polypectomy. For example, colonoscopy, as an endoscopic procedure, is the gold standard of colorectal cancer (CRC) screening; CRC is a prominent cause of cancer-related deaths worldwide. Despite their tremendous benefits, literature states that colonoscopic procedures suffer from an early detection miss rate of approximately 30% for CRC polyps. This high detection miss rate can be attributed to a non-optimal mechanical design, a non-intuitive control interface, a slow learning curve, and limitations of the optical sensing system used in these endoscopic devices (e.g., occlusion and blur). While researchers and companies have made significant strides in enhancing endoscopic technologies with advanced optical sensors/cameras, there has been a noticeable gap in addressing the steerability issues and intuitiveness of controlling these devices. Surprisingly, after more than a century, the overall mechanical structure and control interface of these devices have not gone through significant changes since their invention.
Illustration of the conventional colonoscopy procedure for CRC diagnosis: (1) a surgeon holding a colonoscope, (2) close-up view of the handheld knob of the colonoscope, (3) a monitor for screening, (4) the procedure of holding and manually navigating the colonoscope by a skillful surgeon to bend the end effector in the upward/downward direction, (5) the left/right direction, and (6) insertion/retraction of the colonoscope. As shown in Fig., to perform an endoscopic procedure, a clinician uses the control handle (CH) of this device and pulls/pushes the tube to actuate three degrees of freedom (DoFs). Since the procedure requires more DoFs than the clinician has hands (i.e., three DoFs versus two hands), a manual locking lever has also been provided to first lock one of the controlled DoFs and then control the other one when needed. This sequential locking of the DoFs can make control of these flexible instruments – which are steered inside a deformable anatomy (e.g., a colon or esophagus) – very challenging and non-intuitive. This problem becomes more critical when a clinician decides to hold the position of the endoscope and perform a specific procedure such as a biopsy or polypectomy. The cumbersome and non-intuitive handling of these flexible devices directly affects (i) the physical and mental fatigue of clinicians and thereby the quality of colonoscopic procedures, (ii) extended duration of the procedure, and (iii) patient
Beyond the Silence: How Men Navigate Infertility Through Digital Communities and Data Sharing § Abstract Men experiencing infertility face unique challenges navigating Traditional Masculinity Ideologies that discourage emotional expression and help-seeking. This study examines how Reddit's r/maleinfertility community helps overcome these barriers through digital support networks. Using topic modeling (115 topics), network analysis (11 micro-communities), and time-lagged regression on 11,095 posts and 79,503 comments from 8,644 users, we found the community functions as a hybrid space: informal diagnostic hub, therapeutic commons, and governed institution. Medical advice dominates discourse (63.3%), while emotional support (7.4%) and moderation (29.2%) create essential infrastructure. Sustained engagement correlates with actionable guidance and affiliation language, not emotional processing. Network analysis revealed structurally cohesive but topically diverse clusters without echo chamber characteristics. Cross-posters (20% of users) who bridge r/maleinfertility and the gender-mixed r/infertility community serve as navigators and mentors, transferring knowledge between spaces. These findings inform trauma-informed design for stigmatized health communities, highlighting role-aware systems and navigation support. Rutgers University New Brunswick, NJ USA tawfiq.ammari@rutgers.edu Johns Hopkins Bloomberg School of Public Health Baltimore, Maryland USA zkhondo1@jh.edu University at Buffalo Buffalo, NY USA ywang492@buffalo.edu Rutgers University New Brunswick, NJ USA nikki.roda@gmail.com CCS Concepts: Human-centered computing → Human computer interaction (HCI). § INTRODUCTION Male infertility accounts for 20–30% of infertility cases, yet remains under-discussed, under-supported, and heavily stigmatized.
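At its core, the cross-poster analysis above reduces to identifying users active in both communities and measuring their share. A minimal sketch over (user, subreddit) post records, a simplified stand-in for the Reddit data used in the study, is:

```python
from collections import defaultdict

def find_cross_posters(posts, a="maleinfertility", b="infertility"):
    """Return users who posted in both communities -- the 'bridging'
    users described above. `posts` is an iterable of (user, subreddit)."""
    seen = defaultdict(set)
    for user, sub in posts:
        seen[user].add(sub)
    return sorted(u for u, subs in seen.items() if {a, b} <= subs)

def cross_post_rate(posts, a="maleinfertility", b="infertility"):
    """Fraction of all posting users who bridge the two communities."""
    users = {u for u, _ in posts}
    return len(find_cross_posters(posts, a, b)) / len(users)
```

Applied to the full dataset, a statistic of this kind underlies the reported figure that roughly 20% of users bridge the two subreddits.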
Beyond thwarting reproductive goals, it challenges cultural linkages between masculinity and virility, introducing biological, relational, and psychological uncertainties while offering few social scripts or institutional resources for coping. Unlike many areas of reproductive health that benefit from gendered solidarity and advocacy, men experiencing infertility often face profound isolation. In the absence of formal or culturally accessible support systems, many turn to online spaces for guidance and connection. This study focuses on one such forum, r/maleinfertility, described as a space “for men experiencing infertility and male perspectives on infertility. Partners are encouraged to participate, but we ask they post in the daily Partner's Perspectives thread.” Research in HCI and CSCW shows how online health and parenting communities foster emotional expression, identity work, and peer support. Studies of stay-at-home fathers and non-normative caregivers reveal that men use online forums to reframe vulnerability and caregiving as compatible with strength and responsibility, sometimes creating father-only spaces. Yet, with the exception of Patel et al., male infertility has received little attention in social computing scholarship, despite its potential to illuminate gendered stigma, identity repair, and support-seeking. Building on this literature, we examine Reddit as a platform where men disclose and process infertility experiences, asking: RQ1a: What kinds of discourse emerge in male infertility communities on Reddit? While female infertility discourse is well theorized in terms of emotional labor and peer support, male infertility discourse remains underexplored. Understanding its emotional and linguistic structures can inform platform design and peer support.
Prior work suggests that affective and narrative strategies—such as vulnerability or mentoring—boost engagement, but it is unclear which forms of self-expression are most accepted or rewarded in male infertility communities. We therefore also ask: RQ1b: What types of discourse and user attributes lead to higher engagement in these communities? Online spaces like Reddit operate across macro (platform-wide), meso (multi-subreddit), and micro (single-subreddit or sub-community) levels of norms. These norms shape moderation and acceptable discourse, introducing differences even within one subreddit. Such dynamics matter in stigmatized contexts, where users may encounter narrow perspectives such as despair narratives or medical fatalism. To investigate these dynamics, we ask: RQ2a: Are there distinct sub-communities within the male infertility subreddit? RQ2b: How do different sub-communities frame infertility, masculinity, and coping strategies? RQ2c: Do these sub-communities function as echo chambers? Emerging HCI research highlights the role of cross-posters—users who bridge multiple communities—as knowledge brokers and empathy-builders. In foster care and trauma recovery spaces, cross-posters act as informal mentors, attract more
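Sub-community detection of the kind RQ2a asks about is typically done with modularity-based clustering on a user-interaction graph. A minimal sketch with networkx on a toy graph (the nodes and edges are invented for illustration, not the study's data), including a crude internal-edge ratio sometimes used as a rough echo-chamber proxy:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy user-interaction graph: an edge means two users interacted
# (names are placeholders, not users from the study)
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),   # tight cluster 1
    ("d", "e"), ("e", "f"), ("d", "f"),   # tight cluster 2
    ("c", "d"),                           # bridge between the clusters
])

# Greedy modularity maximization (Clauset-Newman-Moore)
communities = list(greedy_modularity_communities(G))
print(communities)

# Fraction of edges that stay inside one community: values near 1.0 with
# little cross-community topical overlap would hint at echo chambers
internal = sum(1 for u, v in G.edges()
               if any(u in c and v in c for c in communities))
print(internal / G.number_of_edges())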
Prompt Pirates Need a Map: Stealing Seeds helps Stealing Prompts § Abstract Diffusion models have significantly advanced text-to-image generation, enabling the creation of highly realistic images conditioned on textual prompts and seeds. Given the considerable intellectual and economic value embedded in such prompts, prompt theft poses a critical security and privacy concern. In this paper, we investigate prompt-stealing attacks targeting diffusion models. We reveal that numerical optimization-based prompt recovery methods are fundamentally limited as they do not account for the initial random noise used during image generation. We identify and exploit a noise-generation vulnerability (CWE-339), prevalent in major image-generation frameworks, originating from PyTorch's restriction of seed values to a range of 2^32 when generating the initial random noise on CPUs. Through a large-scale empirical analysis conducted on images shared via the popular platform CivitAI, we demonstrate that approximately 95% of these images' seed values can be effectively brute-forced in 140 minutes per seed using our seed-recovery tool, SeedSnitch. Leveraging the recovered seed, we propose PromptPirate, a genetic algorithm-based optimization method explicitly designed for prompt stealing. PromptPirate surpasses state-of-the-art methods, i.e., PromptStealer, P2HP, and CLIP-Interrogator, achieving an 8–11% improvement in LPIPS similarity. Furthermore, we introduce straightforward and effective countermeasures that render seed stealing, and thus optimization-based prompt stealing, ineffective. We have disclosed our findings responsibly and initiated coordinated mitigation efforts with the developers to address this critical vulnerability.§ INTRODUCTION Over the past decade, machine learning techniques have been widely used in various areas of computer science. Among these, diffusion models have emerged as a revolutionary approach to image generation. 
These models have rapidly gained prominence due to their ability to generate highly realistic, cost-effective, and high-quality images conditioned on an input seed and a textual prompt. Artists, designers, and filmmakers have adopted them to craft compelling visuals, generate intricate animations, and build immersive virtual experiences. Notable examples include the award-winning, AI-generated artwork Théâtre d’Opéra Spatial by Jason M. Allen; the AI-generated short film The Crow by Glenn Marshall, winning prizes at Cannes and Linz; and influential virtual personalities, such as the artificial influencer Lil Miquela, who collaborates with global brands. A central aspect of diffusion models is the textual prompt, which is an invaluable asset that defines the quality, specificity, and ultimately, the commercial potential of the generated content. The prompt provides critical semantic context, stylistic precision, and nuanced direction, guiding the model to produce compelling and commercially viable outputs. Mastery of prompt engineering has therefore become its own skill in fields such as advertising, entertainment and social media. The growing economic significance of the expertise of generating the correct modifiers is evidenced by platforms like PromptBase, where specialized prompts are actively traded. Shen et al. estimated that the top 50 sellers on PromptBase collectively sold approximately 45,000 prompts over a nine-month period in 2022, generating roughly 186,525 USD in total revenue. [Figure: Seed knowledge is essential for online prompt stealing. With the stolen seed s, the stolen prompt ("A dog...") matches the original prompt with seed s; a no-seed attack (P2HP) recovers a mismatched prompt ("An alien...").] Given the increasing economic and intellectual value associated with carefully crafted prompts, prompt-stealing represents a significant security challenge.
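The attack surface described above rests on the restricted seed space: a 2^32-sized space is exhaustively searchable. The following toy sketch illustrates the brute-force principle, using Python's `random` module as a stand-in RNG (not PyTorch's actual CPU noise generator) and a 16-bit space so the demo runs in well under a second:

```python
import random

SEED_BITS = 16  # the real vulnerability (CWE-339) concerns 2**32 seeds;
                # 2**16 keeps this toy demo fast

def initial_noise(seed, n=8):
    """Stand-in for the deterministic initial noise a diffusion pipeline
    derives from its seed (illustrative, not PyTorch's RNG)."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# The attacker observes noise derived from an unknown seed
true_seed = 31337
observed = initial_noise(true_seed)

# Exhaustive search over the restricted seed space
recovered = next(s for s in range(2 ** SEED_BITS)
                 if initial_noise(s) == observed)
print(recovered)
```

Because the noise generation is deterministic in the seed, an exact match identifies the seed; the paper's SeedSnitch applies the same principle at 2^32 scale (reported at roughly 140 minutes per seed), after which optimization-based prompt recovery can fix the noise and search only over prompts.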
If the unique style of a movie, artwork, or influencer's identity embedded within a prompt is compromised, it can easily be replicated and falsified, undermining authenticity and potentially leading to financial and reputational harm. Consequently, recent research has increasingly focused on addressing and understanding the vulnerabilities associated with prompt-stealing. Recent work has demonstrated that prompts can be stolen. These works fall into two groups: offline and online approaches. To capture the style associated with an image, classification models have been proposed. These models are expensive to train in an offline phase, but during inference, given an input image, they rapidly generate a likelihood vector for a large set of predefined style modifiers, i.e., specific textual cues describing aesthetic attributes or visual styles such as "cinematic lighting," "oil painting," or "cyberpunk". However, the offline phase of such classifiers is very resource-intensive: they require thousands of images rendered under different seed values and a wide variety of prompts that exhaustively combine relevant style modifiers. This combinatorial
Continuous-Time Value Iteration for Multi-Agent Reinforcement Learning § Abstract Existing reinforcement learning (RL) methods face challenges in handling complex dynamical systems that require interactions at high frequencies or arbitrary time intervals. Continuous-time RL (CTRL) has emerged as a promising alternative to overcome these challenges by utilizing differential value functions that are viscosity solutions to the Hamilton–Jacobi–Bellman (HJB) equation, rather than relying on discrete-time Bellman recursion. However, solving HJB equations through conventional methods remains intractable for high-dimensional dynamics due to the curse of dimensionality (CoD). Furthermore, while substantial prior studies have made progress in CTRL with single-agent systems, their extensions to continuous-time multi-agent RL (CT-MARL) remain unexplored. To address this gap, our paper proposes a novel approach for solving CT-MARL problems. Specifically, we leverage physics‑informed neural networks (PINNs), which offer a scalable approach to alleviate the CoD and approximate HJB-based value functions. Since poor value approximations hinder policy learning, recent studies emphasize the importance of not only minimizing value approximation error but also ensuring correct computation of the value gradients. To this end, we introduce a Value Gradient Iteration (VGI) method that refines value gradients during training, thereby significantly improving the value approximation and its applicability to policy learning. We evaluate our method using continuous‑time variants of standard benchmarks, including multi‑agent particle environment (MPE) and multi‑agent MuJoCo.
Our results demonstrate that our approach consistently outperforms existing continuous‑time RL baselines and scales to complex multi-agent dynamics.§ INTRODUCTION RL has achieved remarkable success in a range of single- and multi-agent interaction tasks, including robotic manipulation, strategy games, autonomous driving, and traffic coordination. Most existing RL methods model these interactions in discrete time, where Bellman backup is computed at a fixed time interval. However, discrete‑time RL (DTRL) is not well-suited for real-world scenarios that often demand high-frequency decision-making or operate at arbitrary, non-uniform time intervals (e.g., autonomous driving and complex dynamical systems ). Specifically, DTRL methods tend to generalize poorly when deployed under time resolutions that differ from training, leading to suboptimal control and stability issues. To address these limitations, CTRL has emerged as an alternative, designed to learn value functions in continuous time. Yet, prior work has almost exclusively focused on single-agent settings. Extending single-agent CTRL to the multi-agent domain poses additional challenges: non-stationarity across agents makes value learning unstable. Although CT-MARL can adopt centralized training with decentralized execution (CTDE) to mitigate non-stationarity, it still struggles to ensure accurate value approximation in high-dimensional, continuous-time environments, which is essential for effective policy learning. In this paper, we introduce a novel approach for CT-MARL that can tackle these challenges. Furthermore, to highlight the fundamental gap between discrete- and continuous-time RL for multi-agent scenarios, we present a didactic case as shown in Fig. (see details in Appendix ). In this simple continuous‑time control task, DT-MARL fails to accurately approximate the true value functions, leading to incorrect control actions, particularly for agent 2. 
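The differential value function in CTRL is characterized by an HJB equation rather than a Bellman recursion. One common discounted infinite-horizon form (a sketch with assumed notation, not necessarily the paper's exact multi-agent formulation) is:

```latex
\rho V(x) \;=\; \max_{u \in \mathcal{U}} \Big[\, r(x,u) \;+\; \nabla V(x)^{\top} f(x,u) \,\Big],
\qquad \dot{x} = f(x,u),
```

where $\rho$ is the discount rate, $f$ the system dynamics, and $r$ the running reward. A PINN approximates $V$ by minimizing the squared residual of this PDE at sampled states plus boundary terms; per the abstract, VGI additionally refines $\nabla V$, which enters the residual directly and therefore dominates policy quality.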
In contrast, our CT‑MARL algorithm closely follows the ground‑truth trajectory, maintains high returns, and generates accurate control actions for both agents. [Figure panels: Value Approximation; Reward Approximation; Agent 1 Action; Agent 2 Action. Caption: The performance of CT-MARL and DT-MARL is compared on a continuous-time, two-agent coupled oscillator task. DT-MARL suffers from significant bias and error when applied in the continuous domain. In contrast, CT-MARL yields smoother actions, higher rewards, and more accurate value approximations, closely aligning with the analytical LQR ground truth.] Unlike DTRL, which relies on the Bellman operator, CTRL leverages HJB PDEs to compute differential value functions. However, solving HJB PDEs, especially in multi-agent cooperative settings, through conventional approaches (e.g., dynamic programming) suffers from CoD in high-dimensional dynamical systems, where the computational complexity grows exponentially with the state dimension. As a result, it becomes intractable to incorporate value functions solved by dynamic programming into the MARL framework. PINNs have emerged as a powerful alternative to circumvent CoD, and offer convergence guarantees for problems with smooth solutions. To approximate the solutions of HJB PDEs, PINNs translate the underlying physics law (e.g., PDEs) along with boundary conditions i
Data Skeleton Learning: Scalable Active Clustering with Sparse Graph Structures § Abstract In this work, we focus on the efficiency and scalability of pairwise constraint-based active clustering, crucial for processing large-scale data in applications such as data mining, knowledge annotation, and AI model pre-training. Our goals are threefold: (1) to reduce computational costs for iterative clustering updates; (2) to enhance the impact of user-provided constraints to minimize annotation requirements for precise clustering; and (3) to cut down memory usage in practical deployments. To achieve these aims, we propose a graph-based active clustering algorithm that utilizes two sparse graphs: one for representing relationships between data (our proposed data skeleton) and another for updating this data skeleton. These two graphs work in concert, enabling the refinement of connected subgraphs within the data skeleton to create nested clusters. Our empirical analysis confirms that the proposed algorithm consistently facilitates more accurate clustering with dramatically less input of user-provided constraints, and outperforms its counterparts in terms of computational performance and scalability, while maintaining robustness across various distance metrics. Authors: Wen-Bo Xie [a], Xun Fu [a] (corresponding author), Bin Chen [a], Yan-Li Lee [b], Tao Deng [a], Tian Zou [a], Xin Wang [a], Zhen Liu [c], and Jaideep Srivastava [d]. [a] School of Computer Science and Software Engineering, Southwest Petroleum University, Chengdu 610500, People's Republic of China. [b] School of Computer and Software Engineering, Xihua University, Chengdu 610039, People's Republic of China. [c] Web Sciences Center, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China. [d] College of Science and Engineering, University of Minnesota, Minneapolis, MN 55455, United States of America.
[cor1] Corresponding author at: School of Computer Science and Software Engineering, Southwest Petroleum University, Chengdu 610500, China. E-mail: fuxun0529@163.com (Xun Fu). Keywords: Interactive clustering, Active learning, Scalable clustering, Semi-supervised clustering, Knowledge annotation. § INTRODUCTION Clustering is a powerful tool for data mining, data pre-processing, and knowledge annotation. Its use is therefore essential in advancing data-driven technologies and highly valuable across a range of applications, from industrial automation to intelligent systems. Nevertheless, classical unsupervised clustering methods lack the finesse for high-quality groupings, highlighting the necessity of human-machine synergy. In light of this, constrained clustering using active learning frameworks (a.k.a. active clustering) emerges as a crucial approach, with those based on pairwise constraints being particularly vital. This is because pairwise constraint-based active clustering algorithms adeptly handle the influx of data replete with ever-changing information. This capacity enables these algorithms to navigate the complexities of data such as voiceprints, industrial production data, and streaming videos, which do not conform to traditional labeling paradigms, thereby facilitating a more nuanced understanding of, and response to, information that spans a wide spectrum of contexts. However, the rapid expansion of data volumes presents new challenges to the practical applicability of these pairwise constraint-based active clustering algorithms. At the core of active clustering lies its human-in-the-loop design. Typically, this loop starts with the machine initializing a preliminary clustering result. From there, the machine leads human-machine interactions by identifying suspicious data instances within the current clustering result and soliciting human feedback to refine the grouping.
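The human-in-the-loop query pattern described above can be sketched with a simulated oracle. This is an illustrative toy of pairwise-constraint clustering in general, not the proposed DSL data-skeleton algorithm: must-link answers merge connected components via union-find, cannot-link answers are cached, and the final clusters are the components.

```python
# Hidden ground truth played by the simulated human annotator
labels = {0: "A", 1: "A", 2: "B", 3: "B", 4: "A"}

def oracle(i, j):
    """Simulated annotator: True = must-link, False = cannot-link."""
    return labels[i] == labels[j]

parent = list(range(len(labels)))  # union-find over must-link answers

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

cannot = set()
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        ri, rj = find(i), find(j)
        if ri == rj or (ri, rj) in cannot:
            continue  # already merged, or a cached cannot-link; skip the query
        if oracle(i, j):
            parent[ri] = rj            # must-link: merge components
        else:
            cannot.add((ri, rj))       # cannot-link: remember the verdict

# Clusters are the connected components of the must-link graph
clusters = {}
for i in labels:
    clusters.setdefault(find(i), set()).add(i)
print(list(clusters.values()))
```

Note the cached roots can go stale after merges, which only costs redundant queries, not correctness; reducing exactly this query and bookkeeping cost is what the paper's sparse data skeleton targets.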
This approach focuses on optimizing the clustering result with minimal user-provided constraints, contrasting with the less targeted methods of semi-supervised learning and the exhaustive human-led clustering. Yet, this enhanced human-machine interactive efficiency incurs significant time and space costs:
* The process of refining clustering results entails iteratively identifying suspicious data points and then reconstructing clusters, which results in a significant increase in time complexity.
* The need for auxiliary structures substantially amplifies space complexity.
* Dealing with pairwise constraints further exacerbates this challenge, resulting in a quadratic increase in both the time and space complexities of the algorithm.
These bottlenecks in computational efficiency and scalability limit the practicality of active clustering based on pairwise constraints, particularly in the context of real-world big data applications. To address this, we propose the Data Skeleton Learning-based Active Clustering (DSL), a scalable active learning framework for interactive clustering that incorporates a highly effective human-in-the-loop component. DSL excels in achieving more accurate clustering with minimal cost, leveraging
Arabic Large Language Models for Medical Text Generation § Abstract Efficient hospital management systems (HMS) are critical worldwide to address challenges such as overcrowding, limited resources, and poor availability of urgent health care. Existing methods often lack the ability to provide accurate, real-time medical advice, particularly for irregular inputs and underrepresented languages. To overcome these limitations, this study proposes an approach that fine-tunes large language models (LLMs) for Arabic medical text generation. The system is designed to assist patients by providing accurate medical advice, diagnoses, drug recommendations, and treatment plans based on user input. The research methodology required the collection of a unique dataset from social media platforms, capturing real-world medical conversations between patients and doctors. The dataset, which includes patient complaints together with medical advice, was properly cleaned and preprocessed to account for multiple Arabic dialects. Fine-tuning state-of-the-art generative models, such as Mistral-7B-Instruct-v0.2, LLaMA-2-7B, and GPT-2 Medium, optimized the system’s ability to generate reliable medical text. Results from evaluations indicate that the fine-tuned Mistral-7B model outperformed the other models, achieving average BERTScore (Bidirectional Encoder Representations from Transformers) precision, recall, and F1 values of 68.5%, 69.08%, and 68.5%, respectively. Comparative benchmarking and qualitative assessments validate the system’s ability to produce coherent and relevant medical replies to informal input.
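BERTScore reports precision, recall, and F1 per example, where F1 is the harmonic mean of the other two. A minimal sketch of that relationship; the inputs below are illustrative, and note that the paper's reported averages are taken over per-example scores, so its reported F1 need not equal the harmonic mean of its reported average P and R:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall, as used by BERTScore F1."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values only (roughly the magnitudes reported in the abstract)
print(round(f1(0.685, 0.6908), 4))
```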
This study highlights the potential of generative artificial intelligence (AI) in advancing HMS, offering a scalable and adaptable solution for global healthcare challenges, especially in linguistically and culturally diverse environments.§ INTRODUCTION The increasing global demand for healthcare services, coupled with limited resources and rising patient expectations, highlights the critical need for advanced technologies that enable disease detection, resource allocation, and emergency reporting. Current HMS face significant challenges, including the inability to deliver fast and accurate medical support, effective resource distribution, and immediate reporting of critical cases. Training large language models (LLMs) for medical text creation provides an important opportunity for addressing these issues. By leveraging LLMs, healthcare systems can improve accuracy, scalability, and adaptability, bridging the gap between patients and efficient care delivery. Despite progress in artificial intelligence (AI), existing HMS solutions generally rely on traditional machine learning (ML) models and rule-based systems, which often struggle to handle the complexity of medical data. These systems are constrained by the limited availability of high-quality, domain-specific medical datasets, especially for less-represented languages and contexts. Moreover, traditional ML models have difficulty processing the unstructured, informal inputs commonly seen in patient feedback, resulting in reduced accuracy and reliability. The absence of robust solutions for integrating disease detection, resource allocation, and emergency reporting into a unified framework exacerbates these limitations. To address these gaps, a dataset was curated that contains patient complaints and corresponding doctor responses sourced from social media platforms. These data represent authentic social interactions in which qualified doctors provide medical advice directly.
Unlike conventional datasets, this collection reflects real-world language use, including unstructured, informal text and diverse linguistic variations. Such data are essential for training LLMs capable of generating accurate, contextually relevant medical responses. However, the lack of fine-tuned LLMs specifically tailored for medical text generation remains a significant bottleneck in the adoption of generative AI for healthcare. This study introduces a framework for fine-tuning LLMs in medical text creation, thereby facilitating more accurate disease detection, resource allocation, and emergency reporting. Figure illustrates the system architecture, where user inputs are preprocessed and then passed to the LLM, which produces outputs such as disease predictions, resource optimization strategies, and emergency alerts. This approach applies modern generative AI techniques to overcome the constraints of classical ML systems and address the complexity of unstructured medical data. [Figure: Explanation of the LLM-based system architecture.] The main contributions of this work are as follows:
* A curated dataset of Arabic medical conversations from social media platforms, capturing real-world patient complaints and doctor responses.
* A fine-tuned generative AI model for medical text generation, addressing the complexities of unstructured input and multiple Arabic dialect
Efficient and Accurate Downfacing Visual Inertial Odometry § Abstract Visual Inertial Odometry (VIO) is a widely used computer vision method that determines an agent's movement through a camera and an IMU sensor. This paper presents an efficient and accurate VIO pipeline optimized for applications on micro- and nano-UAVs. The proposed design incorporates state-of-the-art feature detection and tracking methods (SuperPoint, PX4FLOW, ORB), all optimized and quantized for emerging RISC-V-based ultra-low-power parallel systems on chips (SoCs). Furthermore, by employing a rigid body motion model, the pipeline reduces estimation errors and achieves improved accuracy in planar motion scenarios. The pipeline's suitability for real-time VIO is assessed on an ultra-low-power SoC in terms of compute requirements and tracking accuracy after quantization. The pipeline, including the three feature tracking methods, was implemented on the SoC for real-world validation. This design bridges the gap between high-accuracy VIO pipelines that are traditionally run on computationally powerful systems and lightweight implementations suitable for microcontrollers. The optimized pipeline on the GAP9 low-power SoC reduces RMSE by a factor of up to 3.65 on average over the baseline pipeline when using the ORB feature tracker. The analysis of the computational complexity of the feature trackers further shows that PX4FLOW achieves on-par tracking accuracy with ORB at a lower runtime for movement speeds below 24 pixels/frame. Jonas Kühne, Christian Vogt, Member, IEEE, Michele Magno, Senior Member, IEEE, and Luca Benini, Fellow, IEEE. This work was supported by the Swiss National Science Foundation’s TinyTrainer project under Grant number 207913. Jonas Kühne is with the Integrated Systems Laboratory and the Center for Project-Based Learning, ETH Zurich, 8092 Zurich, Switzerland (e-mail: kuehnej@ethz.ch).
Christian Vogt is with the Center for Project-Based Learning, ETH Zurich, 8092 Zurich, Switzerland (e-mail: christian.vogt@pbl.ee.ethz.ch). Michele Magno is with the Center for Project-Based Learning, ETH Zurich, 8092 Zurich, Switzerland (e-mail: michele.magno@pbl.ee.ethz.ch). Luca Benini is with the Integrated Systems Laboratory, ETH Zurich, 8092 Zurich, Switzerland, and also with the Department of Electrical, Electronic and Information Engineering, University of Bologna, 40136 Bologna, Italy (e-mail: luca.benini@unibo.it). Copyright 2025 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org. Constrained Devices, Embedded Devices, Energy Efficient Devices, Cyber-Physical Systems, Mobile and Ubiquitous Systems, Real-Time Systems § INTRODUCTION Visual Inertial Odometry (VIO) describes the process of determining an agent's movement through the use of camera and Inertial Measurement Unit (IMU) data. Cameras are used in pure Visual Odometry (VO) to generate a movement estimate from one frame to another by considering the displacement of features or brightness patches between camera images. While stereo VO (i.e., using two cameras) can estimate metric depth information through extrinsic calibration, monocular VO can only estimate relative pixel movements. It lacks an absolute scale but shows little drift over time. IMUs, on the other hand, are capable of obtaining metric measurements by measuring linear acceleration and rotational velocity. Although the odometry could be estimated purely from IMU data, it is inaccurate due to measurement noise and bias, leading to high estimation errors and, therefore, drift of the odometry signal. To compensate for this, VIO utilizes the complementary nature of (monocular) VO and IMU data to produce a motion prediction with little drift and a metric scale. 
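The RMSE-reduction factor quoted in the abstract compares trajectory error between two pipelines against a common reference. A minimal sketch of how such a factor is computed, on toy 1-D trajectories (the numbers are invented for illustration, not the paper's flight data):

```python
import math

def rmse(est, ref):
    """Root-mean-square error between an estimated and a reference trajectory."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(est, ref)) / len(ref))

# Toy 1-D position traces (illustrative only)
reference = [0.0, 1.0, 2.0, 3.0, 4.0]
baseline  = [0.0, 1.4, 2.5, 3.7, 4.9]   # drifting baseline estimate
optimized = [0.0, 1.1, 2.1, 3.1, 4.2]   # estimate with motion-model constraint

# Improvement factor: baseline error divided by optimized error
improvement = rmse(baseline, reference) / rmse(optimized, reference)
print(round(improvement, 2))
```

The paper's "up to 3.65x" figure is exactly this kind of ratio, computed on real trajectories rather than these toy traces.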
In this work, we present a downfacing VIO pipeline that is suitable for resource-constrained microcontrollers and SoCs used in small-scale UAVs, e.g., the pictured GAP9 shield. As feature detectors and trackers, we investigate the classical ORB descriptor and the machine-learned SuperPoint descriptor and compare both approaches to the existing parallelized PX4FLOW implementation. VIO systems have been well-researched and miniaturized to a certain extent, specifically targeting smartphones and mini drones. To allow the use of highly accurate VIO in micro- and nano-drones, as well as in AR glasses, these capable systems need to be scaled down further. In the literature, we can identify two directions in VIO research: (i) accurate but resource-demanding VIO pipelines that typically run on systems that feature an operating system and can rely on powerful libraries such as OpenCV and Ceres thanks to the simplified memory handling and abstraction of parallelization, and (ii) heavily optimized bare-metal implementations on low-power microprocessors. These systems usually rely on much simpler, mo
REAL-WORLD MUSIC PLAGIARISM DETECTION WITH MUSIC SEGMENT TRANSCRIPTION SYSTEM § Abstract As a result of continuous advances in Music Information Retrieval (MIR) technology, generating and distributing music has become more diverse and accessible. In this context, interest in music intellectual property protection is increasing to safeguard individual music copyrights. In this work, we propose a system for detecting music plagiarism by combining various MIR technologies. We developed a music segment transcription system that extracts musically meaningful segments from audio recordings to detect plagiarism across different musical formats. With this system, we compute similarity scores based on multiple musical features that can be evaluated through comprehensive musical analysis. Our approach demonstrated promising results in music plagiarism detection experiments, and the proposed method can be applied to real-world music scenarios. We also collected a Similar Music Pair (SMP) dataset for musical similarity research using real-world cases. The dataset is publicly available.[https://github.com/Mippia/smp_dataset] Mippia Inc. E-mail: gsh@mippia.com § INTRODUCTION Music plagiarism is one of the most important copyright issues in society. The unauthorized copying of musical elements can have serious legal and economic consequences. Although the term has a strict definition, "music plagiarism" as commonly used can be controversial even when the musician did not copy intentionally. Therefore, technology for detecting plagiarism can be useful for both original composers and alleged plagiarists. With the advancement of AI music generation, creating and distributing music has become more accessible, making plagiarism detection important. Research on defining musical similarity and detecting music plagiarism has been conducted widely. However, applying these studies to real audio data faces several challenges.
Most plagiarism detection research relies on MusicXML or MIDI formats, while commercial music exists as raw audio, requiring transcription. Also, many studies assume melodically similar music is plagiarized, but this differs significantly from real-world cases. Moreover, real plagiarism cases are complex, potentially including vocals, varying in length, or containing brief plagiarized segments within longer tracks. A proper model needs to identify musically meaningful segments and detect plagiarism within them. To address these issues, we propose transcribing raw audio into musical representations to organize essential musical features. Our goal is extracting musically meaningful and quantized data for plagiarism detection. Although similar ideas exist, we focus on creating structured segments optimized for plagiarism detection by combining various music information retrieval techniques. Based on these quantized data, we explain how to detect plagiarized music using similarity metrics. Finally, we construct a Similar Music Pair (SMP) dataset containing metadata of similar music pairs with timestamps of similar segments. [Figure: Overall structure of the music plagiarism detection system.] § RELATED WORKS §.§ Music Transcription Music transcription extracts note information from raw audio, typically producing MIDI representations. This task has been studied across various genres and instruments, as well as with metadata such as lyrics. Beyond MIDI, there is growing interest in transcribing audio into music-score-like representations. This approach allows transcription of complete musical progressions with temporal components, such as measures. We propose segment transcription that incorporates music structure analysis and introduces metric-based self-similarity to analyze and transcribe music segments while adding musical information. This approach allows detailed and musically meaningful results beyond calculating similarity.
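Aligning timestamps with bar indices, as segment transcription requires, is straightforward once tempo and meter are known. A minimal sketch under assumed values (105 BPM and 4/4 meter, both hypothetical; real tracks need beat tracking to estimate them):

```python
import math

def time_to_bar(seconds, bpm, beats_per_bar=4):
    """Map a timestamp to a 1-based bar index, assuming constant tempo
    and meter (assumptions; real audio requires beat tracking)."""
    beats = seconds * bpm / 60.0
    return math.floor(beats / beats_per_bar) + 1

# With the assumed 105 BPM, an 8-second span maps onto a few bars
print(time_to_bar(35, bpm=105))  # 16
print(time_to_bar(43, bpm=105))  # 19
```

Mapping both tracks onto bar indices this way lets similarity be computed between musically aligned units rather than raw time windows.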
For instance, we could pinpoint that music A's first chorus segment (e.g., 00:35–00:43, 16th–19th bar) corresponds to music B's second chorus segment (e.g., 01:42–01:51, 44th–47th bar). We define this musical unit as a 'Segment'. This methodology enables detection of similar or plagiarized segments across larger datasets. To perform music segment transcription, we combined MIR technologies, including music source separation, beat tracking, and chord recognition, to obtain the necessary metadata and construct better structural representations for each segment. §.§ Music Plagiarism Detection Research on music plagiarism analysis employs various methodologies, including CNN-based approaches, bipartite graph-based methods, NLP-based methods using tokenization, and audio fingerprinting-based methods. However, most approaches focus on MIDI or MusicXML data, with limited methodologies using raw audio data. Yet when applying plagiarism detection to real-world scenarios, using raw audio data is essential. Cover Song Identification (CSI) can be considered a similar task to plagiarism detection, as it involves retrieving cover ve