| node_id (int64, 0-76.9k) | label (int64, 0-39) | text (string, 13-124k chars) | neighbors (list, 0-3.32k items) | mask (string, 4 classes) |
|---|---|---|---|---|
42,278 | 10 | Title: The Study of Highway for Lifelong Multi-Agent Path Finding
Abstract: In modern fulfillment warehouses, agents traverse the map to complete endless tasks that arrive on the fly, which is formulated as a lifelong Multi-Agent Path Finding (lifelong MAPF) problem. The goal of tackling this challenging problem is to find the path for each agent in a finite runtime while maximizing the throughput. However, existing methods encounter exponential growth of runtime and undesirable phenomena of deadlocks and rerouting as the map size or agent density grows. To address these challenges in lifelong MAPF, we explore the idea of highways mainly studied for one-shot MAPF (i.e., finding paths at once beforehand), which reduces the complexity of the problem by encouraging agents to move in the same direction. We utilize two methods to incorporate the highway idea into the lifelong MAPF framework and discuss the properties that minimize the existing problems of deadlocks and rerouting. The experimental results demonstrate that the runtime is considerably reduced and the decay of throughput is gradually insignificant as the map size enlarges under the settings of the highway. Furthermore, when the density of agents increases, the phenomena of deadlocks and rerouting are significantly reduced by leveraging the highway. | [] | Train |
42,279 | 16 | Title: INDigo: An INN-Guided Probabilistic Diffusion Algorithm for Inverse Problems
Abstract: Recently it has been shown that using diffusion models for inverse problems can lead to remarkable results. However, these approaches require a closed-form expression of the degradation model and cannot support complex degradations. To overcome this limitation, we propose a method (INDigo) that combines invertible neural networks (INN) and diffusion models for general inverse problems. Specifically, we train the forward process of INN to simulate an arbitrary degradation process and use the inverse as a reconstruction process. During the diffusion sampling process, we impose an additional data-consistency step that minimizes the distance between the intermediate result and the INN-optimized result at every iteration, where the INN-optimized image is composed of the coarse information given by the observed degraded image and the details generated by the diffusion process. With the help of INN, our algorithm effectively estimates the details lost in the degradation process and is no longer limited by the requirement of knowing the closed-form expression of the degradation model. Experiments demonstrate that our algorithm obtains competitive results compared with recently leading methods both quantitatively and visually. Moreover, our algorithm performs well on more complex degradation models and real-world low-quality images. | [] | Train |
42,280 | 24 | Title: Interpretable Medical Diagnostics with Structured Data Extraction by Large Language Models
Abstract: Tabular data is often hidden in text, particularly in medical diagnostic reports. Traditional machine learning (ML) models designed to work with tabular data cannot effectively process information in such form. On the other hand, large language models (LLMs), which excel at textual tasks, are probably not the best tool for modeling tabular data. Therefore, we propose a novel, simple, and effective methodology for extracting structured tabular data from textual medical reports, called TEMED-LLM. Drawing upon the reasoning capabilities of LLMs, TEMED-LLM goes beyond traditional extraction techniques, accurately inferring tabular features, even when their names are not explicitly mentioned in the text. This is achieved by combining domain-specific reasoning guidelines with a proposed data validation and reasoning correction feedback loop. By applying interpretable ML models such as decision trees and logistic regression over the extracted and validated data, we obtain end-to-end interpretable predictions. We demonstrate that our approach significantly outperforms state-of-the-art text classification models in medical diagnostics. Given its predictive performance, simplicity, and interpretability, TEMED-LLM underscores the potential of leveraging LLMs to improve the performance and trustworthiness of ML models in medical applications. | [] | Validation |
42,281 | 25 | Title: Multitrack Music Transcription with a Time-Frequency Perceiver
Abstract: Multitrack music transcription aims to transcribe a music audio input into the musical notes of multiple instruments simultaneously. It is a very challenging task that typically requires a more complex model to achieve a satisfactory result. In addition, prior works mostly focus on transcribing regular instruments while neglecting vocals, which are usually the most important signal source if present in a piece of music. In this paper, we propose a novel deep neural network architecture, Perceiver TF, to model the time-frequency representation of audio input for multitrack transcription. Perceiver TF augments the Perceiver architecture by introducing a hierarchical expansion with an additional Transformer layer to model temporal coherence. Accordingly, our model inherits the benefits of Perceiver, which possesses better scalability, allowing it to handle transcriptions of many instruments well in a single model. In experiments, we train a Perceiver TF to model 12 instrument classes as well as vocals in a multi-task learning manner. Our results demonstrate that the proposed system outperforms the state-of-the-art counterparts (e.g., MT3 and SpecTNT) on various public datasets. | [19137] | Test |
42,282 | 13 | Title: Training a spiking neural network on an event-based label-free flow cytometry dataset
Abstract: Imaging flow cytometry systems aim to analyze a huge number of cells or micro-particles based on their physical characteristics. The vast majority of current systems acquire a large amount of images which are used to train deep artificial neural networks. However, this approach increases both the latency and power consumption of the final apparatus. In this work-in-progress, we combine an event-based camera with a free-space optical setup to obtain spikes for each particle passing in a microfluidic channel. A spiking neural network is trained on the collected dataset, resulting in 97.7% mean training accuracy and 93.5% mean testing accuracy for the fully event-based classification pipeline. | [] | Train |
42,283 | 3 | Title: Towards Cross-Provider Analysis of Transparency Information for Data Protection
Abstract: Transparency and accountability are indispensable principles for modern data protection, from both legal and technical viewpoints. Regulations such as the GDPR, therefore, require specific transparency information to be provided including, e.g., purpose specifications, storage periods, or legal bases for personal data processing. However, it has repeatedly been shown that all too often, this information is practically hidden in legalese privacy policies, hindering data subjects from exercising their rights. This paper presents a novel approach to enable large-scale transparency information analysis across service providers, leveraging machine-readable formats and graph data science methods. More specifically, we propose a general approach for building a transparency analysis platform (TAP) that is used to identify data transfers empirically, provide evidence-based analyses of sharing clusters of more than 70 real-world data controllers, or even to simulate network dynamics using synthetic transparency information for large-scale data-sharing scenarios. We provide the general approach for advanced transparency information analysis, an open source architecture and implementation in the form of a queryable analysis platform, and versatile analysis examples. These contributions pave the way for more transparent data processing for data subjects, and evidence-based enforcement processes for data protection authorities. Future work can build upon our contributions to gain more insights into so-far hidden data-sharing practices. | [37571, 43716, 40541] | Validation |
42,284 | 3 | Title: Data-Centric Governance
Abstract: Artificial intelligence (AI) governance is the body of standards and practices used to ensure that AI systems are deployed responsibly. Current AI governance approaches consist mainly of manual review and documentation processes. While such reviews are necessary for many systems, they are not sufficient to systematically address all potential harms, as they do not operationalize governance requirements for system engineering, behavior, and outcomes in a way that facilitates rigorous and reproducible evaluation. Modern AI systems are data-centric: they act on data, produce data, and are built through data engineering. The assurance of governance requirements must also be carried out in terms of data. This work explores the systematization of governance requirements via datasets and algorithmic evaluations. When applied throughout the product lifecycle, data-centric governance decreases time to deployment, increases solution quality, decreases deployment risks, and places the system in a continuous state of assured compliance with governance requirements. | [] | Train |
42,285 | 30 | Title: Exploiting Pseudo Future Contexts for Emotion Recognition in Conversations
Abstract: With the extensive accumulation of conversational data on the Internet, emotion recognition in conversations (ERC) has received increasing attention. Previous efforts of this task mainly focus on leveraging contextual and speaker-specific features, or integrating heterogeneous external commonsense knowledge. Among them, some heavily rely on future contexts, which, however, are not always available in real-life scenarios. This fact inspires us to generate pseudo future contexts to improve ERC. Specifically, for an utterance, we generate its future context with pre-trained language models, potentially containing extra beneficial knowledge in a conversational form homogeneous with the historical ones. These characteristics make pseudo future contexts easily fused with historical contexts and historical speaker-specific contexts, yielding a conceptually simple framework systematically integrating multi-contexts. Experimental results on four ERC datasets demonstrate our method's superiority. Further in-depth analyses reveal that pseudo future contexts can rival real ones to some extent, especially in relatively context-independent conversations. | [6110] | Train |
42,286 | 16 | Title: MemeFier: Dual-stage Modality Fusion for Image Meme Classification
Abstract: Hate speech is a societal problem that has significantly grown through the Internet. New forms of digital content such as image memes have given rise to the spread of hate using multimodal means, being far more difficult to analyse and detect compared to the unimodal case. Accurate automatic processing, analysis and understanding of this kind of content will facilitate the endeavor of hindering hate speech proliferation through the digital world. To this end, we propose MemeFier, a deep learning-based architecture for fine-grained classification of Internet image memes, utilizing a dual-stage modality fusion module. The first fusion stage produces feature vectors containing modality alignment information that captures non-trivial connections between the text and image of a meme. The second fusion stage leverages the power of a Transformer encoder to learn inter-modality correlations at the token level and yield an informative representation. Additionally, we consider external knowledge as an additional input, and background image caption supervision as a regularizing component. Extensive experiments on three widely adopted benchmarks, i.e., Facebook Hateful Memes, Memotion7k and MultiOFF, indicate that our approach competes with and in some cases surpasses the state-of-the-art. Our code is available on GitHub. | [] | Train |
42,287 | 10 | Title: Personality testing of GPT-3: Limited temporal reliability, but highlighted social desirability of GPT-3's personality instruments results
Abstract: To assess the potential applications and limitations of chatbot GPT-3 Davinci-003, this study explored the temporal reliability of personality questionnaires applied to the chatbot and its personality profile. Psychological questionnaires were administered to the chatbot on two separate occasions, followed by a comparison of the responses to human normative data. The findings revealed varying levels of agreement in the chatbot's responses over time, with some scales displaying excellent agreement while others demonstrated poor agreement. Overall, Davinci-003 displayed a socially desirable and pro-social personality profile, particularly in the domain of communion. However, the underlying basis of the chatbot's responses, whether driven by conscious self-reflection or predetermined algorithms, remains uncertain. | [36528, 12602, 37516, 31238] | Train |
42,288 | 30 | Title: Exploring Social Media for Early Detection of Depression in COVID-19 Patients
Abstract: The COVID-19 pandemic has caused substantial damage to global health. Even though three years have passed, the world continues to struggle with the virus. Concerns are growing about the impact of COVID-19 on the mental health of infected individuals, who are more likely to experience depression, which can have long-lasting consequences for both the affected individuals and the world. Detection and intervention at an early stage can reduce the risk of depression in COVID-19 patients. In this paper, we investigated the relationship between COVID-19 infection and depression through social media analysis. Firstly, we managed a dataset of COVID-19 patients that contains information about their social media activity both before and after infection. Secondly, we conducted an extensive analysis of this dataset to investigate the characteristics of COVID-19 patients with a higher risk of depression. Thirdly, we proposed a deep neural network for early prediction of depression risk. This model considers daily mood swings as a psychiatric signal and incorporates textual and emotional characteristics via knowledge distillation. Experimental results demonstrate that our proposed framework outperforms baselines in detecting depression risk, with an AUROC of 0.9317 and an AUPRC of 0.8116. Our model has the potential to enable public health organizations to initiate prompt intervention with high-risk patients. | [44444] | Train |
42,289 | 10 | Title: feather - a Python SDK to share and deploy models
Abstract: At its core, feather was a tool that allowed model developers to build shareable user interfaces for their models in under 20 lines of code. Using the Python SDK, developers specified visual components that users would interact with (e.g., a FileUpload component to allow users to upload a file). Our service then provided 1) a URL that allowed others to access and use the model visually via a user interface; 2) an API endpoint to allow programmatic requests to a model. In this paper, we discuss feather's motivations and the value we intended to offer AI researchers and developers. For example, the SDK can support multi-step models and can be extended to run automatic evaluation against held out datasets. We additionally provide comprehensive technical and implementation details. N.B. feather is presently a dormant project. We have open sourced our code for research purposes: https://github.com/feather-ai/ | [] | Train |
42,290 | 21 | Title: Towards Mobility Data Science (Vision Paper)
Abstract: Mobility data captures the locations of moving objects such as humans, animals, and cars. With the availability of GPS-equipped mobile devices and other inexpensive location-tracking technologies, mobility data is collected ubiquitously. In recent years, the use of mobility data has demonstrated significant impact in various domains including traffic management, urban planning, and health sciences. In this paper, we present the emerging domain of mobility data science. Towards a unified approach to mobility data science, we envision a pipeline having the following components: mobility data collection, cleaning, analysis, management, and privacy. For each of these components, we explain how mobility data science differs from general data science, survey the current state of the art, and describe open challenges for the research community in the coming years. | [41341] | Train |
42,291 | 34 | Title: On the Unreasonable Effectiveness of Single Vector Krylov Methods for Low-Rank Approximation
Abstract: Krylov subspace methods are a ubiquitous tool for computing near-optimal rank $k$ approximations of large matrices. While "large block" Krylov methods with block size at least $k$ give the best known theoretical guarantees, block size one (a single vector) or a small constant is often preferred in practice. Despite their popularity, we lack theoretical bounds on the performance of such "small block" Krylov methods for low-rank approximation. We address this gap between theory and practice by proving that small block Krylov methods essentially match all known low-rank approximation guarantees for large block methods. Via a black-box reduction we show, for example, that the standard single vector Krylov method run for $t$ iterations obtains the same spectral norm and Frobenius norm error bounds as a Krylov method with block size $\ell \geq k$ run for $O(t/\ell)$ iterations, up to a logarithmic dependence on the smallest gap between sequential singular values. That is, for a given number of matrix-vector products, single vector methods are essentially as effective as any choice of large block size. By combining our result with tail-bounds on eigenvalue gaps in random matrices, we prove that the dependence on the smallest singular value gap can be eliminated if the input matrix is perturbed by a small random matrix. Further, we show that single vector methods match the more complex algorithm of [Bakshi et al. '22], which combines the results of multiple block sizes to achieve an improved algorithm for Schatten $p$-norm low-rank approximation. | [40546] | Train |
42,292 | 27 | Title: Quality Diversity under Sparse Reward and Sparse Interaction: Application to Grasping in Robotics
Abstract: Quality-Diversity (QD) methods are algorithms that aim to generate a set of diverse and high-performing solutions to a given problem. Originally developed for evolutionary robotics, most QD studies are conducted on a limited set of domains - mainly applied to locomotion, where the fitness and the behavior signal are dense. Grasping is a crucial task for manipulation in robotics. Despite the efforts of many research communities, this task is yet to be solved. Grasping cumulates unprecedented challenges in QD literature: it suffers from reward sparsity, behavioral sparsity, and behavior space misalignment. The present work studies how QD can address grasping. Experiments have been conducted on 15 different methods on 10 grasping domains, corresponding to 2 different robot-gripper setups and 5 standard objects. An evaluation framework that distinguishes the evaluation of an algorithm from its internal components has also been proposed for a fair comparison. The obtained results show that MAP-Elites variants that select successful solutions in priority outperform all the compared methods on the studied metrics by a large margin. We also found experimental evidence that sparse interaction can lead to deceptive novelty. To our knowledge, the ability to efficiently produce examples of grasping trajectories demonstrated in this work has no precedent in the literature. | [3309, 33006, 28947, 8214, 38841, 23261, 24766] | Validation |
42,293 | 24 | Title: Counterfactual Explanations for Graph Classification Through the Lenses of Density
Abstract: Counterfactual examples have emerged as an effective approach to produce simple and understandable post-hoc explanations. In the context of graph classification, previous work has focused on generating counterfactual explanations by manipulating the most elementary units of a graph, i.e., removing an existing edge, or adding a non-existing one. In this paper, we claim that such language of explanation might be too fine-grained, and turn our attention to some of the main characterizing features of real-world complex networks, such as the tendency to close triangles, the existence of recurring motifs, and the organization into dense modules. We thus define a general density-based counterfactual search framework to generate instance-level counterfactual explanations for graph classifiers, which can be instantiated with different notions of dense substructures. In particular, we show two specific instantiations of this general framework: a method that searches for counterfactual graphs by opening or closing triangles, and a method driven by maximal cliques. We also discuss how the general method can be instantiated to exploit any other notion of dense substructures, including, for instance, a given taxonomy of nodes. We evaluate the effectiveness of our approaches in 7 brain network datasets and compare the counterfactual statements generated according to several widely-used metrics. Results confirm that adopting a semantic-relevant unit of change like density is essential to define versatile and interpretable counterfactual explanation methods. | [] | Train |
42,294 | 24 | Title: Between-Sample Relationship in Learning Tabular Data Using Graph and Attention Networks
Abstract: Traditional machine learning assumes samples in tabular data to be independent and identically distributed (i.i.d). This assumption may miss useful information within and between sample relationships in representation learning. This paper relaxes the i.i.d assumption to learn tabular data representations by incorporating between-sample relationships for the first time using graph neural networks (GNN). We investigate our hypothesis using several GNNs and state-of-the-art (SOTA) deep attention models to learn the between-sample relationship on ten tabular data sets by comparing them to traditional machine learning methods. GNN methods show the best performance on tabular data with large feature-to-sample ratios. Our results reveal that attention-based GNN methods outperform traditional machine learning on five data sets and SOTA deep tabular learning methods on three data sets. Between-sample learning via GNN and deep attention methods yield the best classification accuracy on seven of the ten data sets. This suggests that the i.i.d assumption may not always hold for most tabular data sets. | [] | Train |
42,295 | 23 | Title: Syntax and Domain Aware Model for Unsupervised Program Translation
Abstract: There is growing interest in software migration as software and society develop. Manually migrating projects between languages is error-prone and expensive. In recent years, researchers have begun to explore automatic program translation using supervised deep learning techniques by learning from large-scale parallel code corpora. However, parallel resources are scarce in the programming language domain, and it is costly to collect bilingual data manually. To address this issue, several unsupervised programming translation systems have been proposed. However, these systems still rely on huge amounts of monolingual source code for training, which is very expensive. Besides, these models cannot perform well for translating languages that are not seen during the pre-training procedure. In this paper, we propose SDA-Trans, a syntax and domain-aware model for program translation, which leverages the syntax structure and domain knowledge to enhance the cross-lingual transfer ability. SDA-Trans adopts unsupervised training on a smaller-scale corpus, including Python and Java monolingual programs. The experimental results on function translation tasks between Python, Java, and C++ show that SDA-Trans outperforms many large-scale pre-trained models, especially for unseen language translation. | [30242, 42523] | Test |
42,296 | 4 | Title: TimeClave: Oblivious In-enclave Time series Processing System
Abstract: Cloud platforms are widely adopted by many systems, such as time series processing systems, to store and process massive amounts of sensitive time series data. Unfortunately, several incidents have shown that cloud platforms are vulnerable to internal and external attacks that lead to critical data breaches. Adopting cryptographic protocols such as homomorphic encryption and secure multi-party computation adds high computational and network overhead to query operations. We present TimeClave, a fully oblivious in-enclave time series processing system: TimeClave leverages Intel SGX to support aggregate statistics on time series with minimal memory consumption inside the enclave. To hide the access pattern inside the enclave, we introduce a non-blocking read-optimised ORAM named RoORAM. TimeClave integrates RoORAM to obliviously and securely handle client queries with high performance. With an aggregation time interval of $10s$, $2^{14}$ summarised data blocks and 8 aggregate functions, TimeClave runs a point query in $0.03ms$ and a range query of 50 intervals in $0.46ms$. Compared to the ORAM baseline, TimeClave achieves lower query latency by up to $2.5\times$ and up to $2\times$ throughput, with up to 22K queries per second. | [] | Train |
42,297 | 28 | Title: Joint Beamforming and Device Selection in Federated Learning with Over-the-air Aggregation
Abstract: Federated Learning (FL) with over-the-air computation is susceptible to analog aggregation error due to channel conditions and noise. Excluding devices with weak channels can reduce the aggregation error, but also decreases the amount of training data in FL. In this work, we jointly design the uplink receiver beamforming and device selection in over-the-air FL to maximize the training convergence rate. We propose a new method termed JBFDS, which takes into account the impact of receiver beamforming and device selection on the global loss function at each training round. Our simulation results with real-world image classification demonstrate that the proposed method achieves faster convergence with significantly lower computational complexity than existing alternatives. | [] | Train |
42,298 | 13 | Title: Towards Rigorous Understanding of Neural Networks via Semantics-preserving Transformations
Abstract: In this paper we present an algebraic approach to the precise and global verification and explanation of Rectifier Neural Networks, a subclass of Piece-wise Linear Neural Networks (PLNNs), i.e., networks that semantically represent piece-wise affine functions. Key to our approach is the symbolic execution of these networks that allows the construction of semantically equivalent Typed Affine Decision Structures (TADS). Due to their deterministic and sequential nature, TADS can, similarly to decision trees, be considered as white-box models and therefore as precise solutions to the model and outcome explanation problem. TADS are linear algebras, which allows one to elegantly compare Rectifier Networks for equivalence or similarity, both with precise diagnostic information in case of failure, and to characterize their classification potential by precisely characterizing the set of inputs that are specifically classified or the set of inputs where two network-based classifiers differ. All phenomena are illustrated along a detailed discussion of a minimal, illustrative example: the continuous XOR function. | [42467, 38325] | Test |
42,299 | 24 | Title: Perspectives on AI Architectures and Co-design for Earth System Predictability
Abstract: Recently, the U.S. Department of Energy (DOE), Office of Science, Biological and Environmental Research (BER), and Advanced Scientific Computing Research (ASCR) programs organized and held the Artificial Intelligence for Earth System Predictability (AI4ESP) workshop series. From this workshop, a critical conclusion that the DOE BER and ASCR community came to is the requirement to develop a new paradigm for Earth system predictability focused on enabling artificial intelligence (AI) across the field, lab, modeling, and analysis activities, called ModEx. The BER's "Model-Experimentation", ModEx, is an iterative approach that enables process models to generate hypotheses. The developed hypotheses inform field and laboratory efforts to collect measurement and observation data, which are subsequently used to parameterize, drive, and test model (e.g., process-based) predictions. A total of 17 technical sessions were held in this AI4ESP workshop series. This paper discusses the topic of the "AI Architectures and Co-design" session and associated outcomes. The AI Architectures and Co-design session included two invited talks, two plenary discussion panels, and three breakout rooms that covered specific topics, including: (1) DOE HPC Systems, (2) Cloud HPC Systems, and (3) Edge computing and Internet of Things (IoT). We also provide forward-looking ideas and perspectives on potential research in this co-design area that can be achieved by synergies with the other 16 session topics. These ideas include topics such as: (1) reimagining co-design, (2) data acquisition to distribution, (3) heterogeneous HPC solutions for integration of AI/ML and other data analytics like uncertainty quantification with earth system modeling and simulation, and (4) AI-enabled sensor integration into earth system measurements and observations. Such perspectives are a distinguishing aspect of this paper. | [] | Train |
42,300 | 16 | Title: Self-Supervised Learning for Point Clouds Data: A Survey
Abstract: 3D point clouds are a crucial type of data collected by LiDAR sensors and widely used in transportation applications due to their concise descriptions and accurate localization. Deep neural networks (DNNs) have achieved remarkable success in processing large amounts of disordered and sparse 3D point clouds, especially in various computer vision tasks, such as pedestrian detection and vehicle recognition. Among all the learning paradigms, Self-Supervised Learning (SSL), an unsupervised training paradigm that mines effective information from the data itself, is considered as an essential solution to solve the time-consuming and labor-intensive data labelling problems via smart pre-training task design. This paper provides a comprehensive survey of recent advances on SSL for point clouds. We first present an innovative taxonomy, categorizing the existing SSL methods into four broad categories based on the pretexts' characteristics. Under each category, we then further categorize the methods into more fine-grained groups and summarize the strengths and limitations of the representative methods. We also compare the performance of the notable SSL methods in literature on multiple downstream tasks on benchmark datasets both quantitatively and qualitatively. Finally, we propose a number of future research directions based on the identified limitations of existing SSL research on point clouds. | [28652] | Train |
42,301 | 28 | Title: Low-rank Matrix Sensing With Dithered One-Bit Quantization
Abstract: We explore the impact of coarse quantization on low-rank matrix sensing in the extreme scenario of dithered one-bit sampling, where the high-resolution measurements are compared with random time-varying threshold levels. To recover the low-rank matrix of interest from the highly-quantized collected data, we offer an enhanced randomized Kaczmarz algorithm that efficiently solves the emerging highly-overdetermined feasibility problem. Additionally, we provide theoretical guarantees in terms of the convergence and sample size requirements. Our numerical results demonstrate the effectiveness of the proposed methodology. | [] | Train |
42,302 | 16 | Title: Spatially and Spectrally Consistent Deep Functional Maps
Abstract: Cycle consistency has long been exploited as a powerful prior for jointly optimizing maps within a collection of shapes. In this paper, we investigate its utility in the approaches of Deep Functional Maps, which are considered state-of-the-art in non-rigid shape matching. We first justify that under certain conditions, the learned maps, when represented in the spectral domain, are already cycle consistent. Furthermore, we identify the discrepancy that spectrally consistent maps are not necessarily spatially, or point-wise, consistent. In light of this, we present a novel design of unsupervised Deep Functional Maps, which effectively enforces the harmony of learned maps under the spectral and the point-wise representation. By taking advantage of cycle consistency, our framework produces state-of-the-art results in mapping shapes even under significant distortions. Beyond that, by independently estimating maps in both spectral and spatial domains, our method naturally alleviates over-fitting in network training, yielding superior generalization performance and accuracy within an array of challenging tests for both near-isometric and non-isometric datasets. Codes are available at https://github.com/rqhuang88/Spatiallyand-Spectrally-Consistent-Deep-Functional-Maps. | [] | Train |
42,303 | 16 | Title: A denoised Mean Teacher for domain adaptive point cloud registration
Abstract: Point cloud-based medical registration promises increased computational efficiency, robustness to intensity shifts, and anonymity preservation but is limited by the inefficacy of unsupervised learning with similarity metrics. Supervised training on synthetic deformations is an alternative but, in turn, suffers from the domain gap to the real domain. In this work, we aim to tackle this gap through domain adaptation. Self-training with the Mean Teacher is an established approach to this problem but is impaired by the inherent noise of the pseudo labels from the teacher. As a remedy, we present a denoised teacher-student paradigm for point cloud registration, comprising two complementary denoising strategies. First, we propose to filter pseudo labels based on the Chamfer distances of teacher and student registrations, thus preventing detrimental supervision by the teacher. Second, we make the teacher dynamically synthesize novel training pairs with noise-free labels by warping its moving inputs with the predicted deformations. Evaluation is performed for inhale-to-exhale registration of lung vessel trees on the public PVT dataset under two domain shifts. Our method surpasses the baseline Mean Teacher by 13.5/62.8%, consistently outperforms diverse competitors, and sets a new state-of-the-art accuracy (TRE=2.31mm). Code is available at https://github.com/multimodallearning/denoised_mt_pcd_reg. | [] | Train |
42,304 | 6 | Title: Leveraging Mobile Sensing Technology for Societal Change Towards more Sustainable Behavior
Abstract: A pro-environmental attitude in the general population is essential to combat climate change. Society as a whole has the power to change economic processes through market demands and to exert pressure on policymakers - both are key social factors that currently undermine the goals of decarbonization. Creating long-lasting, sustainable attitudes is challenging, and behavior change technologies struggle to overcome their limitations. Environmental psychology proposes social factors to be relevant, among others creating a global identity feeling and widening one's view beyond one's own bubble. From our experience in the field of mobile sensing and psychometric data inferences, we see strong potential in mobile sensing technologies to implement the aforementioned goals. We present concrete ideas in this paper, aiming to refine and extend them with the workshop and evaluate them afterward. | [39610] | Train |
42,305 | 32 | Title: Diddy: a Python toolbox for infinite discrete dynamical systems
Abstract: We introduce Diddy, a collection of Python scripts for analyzing infinite discrete dynamical systems. The main focus is on generalized multidimensional shifts of finite type (SFTs). We show how Diddy can be used to easily define SFTs and cellular automata, and analyze their basic properties. We also showcase how to verify or rediscover some results from coding theory and cellular automata theory. | [] | Train |
42,306 | 10 | Title: Over-Squashing in Graph Neural Networks: A Comprehensive survey
Abstract: Graph Neural Networks (GNNs) have emerged as a revolutionary paradigm in the realm of machine learning, offering a transformative approach to dissect intricate relationships inherent in graph-structured data. The foundational architecture of most GNNs involves the dissemination of information through message aggregation and transformation among interconnected nodes, a mechanism that has demonstrated remarkable efficacy across diverse applications encompassing node classification, link prediction, and recommendation systems. However, their potential effectiveness is constrained in situations demanding comprehensive contextual understanding. In certain contexts, accurate predictions hinge not only upon a node's immediate local surroundings but also on interactions spanning far-reaching domains. This intricate demand for long-range information dissemination exposes a pivotal challenge recognized as "over-squashing," wherein the fidelity of information flow from distant nodes becomes distorted. This phenomenon significantly curtails the efficiency of message-passing mechanisms, particularly for tasks reliant on intricate long-distance interactions. This survey article comprehensively explores the multifaceted issue of over-squashing in GNNs, offering a structured analysis of the underlying causes, consequences, and state-of-the-art mitigation strategies. To address this challenge, we review an array of methodologies proposed by the research community. These methods encompass graph rewiring techniques that modify the graph topology to enhance information flow, and innovative strategies that leverage spectral analysis and curvature-based insights. Furthermore, this survey highlights the interconnectedness of over-squashing with other fundamental limitations in GNNs, such as over-smoothing, and discusses recent developments in addressing these challenges simultaneously. | [2656, 39331, 28263, 26770, 15414, 40890, 10555, 42942] | Train |
42,307 | 27 | Title: Deep Reinforcement Learning Based Framework for Mobile Energy Disseminator Dispatching to Charge On-the-Road Electric Vehicles
Abstract: The exponential growth of electric vehicles (EVs) presents novel challenges in preserving battery health and in addressing the persistent problem of vehicle range anxiety. To address these concerns, wireless charging, particularly, Mobile Energy Disseminators (MEDs) have emerged as a promising solution. The MED is mounted behind a large vehicle and charges all participating EVs within a radius upstream of it. Unfortunately, during such V2V charging, the MED and EVs inadvertently form platoons, thereby occupying multiple lanes and impairing overall corridor travel efficiency. In addition, constrained budgets for MED deployment necessitate the development of an effective dispatching strategy to determine optimal timing and locations for introducing the MEDs into traffic. This paper proposes a deep reinforcement learning (DRL) based methodology to develop a vehicle dispatching framework. In the first component of the framework, we develop a realistic reinforcement learning environment termed "ChargingEnv" which incorporates a reliable charging simulation system that accounts for common practical issues in wireless charging deployment, specifically, the charging panel misalignment. The second component, the Proximal-Policy Optimization (PPO) agent, is trained to control MED dispatching through continuous interactions with ChargingEnv. Numerical experiments were carried out to demonstrate the efficacy of the proposed MED deployment decision processor. The experiment results suggest that the proposed model can significantly enhance EV travel range while efficiently deploying an optimal number of MEDs. The proposed model is found to be not only practical in its applicability but also promising in its real-world effectiveness. The proposed model can help travelers to maximize EV range and help road agencies or private-sector vendors to manage the deployment of MEDs efficiently. | [] | Test |
42,308 | 24 | Title: Dealing with Cross-Task Class Discrimination in Online Continual Learning
Abstract: Existing continual learning (CL) research regards catastrophic forgetting (CF) as almost the only challenge. This paper argues for another challenge in class-incremental learning (CIL), which we call cross-task class discrimination (CTCD), i.e., how to establish decision boundaries between the classes of the new task and old tasks with no (or limited) access to the old task data. CTCD is implicitly and partially dealt with by replay-based methods. A replay method saves a small amount of data (replay data) from previous tasks. When a batch of current task data arrives, the system jointly trains the new data and some sampled replay data. The replay data enables the system to partially learn the decision boundaries between the new classes and the old classes as the amount of the saved data is small. However, this paper argues that the replay approach also has a dynamic training bias issue which reduces the effectiveness of the replay data in solving the CTCD problem. A novel optimization objective with a gradient-based adaptive method is proposed to dynamically deal with the problem in the online CL process. Experimental results show that the new method achieves much better results in online CL. | [24011] | Train |
42,309 | 24 | Title: Using Geographic Location-based Public Health Features in Survival Analysis
Abstract: Time elapsed till an event of interest is often modeled using the survival analysis methodology, which estimates a survival score based on the input features. There is a resurgence of interest in developing more accurate prediction models for time-to-event prediction in personalized healthcare using modern tools such as neural networks. Higher quality features and more frequent observations improve the predictions for a patient; however, the impact of including a patient's geographic location-based public health statistics on individual predictions has not been studied. This paper proposes a complementary improvement to survival analysis models by incorporating public health statistics in the input features. We show that including geographic location-based public health information results in a statistically significant improvement in the concordance index evaluated on the Surveillance, Epidemiology, and End Results (SEER) dataset containing nationwide cancer incidence data. The improvement holds for both the standard Cox proportional hazards model and the state-of-the-art Deep Survival Machines model. Our results indicate the utility of geographic location-based public health features in survival analysis. | [] | Validation |
42,310 | 4 | Title: Privacy-Preserving Electricity Theft Detection Based on Blockchain
Abstract: In most electricity theft detection schemes, consumers’ power consumption data is directly input into the detection center. Although it is valid in detecting the theft of consumers, the privacy of all consumers is at risk unless the detection center is assumed to be trusted. In fact, it is impractical. Moreover, existing schemes may result in some security problems, such as the collusion attack due to the presence of a trusted third party, and malicious data tampering caused by the system operator (SO) being attacked. Aiming at the problems above, we propose a blockchain-based privacy-preserving electricity theft detection scheme without a third party. Specifically, the proposed scheme uses an improved functional encryption scheme to enable electricity theft detection and load monitoring while preserving consumers’ privacy; distributed storage of consumers’ data with blockchain to resolve security problems such as data tampering, etc. Meanwhile, we build a long short-term memory network (LSTM) model to perform higher accuracy for electricity theft detection. The proposed scheme is evaluated in a real environment, and the results show that it is more accurate in electricity theft detection within acceptable communication and computational overhead. Our system analysis demonstrates that the proposed scheme can resist various security attacks and preserve consumers’ privacy. | [] | Train |
42,311 | 16 | Title: 3D Face Alignment Through Fusion of Head Pose Information and Features
Abstract: The ability of humans to infer head poses from face shapes, and vice versa, indicates a strong correlation between the two. Accordingly, recent studies on face alignment have employed head pose information to predict facial landmarks in computer vision tasks. In this study, we propose a novel method that employs head pose information to improve face alignment performance by fusing said information with the feature maps of a face alignment network, rather than simply using it to initialize facial landmarks. Furthermore, the proposed network structure performs robust face alignment through a dual-dimensional network using multidimensional features represented by 2D feature maps and a 3D heatmap. For effective dense face alignment, we also propose a prediction method for facial geometric landmarks through training based on knowledge distillation using predicted keypoints. We experimentally assessed the correlation between the predicted facial landmarks and head pose information, as well as variations in the accuracy of facial landmarks with respect to the quality of head pose information. In addition, we demonstrated the effectiveness of the proposed method through a competitive performance comparison with state-of-the-art methods on the AFLW2000-3D, AFLW, and BIWI datasets. | [7908] | Validation |
42,312 | 16 | Title: Meta-hallucinator: Towards Few-Shot Cross-Modality Cardiac Image Segmentation
Abstract: nan | [17340] | Train |
42,313 | 30 | Title: UATTA-EB: Uncertainty-Aware Test-Time Augmented Ensemble of BERTs for Classifying Common Mental Illnesses on Social Media Posts
Abstract: Given the current state of the world, millions of people suffering from mental illnesses feel isolated and unable to receive help in person. Psychological studies have shown that our state of mind can manifest itself in the linguistic features we use to communicate. People have increasingly turned to online platforms to express themselves and seek help with their conditions. Deep learning methods have been commonly used to identify and analyze mental health conditions from various sources of information, including social media. Still, they face challenges, including a lack of reliability and overconfidence in predictions resulting in the poor calibration of the models. To solve these issues, we propose UATTA-EB: Uncertainty-Aware Test-Time Augmented Ensembling of BERTs for producing reliable and well-calibrated predictions to classify six possible types of mental illnesses - None, Depression, Anxiety, Bipolar Disorder, ADHD, and PTSD - by analyzing unstructured user data on Reddit. | [] | Test |
42,314 | 16 | Title: Cascaded Local Implicit Transformer for Arbitrary-Scale Super-Resolution
Abstract: Implicit neural representation has recently shown a promising ability in representing images with arbitrary resolutions. In this paper, we present a Local Implicit Transformer (LIT), which integrates the attention mechanism and frequency encoding technique into a local implicit image function. We design a cross-scale local attention block to effectively aggregate local features and a local frequency encoding block to combine positional encoding with Fourier domain information for constructing high-resolution images. To further improve representative power, we propose a Cascaded LIT (CLIT) that exploits multi-scale features, along with a cumulative training strategy that gradually increases the upsampling scales during training. We have conducted extensive experiments to validate the effectiveness of these components and analyze various training strategies. The qualitative and quantitative results demonstrate that LIT and CLIT achieve favorable results and outperform the prior works in arbitrary super-resolution tasks. | [5878] | Validation |
42,315 | 10 | Title: Beyond Known Reality: Exploiting Counterfactual Explanations for Medical Research
Abstract: The field of explainability in artificial intelligence has witnessed a growing number of studies and increasing scholarly interest. However, the lack of human-friendly and individual interpretations in explaining the outcomes of machine learning algorithms has significantly hindered the acceptance of these methods by clinicians in their research and clinical practice. To address this, our study employs counterfactual explanations to explore "what if?" scenarios in medical research, aiming to expand our understanding beyond existing boundaries on magnetic resonance imaging (MRI) features for diagnosing pediatric posterior fossa brain tumors. In our case study, the proposed concept provides a novel way to examine alternative decision-making scenarios that offer personalized and context-specific insights, enabling the validation of predictions and clarification of variations under diverse circumstances. Additionally, we explore the potential use of counterfactuals for data augmentation and evaluate their feasibility as an alternative approach in our medical research case. The results demonstrate the promising potential of using counterfactual explanations to enhance trust and acceptance of AI-driven methods in clinical research. | [33345] | Train |
42,316 | 24 | Title: A New PHO-rmula for Improved Performance of Semi-Structured Networks
Abstract: Recent advances to combine structured regression models and deep neural networks for better interpretability, more expressiveness, and statistically valid uncertainty quantification demonstrate the versatility of semi-structured neural networks (SSNs). We show that techniques to properly identify the contributions of the different model components in SSNs, however, lead to suboptimal network estimation, slower convergence, and degenerated or erroneous predictions. In order to solve these problems while preserving favorable model properties, we propose a non-invasive post-hoc orthogonalization (PHO) that guarantees identifiability of model components and provides better estimation and prediction quality. Our theoretical findings are supported by numerical experiments, a benchmark comparison as well as a real-world application to COVID-19 infections. | [] | Train |
42,317 | 23 | Title: Black-box Adversarial Example Attack towards FCG Based Android Malware Detection under Incomplete Feature Information
Abstract: The function call graph (FCG) based Android malware detection methods have recently attracted increasing attention due to their promising performance. However, these methods are susceptible to adversarial examples (AEs). In this paper, we design a novel black-box AE attack towards the FCG based malware detection system, called BagAmmo. To mislead its target system, BagAmmo purposefully perturbs the FCG feature of malware through inserting "never-executed" function calls into malware code. The main challenges are two-fold. First, the malware functionality should not be changed by adversarial perturbation. Second, the information of the target system (e.g., the graph feature granularity and the output probabilities) is absent. To preserve malware functionality, BagAmmo employs the try-catch trap to insert function calls to perturb the FCG of malware. Without the knowledge about feature granularity and output probabilities, BagAmmo adopts the architecture of generative adversarial network (GAN), and leverages a multi-population co-evolution algorithm (i.e., Apoem) to generate the desired perturbation. Every population in Apoem represents a possible feature granularity, and the real feature granularity can be achieved when Apoem converges. Through extensive experiments on over 44k Android apps and 32 target models, we evaluate the effectiveness, efficiency and resilience of BagAmmo. BagAmmo achieves an average attack success rate of over 99.9% on MaMaDroid, APIGraph and GCN, and still performs well in the scenario of concept drift and data imbalance. Moreover, BagAmmo outperforms the state-of-the-art attack SRL in attack success rate. | [29305] | Train |
42,318 | 23 | Title: Towards Concrete and Connected AI Risk Assessment (C2AIRA): A Systematic Mapping Study
Abstract: The rapid development of artificial intelligence (AI) has led to increasing concerns about the capability of AI systems to make decisions and behave responsibly. Responsible AI (RAI) refers to the development and use of AI systems that benefit humans, society, and the environment while minimising the risk of negative consequences. To ensure responsible AI, the risks associated with AI systems' development and use must be identified, assessed and mitigated. Various AI risk assessment frameworks have been released recently by governments, organisations, and companies. However, it can be challenging for AI stakeholders to have a clear picture of the available frameworks and determine the most suitable ones for a specific context. Additionally, there is a need to identify areas that require further research or development of new frameworks, as well as updating and maintaining existing ones. To fill the gap, we present a mapping study of 16 existing AI risk assessment frameworks from the industry, governments, and non-government organizations (NGOs). We identify key characteristics of each framework and analyse them in terms of RAI principles, stakeholders, system lifecycle stages, geographical locations, targeted domains, and assessment methods. Our study provides a comprehensive analysis of the current state of the frameworks and highlights areas of convergence and divergence among them. We also identify the deficiencies in existing frameworks and outline the essential characteristics of a concrete and connected AI risk assessment (C2AIRA) framework. Our findings and insights can help relevant stakeholders choose suitable AI risk assessment frameworks and guide the design of future frameworks towards concreteness and connectedness. | [4744, 33962] | Validation |
42,319 | 30 | Title: Probing Brain Context-Sensitivity with Masked-Attention Generation
Abstract: Two fundamental questions in neurolinguistics concern the brain regions that integrate information beyond the lexical level, and the size of their window of integration. To address these questions, we introduce a new approach named masked-attention generation. It uses GPT-2 transformers to generate word embeddings that capture a fixed amount of contextual information. We then tested whether these embeddings could predict fMRI brain activity in humans listening to naturalistic text. The results showed that most of the cortex within the language network is sensitive to contextual information, and that the right hemisphere is more sensitive to longer contexts than the left. Masked-attention generation supports previous analyses of context-sensitivity in the brain, and complements them by quantifying the window size of context integration per voxel. | [29930] | Test |
42,320 | 28 | Title: Secrecy Analysis for MISO Broadcast Systems with Regularized Zero-Forcing Precoding
Abstract: —As an effective way to enhance the physical layer security (PLS) for the broadcast channel (BC), regularized zero- forcing (RZF) precoding has attracted much attention. However, the reliability performance, i.e., secrecy outage probability (SOP), of RZF is not well investigated in the literature. In this paper, we characterize the secrecy performance of RZF precoding in the large multiple-input single-output (MISO) broadcast system. For this purpose, we first consider a central limit theorem (CLT) for the joint distribution of the users’ signal-to-interference-plus-noise ratio (SINR) and the eavesdropper’s (Eve’s) signal-to-noise ratio (ESNR) by leveraging random matrix theory (RMT). The result is then utilized to obtain a closed-form approximation for the ergodic secrecy rate (ESR) and SOP of three typical scenarios: the case with only external Eves, the case with only internal Eves, and that with both. The derived results are then used to evaluate the percentage of users in secrecy outage and the required number of transmit antennas to achieve a positive secrecy rate. It is shown that, with equally-capable Eves, the secrecy loss caused by external Eves is higher than that caused by internal Eves. Numerical simulations validate the accuracy of the theoretical results. | [
24505
] | Test |
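For readers unfamiliar with RZF precoding, the following hedged numpy sketch (assuming i.i.d. Gaussian channels and unit noise power; not the paper's large-system analysis) shows the precoder and the per-user SINR it induces.

```python
# Minimal Monte-Carlo sketch: RZF precoding in a MISO broadcast channel and
# the resulting per-user SINR. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, K, alpha, sigma2 = 64, 16, 0.1, 1.0  # antennas, users, regularizer, noise power

H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

# RZF precoder: P = H^H (H H^H + alpha * I)^(-1), normalized to unit total power
P = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
P /= np.linalg.norm(P, "fro")

G = H @ P  # effective channel: G[k, j] = gain of stream j at user k
signal = np.abs(np.diag(G)) ** 2
interference = np.sum(np.abs(G) ** 2, axis=1) - signal
sinr = signal / (interference + sigma2)
print("per-user SINR (dB):", 10 * np.log10(sinr))
```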
42,321 | 16 | Title: Plug the Leaks: Advancing Audio-driven Talking Face Generation by Preventing Unintended Information Flow
Abstract: Audio-driven talking face generation is the task of creating a lip-synchronized, realistic face video from given audio and reference frames. This involves two major challenges: overall visual quality of generated images on the one hand, and audio-visual synchronization of the mouth part on the other hand. In this paper, we start by identifying several problematic aspects of synchronization methods in recent audio-driven talking face generation approaches. Specifically, this involves unintended flow of lip and pose information from the reference to the generated image, as well as instabilities during model training. Subsequently, we propose various techniques for obviating these issues: First, a silent-lip reference image generator prevents leaking of lips from the reference to the generated image. Second, an adaptive triplet loss handles the pose leaking problem. Finally, we propose a stabilized formulation of synchronization loss, circumventing aforementioned training instabilities while additionally further alleviating the lip leaking issue. Combining the individual improvements, we present state-of-the-art performance on LRS2 and LRW in both synchronization and visual quality. We further validate our design in various ablation experiments, confirming the individual contributions as well as their complementary effects. | [] | Validation |
42,322 | 16 | Title: SFD2: Semantic-Guided Feature Detection and Description
Abstract: Visual localization is a fundamental task for various applications including autonomous driving and robotics. Prior methods focus on extracting large amounts of often redundant locally reliable features, resulting in limited efficiency and accuracy, especially in large-scale environments under challenging conditions. Instead, we propose to extract globally reliable features by implicitly embedding high-level semantics into both the detection and description processes. Specifically, our semantic-aware detector is able to detect keypoints from reliable regions (e.g. building, traffic lane) and suppress unreliable areas (e.g. sky, car) implicitly instead of relying on explicit semantic labels. This boosts the accuracy of keypoint matching by reducing the number of features sensitive to appearance changes and avoiding the need for additional segmentation networks at test time. Moreover, our descriptors are augmented with semantics and have stronger discriminative ability, providing more inliers at test time. Particularly, experiments on long-term large-scale visual localization Aachen Day Night and RobotCar-Seasons datasets demonstrate that our model outperforms previous local features and gives competitive accuracy to advanced matchers but is about 2 and 3 times faster when using 2k and 4k keypoints, respectively. Code is available at https://github.com/feixue94/sfd2. | [
7880
] | Train |
42,323 | 24 | Title: Near Optimal Private and Robust Linear Regression
Abstract: We study the canonical statistical estimation problem of linear regression from $n$ i.i.d.~examples under $(\varepsilon,\delta)$-differential privacy when some response variables are adversarially corrupted. We propose a variant of the popular differentially private stochastic gradient descent (DP-SGD) algorithm with two innovations: a full-batch gradient descent to improve sample complexity and a novel adaptive clipping to guarantee robustness. When there is no adversarial corruption, this algorithm improves upon the existing state-of-the-art approach and achieves a near optimal sample complexity. Under label-corruption, this is the first efficient linear regression algorithm to guarantee both $(\varepsilon,\delta)$-DP and robustness. Synthetic experiments confirm the superiority of our approach. | [
10947
] | Validation |
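The two ingredients named above, full-batch gradients and clipping, combine as in the following sketch; the fixed clipping threshold stands in for the paper's adaptive rule, so treat it as an illustration rather than the proposed algorithm.

```python
# Hedged sketch: full-batch gradient descent with per-example clipping plus
# Gaussian noise for (eps, delta)-DP linear regression. The constant clipping
# threshold is a placeholder for the paper's adaptive clipping.
import numpy as np

def dp_full_batch_gd(X, y, steps=100, lr=0.1, clip=1.0, noise_mult=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(steps):
        residual = X @ theta - y
        grads = residual[:, None] * X                  # per-example gradients (n, d)
        norms = np.linalg.norm(grads, axis=1)
        grads *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))[:, None]
        noise = rng.normal(0.0, noise_mult * clip, size=d)
        theta -= lr * (grads.sum(axis=0) + noise) / n  # noisy full-batch step
    return theta

X = np.random.default_rng(1).standard_normal((500, 5))
y = X @ np.ones(5)
print(dp_full_batch_gd(X, y))
```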
42,324 | 16 | Title: A CNN Based Framework for Unistroke Numeral Recognition in Air-Writing
Abstract: Air-writing refers to virtually writing linguistic characters through hand gestures in three-dimensional space with six degrees of freedom. In this paper, a generic convolutional neural network (CNN) based air-writing framework that relies only on a generic video camera is proposed. Gestures are performed using a marker of fixed color in front of a generic video camera, followed by color-based segmentation to identify the marker and track the trajectory of the marker tip. A pre-trained CNN is then used to classify the gesture. The recognition accuracy is further improved using transfer learning with the newly acquired data. The performance of the system varies greatly with illumination conditions due to the color-based segmentation. Under less fluctuating illumination conditions, the system is able to recognize isolated unistroke numerals of multiple languages. The proposed framework achieved 97.7%, 95.4% and 93.7% recognition rates in person-independent evaluation over English, Bengali and Devanagari numerals, respectively. | [] | Train |
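A minimal version of the color-based segmentation and tip-tracking step might look like the following OpenCV sketch; the HSV thresholds and the red-marker assumption are illustrative choices, not values from the paper.

```python
# Illustrative sketch of color-based marker tracking: threshold in HSV space,
# then record the centroid of the marker mask frame by frame.
import cv2
import numpy as np

def track_marker(video_path, lower=(0, 120, 120), upper=(10, 255, 255)):
    trajectory = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
        m = cv2.moments(mask)
        if m["m00"] > 0:  # marker visible: record its centroid
            trajectory.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    cap.release()
    return np.array(trajectory)  # (x, y) points later rasterized for the CNN
```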
42,325 | 34 | Title: Improving Oblivious Reconfigurable Networks with High Probability
Abstract: Oblivious Reconfigurable Networks (ORNs) use rapidly reconfiguring switches to create a dynamic time-varying topology. Prior theoretical work on ORNs has focused on the tradeoff between maximum latency and guaranteed throughput. This work shows that by relaxing the notion of guaranteed throughput to an achievable rate with high probability, one can achieve a significant improvement in the latency/throughput tradeoff. For a fixed maximum latency, we show that almost twice the maximum possible guaranteed throughput rate can be achieved with high probability. Alternatively for a fixed throughput value, relaxing to achievement with high probability decreases the maximum latency to almost the square root of the latency required to guarantee the throughput rate. We first give a lower bound on the best maximum latency possible given an achieved throughput rate with high probability. This is done using an LP duality style argument. We then give a family of ORN designs which achieves these tradeoffs. The connection schedule is based on the Vandermonde Basis Scheme of Amir, Wilson, Shrivastav, Weatherspoon, Kleinberg, and Agarwal, although the period and routing scheme differ significantly. We prove achievable throughput with high probability by interpreting the amount of flow on each edge as a sum of negatively associated variables, and applying a Chernoff bound. This gives us a design with maximum latency that is tight with our lower bound (up to a log factor) for almost all constant throughput values. | [
15013
] | Validation |
42,326 | 5 | Title: DGC: Training Dynamic Graphs with Spatio-Temporal Non-Uniformity using Graph Partitioning by Chunks
Abstract: Dynamic Graph Neural Network (DGNN) has shown a strong capability of learning dynamic graphs by exploiting both spatial and temporal features. Although DGNN has recently received considerable attention by AI community and various DGNN models have been proposed, building a distributed system for efficient DGNN training is still challenging. It has been well recognized that how to partition the dynamic graph and assign workloads to multiple GPUs plays a critical role in training acceleration. Existing works partition a dynamic graph into snapshots or temporal sequences, which only work well when the graph has uniform spatio-temporal structures. However, dynamic graphs in practice are not uniformly structured, with some snapshots being very dense while others are sparse. To address this issue, we propose DGC, a distributed DGNN training system that achieves a 1.25x - 7.52x speedup over the state-of-the-art in our testbed. DGC's success stems from a new graph partitioning method that partitions dynamic graphs into chunks, which are essentially subgraphs with modest training workloads and few inter connections. This partitioning algorithm is based on graph coarsening, which can run very fast on large graphs. In addition, DGC has a highly efficient run-time, powered by the proposed chunk fusion and adaptive stale aggregation techniques. Extensive experimental results on 3 typical DGNN models and 4 popular dynamic graph datasets are presented to show the effectiveness of DGC. | [
37256
] | Train |
42,327 | 16 | Title: Improving Continuous Sign Language Recognition with Cross-Lingual Signs
Abstract: This work is dedicated to continuous sign language recognition (CSLR), which is a weakly supervised task dealing with the recognition of continuous signs from videos, without any prior knowledge about the temporal boundaries between consecutive signs. Data scarcity heavily impedes the progress of CSLR. Existing approaches typically train CSLR models on a monolingual corpus, which is orders of magnitude smaller than that of speech recognition. In this work, we explore the feasibility of utilizing multilingual sign language corpora to facilitate monolingual CSLR. Our work is built upon the observation of cross-lingual signs, which originate from different sign languages but have similar visual signals (e.g., hand shape and motion). The underlying idea of our approach is to identify the cross-lingual signs in one sign language and properly leverage them as auxiliary training data to improve the recognition capability of another. To achieve the goal, we first build two sign language dictionaries containing isolated signs that appear in two datasets. Then we identify the sign-to-sign mappings between two sign languages via a well-optimized isolated sign language recognition model. At last, we train a CSLR model on the combination of the target data with original labels and the auxiliary data with mapped labels. Experimentally, our approach achieves state-of-the-art performance on two widely-used CSLR datasets: Phoenix-2014 and Phoenix-2014T. | [
7395
] | Train |
42,328 | 10 | Title: Natural Example-Based Explainability: a Survey
Abstract: Explainable Artificial Intelligence (XAI) has become increasingly significant for improving the interpretability and trustworthiness of machine learning models. While saliency maps have stolen the show for the last few years in the XAI field, their ability to reflect models' internal processes has been questioned. Although less in the spotlight, example-based XAI methods have continued to improve. This family encompasses methods that use examples as explanations for a machine learning model's predictions. This aligns with the psychological mechanisms of human reasoning and makes example-based explanations natural and intuitive for users to understand. Indeed, humans learn and reason by forming mental representations of concepts based on examples. This paper provides an overview of the state-of-the-art in natural example-based XAI, describing the pros and cons of each approach. A "natural" example simply means that it is directly drawn from the training data without involving any generative process. The exclusion of methods that require generating examples is justified by the need for plausibility which is in some regards required to gain a user's trust. Consequently, this paper will explore the following family of methods: similar examples, counterfactual and semi-factual, influential instances, prototypes, and concepts. In particular, it will compare their semantic definition, their cognitive impact, and added values. We hope it will encourage and facilitate future work on natural example-based XAI. | [] | Test |
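As a concrete instance of the "similar examples" family surveyed above, the following sketch returns the nearest training instances to a query in an embedding space; it is a generic illustration, not a method from the survey.

```python
# A minimal "similar examples" explainer: return the training instances
# closest to a query point in some embedding space.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def similar_example_explanations(train_embeddings, query_embedding, k=3):
    nn = NearestNeighbors(n_neighbors=k).fit(train_embeddings)
    distances, indices = nn.kneighbors(query_embedding.reshape(1, -1))
    return indices[0], distances[0]  # indices of the explaining training examples

X_train = np.random.default_rng(0).standard_normal((100, 16))
idx, dist = similar_example_explanations(X_train, X_train[0])
print(idx, dist)
```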
42,329 | 24 | Title: Neural Fine-Gray: Monotonic neural networks for competing risks
Abstract: Time-to-event modelling, known as survival analysis, differs from standard regression as it addresses censoring in patients who do not experience the event of interest. Despite competitive performances in tackling this problem, machine learning methods often ignore other competing risks that preclude the event of interest. This practice biases the survival estimation. Extensions to address this challenge often rely on parametric assumptions or numerical estimations leading to sub-optimal survival approximations. This paper leverages constrained monotonic neural networks to model each competing survival distribution. This modelling choice ensures the exact likelihood maximisation at a reduced computational cost by using automatic differentiation. The effectiveness of the solution is demonstrated on one synthetic and three medical datasets. Finally, we discuss the implications of considering competing risks when developing risk scores for medical practice. | [] | Train |
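A hedged sketch of the monotonicity trick described above: constraining the weights on the time input to be non-negative (here via a softplus reparametrization) and using non-decreasing activations makes the output non-decreasing in time. The architecture is illustrative, not the authors' exact model.

```python
# Sketch of a time-monotonic network: non-negative weights on the t path and
# non-decreasing activations guarantee the output is non-decreasing in t.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicInTime(nn.Module):
    def __init__(self, x_dim, hidden=32):
        super().__init__()
        self.embed_x = nn.Linear(x_dim, hidden)          # covariates: unconstrained
        self.w_t = nn.Parameter(torch.randn(hidden, 1))  # time path: kept >= 0
        self.w_out = nn.Parameter(torch.randn(1, hidden))

    def forward(self, x, t):
        h = torch.tanh(self.embed_x(x) + t @ F.softplus(self.w_t).T)
        return torch.sigmoid(h @ F.softplus(self.w_out).T)  # non-decreasing in t

net = MonotonicInTime(x_dim=5)
x = torch.randn(4, 5)
print(net(x, torch.tensor([[0.1], [1.0], [2.0], [5.0]])))
```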
42,330 | 27 | Title: RSPT: Reconstruct Surroundings and Predict Trajectories for Generalizable Active Object Tracking
Abstract: Active Object Tracking (AOT) aims to maintain a specific relation between the tracker and object(s) by autonomously controlling the motion system of a tracker given observations. AOT has wide-ranging applications, such as in mobile robots and autonomous driving. However, building a generalizable active tracker that works robustly across different scenarios remains a challenge, especially in unstructured environments with cluttered obstacles and diverse layouts. We argue that constructing a state representation capable of modeling the geometry structure of the surroundings and the dynamics of the target is crucial for achieving this goal. To address this challenge, we present RSPT, a framework that forms a structure-aware motion representation by Reconstructing the Surroundings and Predicting the target Trajectory. Additionally, we enhance the generalization of the policy network by training in an asymmetric dueling mechanism. We evaluate RSPT on various simulated scenarios and show that it outperforms existing methods in unseen environments, particularly those with complex obstacles and layouts. We also demonstrate the successful transfer of RSPT to real-world settings. Project Website: https://sites.google.com/view/aot-rspt. | [
19482
] | Train |
42,331 | 9 | Title: Agreement theorems for high dimensional expanders in the small soundness regime: the role of covers
Abstract: Given a family $X$ of subsets of $[n]$ and an ensemble of local functions $\{f_s:s\to\Sigma\; | \; s\in X\}$, an agreement test is a randomized property tester that is supposed to test whether there is some global function $G:[n]\to\Sigma$ such that $f_s=G|_s$ for many sets $s$. A "classical" small-soundness agreement theorem is a list-decoding $(LD)$ statement, saying that \[\tag{$LD$} Agree(\{f_s\})>\varepsilon \quad \Longrightarrow \quad \exists G^1,\dots, G^\ell,\quad P_s[f_s\overset{0.99}{\approx}G^i|_s]\geq poly(\varepsilon),\;i=1,\dots,\ell. \] Such a statement is motivated by PCP questions and has been shown in the case where $X=\binom{[n]}k$, or where $X$ is a collection of low dimensional subspaces of a vector space. In this work we study the small-soundness case of agreement tests on high dimensional expanders $X$. It has been an open challenge to analyze their small soundness behavior. Surprisingly, the small soundness behavior turns out to be governed by the topological covers of $X$. We show that: 1. If $X$ has no connected covers, then $(LD)$ holds, provided that $X$ satisfies an additional expansion property. 2. If $X$ has a connected cover, then $(LD)$ necessarily fails. 3. If $X$ has a connected cover (and assuming the additional expansion property), we replace the $(LD)$ by a weaker statement we call lift-decoding: \[ \tag{$LFD$} Agree(\{f_s\})>\varepsilon \Longrightarrow \quad \exists\text{ cover }\rho:Y\twoheadrightarrow X,\text{ and }G:Y(0)\to\Sigma,\text{ such that }\] \[P_{{\tilde s\twoheadrightarrow s}}[f_s \overset{0.99}{\approx} G|_{\tilde s}] \geq poly(\varepsilon),\] where ${\tilde s\twoheadrightarrow s}$ means that $\rho(\tilde s)=s$. The additional expansion property is cosystolic expansion of a complex derived from $X$, which holds for the spherical building and for quotients of the Bruhat-Tits building. | [
11705
] | Validation |
42,332 | 24 | Title: Neural Relation Graph: A Unified Framework for Identifying Label Noise and Outlier Data
Abstract: Diagnosing and cleaning data is a crucial step for building robust machine learning systems. However, identifying problems within large-scale datasets with real-world distributions is challenging due to the presence of complex issues such as label errors, under-representation, and outliers. In this paper, we propose a unified approach for identifying the problematic data by utilizing a largely ignored source of information: a relational structure of data in the feature-embedded space. To this end, we present scalable and effective algorithms for detecting label errors and outlier data based on the relational graph structure of data. We further introduce a visualization tool that provides contextual information of a data point in the feature-embedded space, serving as an effective tool for interactively diagnosing data. We evaluate the label error and outlier/out-of-distribution (OOD) detection performances of our approach on the large-scale image, speech, and language domain tasks, including ImageNet, ESC-50, and MNLI. Our approach achieves state-of-the-art detection performance on all tasks considered and demonstrates its effectiveness in debugging large-scale real-world datasets across various domains. | [] | Validation |
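A drastically simplified version of the relational idea above: score a point as a potential label error when its label disagrees with most of its nearest neighbors in the feature-embedded space. The paper's actual algorithms are more elaborate; this only conveys the intuition.

```python
# Simplified sketch: flag a training point as a potential label error when
# its label disagrees with its k nearest neighbors in embedding space.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def label_error_scores(embeddings, labels, k=10):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    _, idx = nn.kneighbors(embeddings)           # idx[:, 0] is the point itself
    neighbor_labels = labels[idx[:, 1:]]
    agreement = (neighbor_labels == labels[:, None]).mean(axis=1)
    return 1.0 - agreement                       # high score = suspicious label

emb = np.random.default_rng(0).standard_normal((200, 8))
lab = np.random.default_rng(1).integers(0, 3, size=200)
print(label_error_scores(emb, lab)[:5])
```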
42,333 | 36 | Title: Coarse Correlated Equilibrium Implies Nash Equilibrium in Two-Player Zero-Sum Games
Abstract: We give a simple proof of the well-known result that the marginal strategies of a coarse correlated equilibrium form a Nash equilibrium in two-player zero-sum games. A corollary of this fact is that no-external-regret learning algorithms that converge to the set of coarse correlated equilibria will also converge to Nash equilibria in two-player zero-sum games. We show an approximate version: that $\epsilon$-coarse correlated equilibria imply $2\epsilon$-Nash equilibria. | [] | Train |
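The approximate statement admits a short argument. With player 1 maximizing the payoff $u$ and $(x, y)$ the marginals of an $\epsilon$-coarse correlated equilibrium $\sigma$, a sketch (following the standard proof, with our own notation):

```latex
% eps-CCE conditions for the two players:
\begin{align*}
\mathbb{E}_{\sigma}[u] &\ge u(a', y) - \epsilon \quad \text{for all pure } a',\\
\mathbb{E}_{\sigma}[u] &\le u(x, b') + \epsilon \quad \text{for all pure } b'.
\end{align*}
% Averaging the first bound over a' ~ x and the second over b' ~ y gives
% u(x, y) - eps <= E_sigma[u] <= u(x, y) + eps. Hence:
\begin{align*}
\max_{a'} u(a', y) &\le \mathbb{E}_{\sigma}[u] + \epsilon \le u(x, y) + 2\epsilon,\\
\min_{b'} u(x, b') &\ge \mathbb{E}_{\sigma}[u] - \epsilon \ge u(x, y) - 2\epsilon,
\end{align*}
% so (x, y) is a 2eps-Nash equilibrium.
```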
42,334 | 7 | Title: Parametrised polyconvex hyperelasticity with physics-augmented neural networks
Abstract: In the present work, neural networks are applied to formulate parametrised hyperelastic constitutive models. The models fulfill all common mechanical conditions of hyperelasticity by construction. In particular, partially input-convex neural network (pICNN) architectures are applied based on feed-forward neural networks. Receiving two different sets of input arguments, pICNNs are convex in one of them, while for the other, they represent arbitrary relationships which are not necessarily convex. In this way, the model can fulfill convexity conditions stemming from mechanical considerations without being too restrictive on the functional relationship in additional parameters, which may not necessarily be convex. Two different models are introduced, where one can represent arbitrary functional relationships in the additional parameters, while the other is monotonic in the additional parameters. As a first proof of concept, the model is calibrated to data generated with two differently parametrised analytical potentials, whereby three different pICNN architectures are investigated. In all cases, the proposed model shows excellent performance. | [
39195
] | Test |
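A minimal sketch of the pICNN structure described above: non-negative weights on the hidden-state path and convex non-decreasing activations give convexity in one input `x`, while a second input `p` enters through an unconstrained network. All architectural details below are illustrative assumptions, not the paper's exact model.

```python
# Partially input-convex network sketch: output is convex in x (non-negative
# z-path weights, convex non-decreasing activations) and arbitrary in p,
# because the p-dependent terms are constants with respect to x.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PICNN(nn.Module):
    def __init__(self, x_dim, p_dim, hidden=16):
        super().__init__()
        self.p_net = nn.Sequential(nn.Linear(p_dim, hidden), nn.Tanh())  # free in p
        self.x0 = nn.Linear(x_dim, hidden)
        self.x1 = nn.Linear(x_dim, hidden)
        self.wz = nn.Parameter(torch.randn(hidden, hidden))  # constrained >= 0
        self.out = nn.Parameter(torch.randn(1, hidden))      # constrained >= 0

    def forward(self, x, p):
        c = self.p_net(p)                       # constant w.r.t. x: convexity-safe
        z = F.softplus(self.x0(x) + c)          # convex in x
        z = F.softplus(z @ F.softplus(self.wz).T + self.x1(x) + c)
        return z @ F.softplus(self.out).T       # non-negative sum stays convex

model = PICNN(x_dim=3, p_dim=2)
print(model(torch.randn(5, 3), torch.randn(5, 2)))
```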
42,335 | 31 | Title: Noise-Robust Dense Retrieval via Contrastive Alignment Post Training
Abstract: The success of contextual word representations and advances in neural information retrieval have made dense vector-based retrieval a standard approach for passage and document ranking. While effective and efficient, dual-encoders are brittle to variations in query distributions and noisy queries. Data augmentation can make models more robust but introduces overhead to training set generation and requires retraining and index regeneration. We present Contrastive Alignment POst Training (CAPOT), a highly efficient finetuning method that improves model robustness without requiring index regeneration, training-set optimization, or alteration. CAPOT enables robust retrieval by freezing the document encoder while the query encoder learns to align noisy queries with their unaltered root. We evaluate CAPOT on noisy variants of MSMARCO, Natural Questions, and Trivia QA passage retrieval, finding CAPOT has a similar impact as data augmentation with none of its overhead. | [] | Validation |
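A generic sketch of the alignment objective described above (not the released CAPOT code): the clean "root" query provides a gradient-free anchor, and in-batch negatives supply the contrast.

```python
# Hedged sketch: train only the noisy-query path so that a noisy query embeds
# close to its clean root, with in-batch contrastive negatives.
import torch
import torch.nn.functional as F

def alignment_loss(query_encoder, clean_queries, noisy_queries, temperature=0.05):
    with torch.no_grad():                        # clean anchors: no gradient
        anchors = query_encoder(clean_queries)
    noisy = query_encoder(noisy_queries)         # only this path is trained
    sims = F.normalize(noisy, dim=-1) @ F.normalize(anchors, dim=-1).T
    targets = torch.arange(sims.size(0))         # i-th noisy query matches i-th anchor
    return F.cross_entropy(sims / temperature, targets)

enc = torch.nn.Linear(32, 16)                    # toy stand-in for a query encoder
loss = alignment_loss(enc, torch.randn(8, 32), torch.randn(8, 32))
loss.backward()
```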
42,336 | 24 | Title: Breaking the Complexity Barrier in Compositional Minimax Optimization
Abstract: Compositional minimax optimization is a pivotal yet under-explored challenge across machine learning, including distributionally robust training and policy evaluation for reinforcement learning. Current techniques exhibit suboptimal complexity or rely heavily on large batch sizes. This paper proposes Nested STOchastic Recursive Momentum (NSTORM), attaining the optimal sample complexity of $O(\kappa^3/\epsilon^3)$ for finding an $\epsilon$-accurate solution. However, NSTORM requires low learning rates, potentially limiting applicability. Thus we introduce ADAptive NSTORM (ADA-NSTORM) with adaptive learning rates, proving it achieves the same sample complexity while experiments demonstrate greater effectiveness. Our methods match lower bounds for minimax optimization without large batch requirements, validated through extensive experiments. This work significantly advances compositional minimax optimization, a crucial capability for distributional robustness and policy evaluation. | [] | Validation |
42,337 | 26 | Title: Investigating disaster response through social media data and the Susceptible-Infected-Recovered (SIR) model: A case study of 2020 Western U.S. wildfire season
Abstract: Effective disaster response is critical for affected communities. Responders and decision-makers would benefit from reliable, timely measures of the issues impacting their communities during a disaster, and social media offers a potentially rich data source. Social media can reflect public concerns and demands during a disaster, offering valuable insights for decision-makers to understand evolving situations and optimize resource allocation. We used Bidirectional Encoder Representations from Transformers (BERT) topic modeling to cluster topics from Twitter data. Then, we conducted a temporal-spatial analysis to examine the distribution of these topics across different regions during the 2020 western U.S. wildfire season. Our results show that Twitter users mainly focused on three topics: "health impact," "damage," and "evacuation." We used the Susceptible-Infected-Recovered (SIR) theory to explore the magnitude and velocity of topic diffusion on Twitter. The results displayed a clear relationship between topic trends and wildfire propagation patterns. The estimated parameters obtained from the SIR model in selected cities revealed that residents exhibited high levels of concern during the wildfire. Our study details how the SIR model and topic modeling using social media data can provide decision-makers with a quantitative approach to measure disaster response and support their decision-making processes. | [] | Validation |
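The SIR dynamics used above are the standard three-state ODE system; the rates below are arbitrary illustrative values, not the parameters estimated in the paper.

```python
# Standard SIR dynamics: susceptible -> infected -> recovered fractions,
# integrated with scipy. beta and gamma are illustrative, not fitted values.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t = np.linspace(0, 60, 200)                          # days
trajectory = odeint(sir, y0=[0.99, 0.01, 0.0], t=t, args=(0.4, 0.1))
print("peak infected fraction:", trajectory[:, 1].max())
```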
42,338 | 16 | Title: Self-supervised Hypergraphs for Learning Multiple World Interpretations
Abstract: We present a method for learning multiple scene representations given a small labeled set, by exploiting the relationships between such representations in the form of a multi-task hypergraph. We also show how we can use the hypergraph to improve a powerful pretrained VisTransformer model without any additional labeled data. In our hypergraph, each node is an interpretation layer (e.g., depth or segmentation) of the scene. Within each hyperedge, one or several input nodes predict the layer at the output node. Thus, each node could be an input node in some hyperedges and an output node in others. In this way, multiple paths can reach the same node, to form ensembles from which we obtain robust pseudolabels, which allow self-supervised learning in the hypergraph. We test different ensemble models and different types of hyperedges and show superior performance to other multi-task graph models in the field. We also introduce Dronescapes, a large video dataset captured with UAVs in different complex real-world scenes, with multiple representations, suitable for multi-task learning. | [] | Test |
42,339 | 24 | Title: Regular Time-series Generation using SGM
Abstract: Score-based generative models (SGMs) are generative models that are in the spotlight these days. Time-series data frequently occur in our daily life, e.g., stock data, climate data, and so on. In particular, time-series forecasting and classification are popular research topics in the field of machine learning. SGMs are also known for outperforming other generative models. As a result, we apply SGMs to synthesize time-series data by learning conditional score functions. We propose a conditional score network for the time-series generation domain. Furthermore, we also derive the loss function between the score matching and the denoising score matching in the time-series generation domain. Finally, we achieve state-of-the-art results on real-world datasets in terms of sampling diversity and quality. | [
42957,
17941,
27257,
8668,
16958
] | Validation |
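For reference, a generic denoising score matching step, the training principle behind the SGMs mentioned above, can be written as follows; the paper's conditional score network is replaced by an arbitrary callable, so this is a sketch of the principle rather than the proposed method.

```python
# Generic denoising score matching: perturb a clean series with Gaussian noise
# at a sampled scale sigma and regress the score of the perturbation kernel.
import torch

def dsm_loss(score_net, x, sigmas):
    # x: (batch, length, channels); score_net(x_noisy, sigma) -> same shape as x
    idx = torch.randint(0, len(sigmas), (x.size(0),))
    sigma = sigmas[idx].view(-1, 1, 1)
    noise = torch.randn_like(x)
    x_noisy = x + sigma * noise
    target = -noise / sigma                  # score of the Gaussian perturbation
    pred = score_net(x_noisy, sigma)
    return ((pred - target) ** 2 * sigma ** 2).mean()  # sigma^2 weighting

net = lambda x, s: -x                        # toy stand-in for a score network
print(dsm_loss(net, torch.randn(4, 50, 1), torch.tensor([0.1, 0.5, 1.0])))
```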
42,340 | 10 | Title: FuzzyLogic.jl: a Flexible Library for Efficient and Productive Fuzzy Inference
Abstract: This paper introduces \textsc{FuzzyLogic.jl}, a Julia library to perform fuzzy inference. The library is fully open-source and released under a permissive license. The core design principles of the library are: user-friendliness, flexibility, efficiency and interoperability. In particular, our library is easy to use, allows users to specify fuzzy systems in an expressive yet concise domain specific language, has several visualization tools, supports popular inference systems like Mamdani, Sugeno and Type-2 systems, can be easily expanded with custom user settings or algorithms and can perform fuzzy inference efficiently. It also allows reading fuzzy models from other formats such as Matlab .fis, FCL or FML. In this paper, we describe the library's main features and benchmark it with a few examples, showing it achieves significant speedup compared to the Matlab fuzzy toolbox. | [] | Validation |
42,341 | 26 | Title: Migration Reframed? A multilingual analysis on the stance shift in Europe during the Ukrainian crisis
Abstract: The war in Ukraine seems to have positively changed the attitude toward the critical societal topic of migration in Europe – at least towards refugees from Ukraine. We investigate whether this impression is substantiated by how the topic is reflected in online news and social media, thus linking the representation of the issue on the Web to its perception in society. For this purpose, we combine and adapt leading-edge automatic text processing for a novel multilingual stance detection approach. Starting from 5.5M Twitter posts published by 565 European news outlets in one year, beginning September 2021, plus replies, we perform a multilingual analysis of migration-related media coverage and associated social media interaction for Europe and selected European countries. The results of our analysis show that there is actually a reframing of the discussion illustrated by the terminology change, e.g., from “migrant” to “refugee”, often even accentuated with phrases such as “real refugees”. However, concerning a stance shift in public perception, the picture is more diverse than expected. All analyzed cases show a noticeable temporal stance shift around the start of the war in Ukraine. Still, there are apparent national differences in the size and stability of this shift. | [
10641
] | Train |
42,342 | 16 | Title: Diversity-Measurable Anomaly Detection
Abstract: Reconstruction-based anomaly detection models achieve their purpose by suppressing the generalization ability for anomaly. However, diverse normal patterns are consequently not well reconstructed either. Although some efforts have been made to alleviate this problem by modeling sample diversity, they suffer from shortcut learning due to undesired transmission of abnormal information. In this paper, to better handle the tradeoff problem, we propose the Diversity-Measurable Anomaly Detection (DMAD) framework to enhance reconstruction diversity while avoiding undesired generalization on anomalies. To this end, we design Pyramid Deformation Module (PDM), which models diverse normals and measures the severity of anomaly by estimating multi-scale deformation fields from reconstructed reference to original input. Integrated with an information compression module, PDM essentially decouples deformation from prototypical embedding and makes the final anomaly score more reliable. Experimental results on both surveillance videos and industrial images demonstrate the effectiveness of our method. In addition, DMAD works equally well in front of contaminated data and anomaly-like normal samples. | [
40265,
34749
] | Train |
42,343 | 28 | Title: Finding a Burst of Positives via Nonadaptive Semiquantitative Group Testing
Abstract: Motivated by testing for pathogenic diseases, we consider a new nonadaptive group testing problem for which: (1) positives occur within a burst, capturing the fact that infected test subjects often come in clusters, and (2) the test outcomes arise from semiquantitative measurements that provide coarse information about the number of positives in any tested group. Our model generalizes prior work on detecting a single burst with classical group testing [1] to the setting of semiquantitative group testing (SQGT) [2]. Specifically, we study the setting where the burst-length $\ell$ is known and the semiquantitative tests provide potentially nonuniform estimates on the number of positives in a test group. The estimates represent the index of a quantization bin containing the (exact) total number of positives, for arbitrary thresholds $\eta_1,\dots,\eta_s$. Interestingly, we show that the minimum number of tests needed for burst identification is essentially only a function of the largest threshold $\eta_s$. In this context, our main result is an order-optimal test scheme that can recover any burst of length $\ell$ using roughly $\left\lfloor \frac{\ell}{2\eta_s} \right\rfloor + \log_{s+1}(n)$ measurements. This suggests that a large saturation level $\eta_s$ is more important than finely quantized information when dealing with bursts. We also provide results for related modeling assumptions and specialized choices of thresholds. | [] | Train |
42,344 | 24 | Title: DOMINO: Domain-invariant Hyperdimensional Classification for Multi-Sensor Time Series Data
Abstract: With the rapid evolution of the Internet of Things, many real-world applications utilize heterogeneously connected sensors to capture time-series information. Edge-based machine learning (ML) methodologies are often employed to analyze locally collected data. However, a fundamental issue across data-driven ML approaches is distribution shift. It occurs when a model is deployed on a data distribution different from what it was trained on, and can substantially degrade model performance. Additionally, increasingly sophisticated deep neural networks (DNNs) have been proposed to capture spatial and temporal dependencies in multi-sensor time series data, requiring intensive computational resources beyond the capacity of today's edge devices. While brain-inspired hyperdimensional computing (HDC) has been introduced as a lightweight solution for edge-based learning, existing HDCs are also vulnerable to the distribution shift challenge. In this paper, we propose DOMINO, a novel HDC learning framework addressing the distribution shift problem in noisy multi-sensor time-series data. DOMINO leverages efficient and parallel matrix operations on high-dimensional space to dynamically identify and filter out domain-variant dimensions. Our evaluation on a wide range of multi-sensor time series classification tasks shows that DOMINO achieves on average 2.04% higher accuracy than state-of-the-art (SOTA) DNN-based domain generalization techniques, and delivers 16.34x faster training and 2.89x faster inference. More importantly, DOMINO performs notably better when learning from partially labeled and highly imbalanced data, providing 10.93x higher robustness against hardware noises than SOTA DNNs. | [
39402,
28111
] | Test |
42,345 | 23 | Title: Can You Improve My Code? Optimizing Programs with Local Search
Abstract: This paper introduces a local search method for improving an existing program with respect to a measurable objective. Program Optimization with Locally Improving Search (POLIS) exploits the structure of a program, defined by its lines. POLIS improves a single line of the program while keeping the remaining lines fixed, using existing brute-force synthesis algorithms, and continues iterating until it is unable to improve the program's performance. POLIS was evaluated with a 27-person user study, where participants wrote programs attempting to maximize the score of two single-agent games: Lunar Lander and Highway. POLIS was able to substantially improve the participants' programs with respect to the game scores. A proof-of-concept demonstration on existing Stack Overflow code measures applicability in real-world problems. These results suggest that POLIS could be used as a helpful programming assistant for programming problems with measurable objectives. | [] | Test |
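The search loop at the heart of POLIS can be illustrated on a toy problem in which "lines" are integers and the brute-force synthesizer is a simple candidate enumeration; what carries over is the structure of the loop, improving one line with the rest fixed and iterating to a fixed point, not the toy objective.

```python
# Toy re-creation of line-wise local search: repeatedly replace one "line"
# with the best-scoring candidate while the remaining lines stay fixed,
# until no single-line edit improves the score.
def polis_style_search(program, candidates_for_line, score):
    improved = True
    while improved:
        improved = False
        for i in range(len(program)):
            best_line, best_score = program[i], score(program)
            for cand in candidates_for_line(program, i):
                trial = program[:i] + [cand] + program[i + 1:]
                if score(trial) > best_score:
                    best_line, best_score = cand, score(trial)
            if best_line != program[i]:
                program[i] = best_line       # commit the improving single-line edit
                improved = True
    return program

target = [3, 1, 4, 1, 5]
score = lambda prog: -sum(abs(a - b) for a, b in zip(prog, target))
print(polis_style_search([0, 0, 0, 0, 0], lambda p, i: range(10), score))
```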
42,346 | 16 | Title: TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration
Abstract: We propose a novel task for generating 3D dance movements that simultaneously incorporate both text and music modalities. Unlike existing works that generate dance movements using a single modality such as music, our goal is to produce richer dance movements guided by the instructive information provided by the text. However, the lack of paired motion data with both music and text modalities limits the ability to generate dance movements that integrate both. To alleviate this challenge, we propose to utilize a 3D human motion VQ-VAE to project the motions of the two datasets into a latent space consisting of quantized vectors, which effectively mix the motion tokens from the two datasets with different distributions for training. Additionally, we propose a cross-modal transformer to integrate text instructions into motion generation architecture for generating 3D dance movements without degrading the performance of music-conditioned dance generation. To better evaluate the quality of the generated motion, we introduce two novel metrics, namely Motion Prediction Distance (MPD) and Freezing Score, to measure the coherence and freezing percentage of the generated motion. Extensive experiments show that our approach can generate realistic and coherent dance movements conditioned on both text and music while maintaining comparable performance with the two single modalities. Code will be available at: https://garfield-kh.github.io/TM2D/. | [] | Test |
42,347 | 11 | Title: Learning Optimal "Pigovian Tax" in Sequential Social Dilemmas
Abstract: In multi-agent reinforcement learning, each agent acts to maximize its individual accumulated rewards. Nevertheless, individual accumulated rewards could not fully reflect how others perceive them, resulting in selfish behaviors that undermine global performance. The externality theory, defined as ``the activities of one economic actor affect the activities of another in ways that are not reflected in market transactions,'' is applicable to analyze the social dilemmas in MARL. One of its most profound non-market solutions, ``Pigovian Tax'', which internalizes externalities by taxing those who create negative externalities and subsidizing those who create positive externalities, could aid in developing a mechanism to resolve MARL's social dilemmas. The purpose of this paper is to apply externality theory to analyze social dilemmas in MARL. To internalize the externalities in MARL, the \textbf{L}earning \textbf{O}ptimal \textbf{P}igovian \textbf{T}ax method (LOPT), is proposed, where an additional agent is introduced to learn the tax/allowance allocation policy so as to approximate the optimal ``Pigovian Tax'' which accurately reflects the externalities for all agents. Furthermore, a reward shaping mechanism based on the approximated optimal ``Pigovian Tax'' is applied to reduce the social cost of each agent and tries to alleviate the social dilemmas. Compared with existing state-of-the-art methods, the proposed LOPT leads to higher collective social welfare in both the Escape Room and the Cleanup environments, which shows the superiority of our method in solving social dilemmas. | [
21893
] | Train |
42,348 | 16 | Title: Benchmarking Deepart Detection
Abstract: Deepfake technologies have been blurring the boundaries between the real and unreal, likely resulting in malicious events. By leveraging newly emerged deepfake technologies, deepfake researchers have been making great strides in creating deepfake artworks (deeparts), which are further closing the gap between reality and fantasy. To address the ethics questions that may arise, this paper establishes a deepart detection database (DDDB) that consists of a set of high-quality conventional art images (conarts) and five sets of deepart images generated by five state-of-the-art deepfake models. This database enables us to explore once-for-all deepart detection and continual deepart detection. For the two new problems, we suggest four benchmark evaluations and four families of solutions on the constructed DDDB. The comprehensive study demonstrates the effectiveness of the proposed solutions on the established benchmark dataset, which is capable of paving a way to more interesting directions of deepart detection. The constructed benchmark dataset and the source code will be made publicly available. | [
2317
] | Train |
42,349 | 6 | Title: So, I Can Feel Normal: Participatory Design for Accessible Social Media Sites for Individuals with Traumatic Brain Injury
Abstract: Traumatic brain injury (TBI) can result in chronic sensorimotor, cognitive, psychosocial, and communication challenges that can limit social participation. Social media can be a useful outlet for social participation for individuals with TBI, but there are barriers to access. While research has drawn attention to the nature of access barriers, few studies have investigated technological solutions to address these barriers, particularly considering the perspectives of individuals with TBI. To address this gap in knowledge, we used a participatory approach to engage 10 adults with TBI in conceptualizing tools to address their challenges accessing Facebook. Participants described multifaceted challenges in using social media, including interface overload, social comparisons, and anxiety over self-presentation and communication after injury. They discussed their needs and preferences and generated ideas for design solutions. Our work contributes to designing assistive and accessibility technology to facilitate an equal access to the benefits of social media for individuals with TBI. | [] | Train |
42,350 | 34 | Title: Algorithmic Aspects of the Log-Laplace Transform and a Non-Euclidean Proximal Sampler
Abstract: The development of efficient sampling algorithms catering to non-Euclidean geometries has been a challenging endeavor, as discretization techniques which succeed in the Euclidean setting do not readily carry over to more general settings. We develop a non-Euclidean analog of the recent proximal sampler of [LST21], which naturally induces regularization by an object known as the log-Laplace transform (LLT) of a density. We prove new mathematical properties (with an algorithmic flavor) of the LLT, such as strong convexity-smoothness duality and an isoperimetric inequality, which are used to prove a mixing time on our proximal sampler matching [LST21] under a warm start. As our main application, we show our warm-started sampler improves the value oracle complexity of differentially private convex optimization in $\ell_p$ and Schatten-$p$ norms for $p \in [1, 2]$ to match the Euclidean setting [GLL22], while retaining state-of-the-art excess risk bounds [GLLST23]. We find our investigation of the LLT to be a promising proof-of-concept of its utility as a tool for designing samplers, and outline directions for future exploration. | [] | Train |
42,351 | 10 | Title: Scalable Coupling of Deep Learning with Logical Reasoning
Abstract: In the ongoing quest for hybridizing discrete reasoning with neural nets, there is an increasing interest in neural architectures that can learn how to solve discrete reasoning or optimization problems from natural inputs. In this paper, we introduce a scalable neural architecture and loss function dedicated to learning the constraints and criteria of NP-hard reasoning problems expressed as discrete Graphical Models. We empirically show our loss function is able to efficiently learn how to solve NP-hard reasoning problems from natural inputs, such as the symbolic, visual, or many-solutions Sudoku problems, as well as the energy optimization formulation of the protein design problem, providing data efficiency, interpretability, and a posteriori control over predictions. | [] | Validation |
42,352 | 4 | Title: PRAGTHOS: Practical Game Theoretically Secure Proof-of-Work Blockchain
Abstract: Security analysis of blockchain technology is an active domain of research. There has been both cryptographic and game-theoretic security analysis of Proof-of-Work (PoW) blockchains. Prominent work includes the cryptographic security analysis under the Universal Composable framework and Game-theoretic security analysis using Rational Protocol Design. These security analysis models rely on stricter assumptions that might not hold. In this paper, we analyze the security of PoW blockchain protocols. We first show how assumptions made by previous models need not be valid in reality, which attackers can exploit to launch attacks that these models fail to capture. These include Difficulty Alternating Attack, under which forking is possible for an adversary with less than 0.5 mining power, Quick-Fork Attack, a general bound on selfish mining attack and transaction withholding attack. Following this, we argue why previous models for security analysis fail to capture these attacks and propose a more practical framework for security analysis pRPD. We then propose a framework to build PoW blockchains PRAGTHOS, which is secure from the attacks mentioned above. Finally, we argue that PoW blockchains complying with the PRAGTHOS framework are secure against a computationally bounded adversary under certain conditions on the reward scheme. | [] | Validation |
42,353 | 24 | Title: Deep Learning for Time Series Classification and Extrinsic Regression: A Current Survey
Abstract: Time Series Classification and Extrinsic Regression are important and challenging machine learning tasks. Deep learning has revolutionized natural language processing and computer vision and holds great promise in other fields such as time series analysis where the relevant features must often be abstracted from the raw data but are not known a priori. This paper surveys the current state of the art in the fast-moving field of deep learning for time series classification and extrinsic regression. We review different network architectures and training methods used for these tasks and discuss the challenges and opportunities when applying deep learning to time series data. We also summarize two critical applications of time series classification and extrinsic regression, human activity recognition and satellite earth observation. | [
28496,
34152,
32869
] | Train |
42,354 | 23 | Title: xASTNN: Improved Code Representations for Industrial Practice
Abstract: The application of deep learning techniques in software engineering is becoming increasingly popular. One key problem is developing high-quality and easy-to-use source code representations for code-related tasks. The research community has achieved impressive results in recent years. However, due to deployment difficulties and performance bottlenecks, these approaches are seldom applied in industry. In this paper, we present xASTNN, an eXtreme Abstract Syntax Tree (AST)-based Neural Network for source code representation, aiming to push this technique to industrial practice. The proposed xASTNN has three advantages. First, xASTNN is completely based on widely-used ASTs and does not require complicated data pre-processing, making it applicable to various programming languages and practical scenarios. Second, three closely-related designs are proposed to guarantee the effectiveness of xASTNN, including statement subtree sequence for code naturalness, gated recursive unit for syntactical information, and gated recurrent unit for sequential information. Third, a dynamic batching algorithm is introduced to significantly reduce the time complexity of xASTNN. Two code comprehension downstream tasks, code classification and code clone detection, are adopted for evaluation. The results demonstrate that our xASTNN can improve the state-of-the-art while being faster than the baselines. | [] | Train |
42,355 | 27 | Title: WALL-E: Embodied Robotic WAiter Load Lifting with Large Language Model
Abstract: Enabling robots to understand language instructions and react accordingly to visual perception has been a long-standing goal in the robotics research community. Achieving this goal requires cutting-edge advances in natural language processing, computer vision, and robotics engineering. Thus, this paper mainly investigates the potential of integrating the most recent Large Language Models (LLMs) with existing visual grounding and robotic grasping systems to enhance the effectiveness of human-robot interaction. We introduce WALL-E (Embodied Robotic WAiter load lifting with Large Language model) as an example of this integration. The system utilizes the ChatGPT LLM to summarize the preference object of the users as a target instruction via multi-round interactive dialogue. The target instruction is then forwarded to a visual grounding system for object pose and size estimation, following which the robot grasps the object accordingly. We deploy this LLM-empowered system on the physical robot to provide a more user-friendly interface for the instruction-guided grasping task. Further experimental results on various real-world scenarios demonstrate the feasibility and efficacy of our proposed framework. See the project website at: https://star-uu-wang.github.io/WALL-E/ | [
40192,
26562,
22115,
13700,
21702,
295,
29419,
7198
] | Train |
42,356 | 27 | Title: Incremental procedural and sensorimotor learning in cognitive humanoid robots
Abstract: The ability to automatically learn movements and behaviors of increasing complexity is a long-term goal in autonomous systems. Indeed, this is a very complex problem that involves understanding how knowledge is acquired and reused by humans as well as proposing mechanisms that allow artificial agents to reuse previous knowledge. Inspired by the first three sensorimotor substages of Jean Piaget's theory, this work presents a cognitive agent based on CONAIM (Conscious Attention-Based Integrated Model) that can learn procedures incrementally. Throughout the paper, we show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent. Experiments were conducted with a humanoid robot in a simulated environment modeled with the Cognitive Systems Toolkit (CST) performing an object tracking task. The system is modeled using a single procedural learning mechanism based on Reinforcement Learning. The agent's increasing cognitive complexity is managed by adding new terms to the reward function for each learning phase. Results show that this approach is capable of solving complex tasks incrementally. | [] | Train |
42,357 | 27 | Title: End-to-End Driving via Self-Supervised Imitation Learning Using Camera and LiDAR Data
Abstract: In autonomous driving, the end-to-end (E2E) driving approach that predicts vehicle control signals directly from sensor data is rapidly gaining attention. To learn a safe E2E driving system, one needs an extensive amount of driving data and human intervention. Vehicle control data is constructed by many hours of human driving, and it is challenging to construct large vehicle control datasets. Often, publicly available driving datasets are collected with limited driving scenes, and collecting vehicle control data is only available to vehicle manufacturers. To address these challenges, this paper proposes the first self-supervised learning framework, self-supervised imitation learning (SSIL), that can learn E2E driving networks without using driving command data. To construct pseudo steering angle data, the proposed SSIL predicts a pseudo target from the vehicle's poses at the current and previous time points, which are estimated with light detection and ranging sensors. Our numerical experiments demonstrate that the proposed SSIL framework achieves comparable E2E driving accuracy with the supervised learning counterpart. In addition, our qualitative analyses using a conventional visual explanation tool show that networks trained with the proposed SSIL and with its supervised counterpart attend to similar objects when making predictions. | [] | Train |
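The pseudo-label construction described above reduces to geometry on consecutive pose estimates; a hedged sketch follows, where the sign and scaling conventions are assumed rather than taken from the paper.

```python
# Sketch: derive a pseudo steering label from two estimated 2D positions as
# the wrapped heading change between consecutive time points.
import numpy as np

def pseudo_steering(prev_pose, curr_pose, prev_heading):
    dx, dy = curr_pose[0] - prev_pose[0], curr_pose[1] - prev_pose[1]
    heading = np.arctan2(dy, dx)                        # direction of travel
    delta = np.arctan2(np.sin(heading - prev_heading),
                       np.cos(heading - prev_heading))  # wrap to [-pi, pi]
    return delta, heading

delta, heading = pseudo_steering((0.0, 0.0), (1.0, 0.2), prev_heading=0.0)
print(np.degrees(delta))
```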
42,358 | 16 | Title: Category Feature Transformer for Semantic Segmentation
Abstract: Aggregation of multi-stage features has been revealed to play a significant role in semantic segmentation. Unlike previous methods employing point-wise summation or concatenation for feature aggregation, this study proposes the Category Feature Transformer (CFT) that explores the flow of category embedding and transformation among multi-stage features through the prevalent multi-head attention mechanism. CFT learns unified feature embeddings for individual semantic categories from high-level features during each aggregation process and dynamically broadcasts them to high-resolution features. Integrating the proposed CFT into a typical feature pyramid structure exhibits superior performance over a broad range of backbone networks. We conduct extensive experiments on popular semantic segmentation benchmarks. Specifically, the proposed CFT obtains a compelling 55.1% mIoU with greatly reduced model parameters and computations on the challenging ADE20K dataset. | [] | Validation |
42,359 | 13 | Title: Don't Bet on Luck Alone: Enhancing Behavioral Reproducibility of Quality-Diversity Solutions in Uncertain Domains
Abstract: Quality-Diversity (QD) algorithms are designed to generate collections of high-performing solutions while maximizing their diversity in a given descriptor space. However, in the presence of unpredictable noise, the fitness and descriptor of the same solution can differ significantly from one evaluation to another, leading to uncertainty in the estimation of such values. Given the elitist nature of QD algorithms, they commonly end up with many degenerate solutions in such noisy settings. In this work, we introduce Archive Reproducibility Improvement Algorithm (ARIA); a plug-and-play approach that improves the reproducibility of the solutions present in an archive. We propose it as a separate optimization module, relying on natural evolution strategies, that can be executed on top of any QD algorithm. Our module mutates solutions to (1) optimize their probability of belonging to their niche, and (2) maximize their fitness. The performance of our method is evaluated on various tasks, including a classical optimization problem and two high-dimensional control tasks in simulated robotic environments. We show that our algorithm enhances the quality and descriptor space coverage of any given archive by at least 50%. | [
23261,
37567
] | Train |
42,360 | 18 | Title: SerIOS: Enhancing Hardware Security in Integrated Optoelectronic Systems
Abstract: Silicon photonics (SiPh) has different applications, from enabling fast and high-bandwidth communication for high-performance computing systems to realizing energy-efficient optical computation for AI hardware accelerators. However, integrating SiPh with electronic sub-systems can introduce new security vulnerabilities that cannot be adequately addressed using existing hardware security solutions for electronic systems. This paper introduces SerIOS, the first framework aimed at enhancing hardware security in optoelectronic systems by leveraging the unique properties of optical lithography. SerIOS employs cryptographic keys generated based on imperfections in the optical lithography process and an online detection mechanism to detect attacks. Simulation and synthesis results demonstrate SerIOS's effectiveness in detecting and preventing attacks, with a small area footprint of less than 15% and a 100% detection rate across various attack scenarios and optoelectronic architectures, including photonic AI accelerators. | [] | Train |
42,361 | 24 | Title: Fascinating Supervisory Signals and Where to Find Them: Deep Anomaly Detection with Scale Learning
Abstract: Due to the unsupervised nature of anomaly detection, the key to fueling deep models is finding supervisory signals. Different from current reconstruction-guided generative models and transformation-based contrastive models, we devise novel data-driven supervision for tabular data by introducing a characteristic -- scale -- as data labels. By representing varied sub-vectors of data instances, we define scale as the relationship between the dimensionality of original sub-vectors and that of representations. Scales serve as labels attached to transformed representations, thus offering ample labeled data for neural network training. This paper further proposes a scale learning-based anomaly detection method. Supervised by the learning objective of scale distribution alignment, our approach learns the ranking of representations converted from varied subspaces of each data instance. Through this proxy task, our approach models inherent regularities and patterns within data, which well describes data "normality". Abnormal degrees of testing instances are obtained by measuring whether they fit these learned patterns. Extensive experiments show that our approach leads to significant improvement over state-of-the-art generative/contrastive anomaly detection methods. | [
4698
] | Train |
42,362 | 16 | Title: Diffusion-HPC: Generating Synthetic Images with Realistic Humans
Abstract: Recent text-to-image generative models have exhibited remarkable abilities in generating high-fidelity and photo-realistic images. However, despite the visually impressive results, these models often struggle to preserve plausible human structure in the generations. For this reason, while generative models have shown promising results in aiding downstream image recognition tasks by generating large volumes of synthetic data, they remain infeasible for improving downstream human pose perception and understanding. In this work, we propose Diffusion model with Human Pose Correction (Diffusion HPC), a text-conditioned method that generates photo-realistic images with plausible posed humans by injecting prior knowledge about human body structure. We show that Diffusion HPC effectively improves the realism of human generations. Furthermore, as the generations are accompanied by 3D meshes that serve as ground truths, Diffusion HPC's generated image-mesh pairs are well-suited for the downstream human mesh recovery task, where a shortage of 3D training data has long been an issue. | [
44593,
16103
] | Train |
42,363 | 23 | Title: Software Startups - A Research Agenda
Abstract: Software startup companies develop innovative, software-intensive products within limited timeframes and with few resources, searching for sustainable and scalable business models. Software startup ... | [
36975,
10714,
36531,
14135,
30906,
14267,
7357
] | Train |
42,364 | 16 | Title: Lightweight wood panel defect detection method incorporating attention mechanism and feature fusion network
Abstract: In recent years, deep learning has made significant progress in wood panel defect detection. However, there are still challenges such as low detection accuracy, slow detection speed, and difficulties in deploying embedded devices on wood panel surfaces. To overcome these issues, we propose a lightweight wood panel defect detection method called YOLOv5-LW, which incorporates attention mechanisms and a feature fusion network. Firstly, to enhance the detection capability of acceptable defects, we introduce the Multi-scale Bi-directional Feature Pyramid Network (MBiFPN) as a feature fusion network. The MBiFPN reduces feature loss, enriches local and detailed features, and improves the model's detection capability for acceptable defects. Secondly, to achieve a lightweight design, we reconstruct the ShuffleNetv2 network model as the backbone network. This reconstruction reduces the number of parameters and computational requirements while maintaining performance. We also introduce the Stem Block and Spatial Pyramid Pooling Fast (SPPF) models to compensate for any accuracy loss resulting from the lightweight design, ensuring the model's detection capabilities remain intact while being computationally efficient. Thirdly, we enhance the backbone network by incorporating Efficient Channel Attention (ECA), which improves the network's focus on key information relevant to defect detection. By attending to essential features, the model becomes more proficient in accurately identifying and localizing defects. We validate the proposed method using a self-developed wood panel defect dataset. The experimental results demonstrate the effectiveness of the improved YOLOv5-LW method. Compared to the original model, our approach achieves a 92.8% accuracy rate, reduces the number of parameters by 27.78%, compresses computational volume by 41.25%, and improves detection inference speed by 10.16%. | [] | Validation |
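Of the components above, Efficient Channel Attention (ECA) has a compact, commonly published form: global average pooling, a 1-D convolution across channels, and a sigmoid gate. A PyTorch sketch, with the kernel size chosen arbitrarily here:

```python
# Efficient Channel Attention block: pool to per-channel statistics, mix
# neighboring channels with a 1-D convolution, and gate the input.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                          # x: (batch, channels, H, W)
        y = x.mean(dim=(2, 3))                     # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # local cross-channel interaction
        return x * self.sigmoid(y)[:, :, None, None]

print(ECA()(torch.randn(2, 16, 8, 8)).shape)
```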
42,365 | 24 | Title: HypLL: The Hyperbolic Learning Library
Abstract: Deep learning in hyperbolic space is quickly gaining traction in the fields of machine learning, multimedia, and computer vision. Deep networks commonly operate in Euclidean space, implicitly assuming that data lies on regular grids. Recent advances have shown that hyperbolic geometry provides a viable alternative foundation for deep learning, especially when data is hierarchical in nature and when working with few embedding dimensions. Currently, however, no accessible open-source library exists for building hyperbolic network modules akin to well-known deep learning libraries. We present HypLL, the Hyperbolic Learning Library, to bring the progress on hyperbolic deep learning together. HypLL is built on top of PyTorch, with its design emphasizing ease of use, in order to attract a broad audience towards this new and open-ended research direction. The code is available at: https://github.com/maxvanspengler/hyperbolic_learning_library. | [] | Validation |
42,366 | 15 | Title: Optimized Real-Time Assembly in a RISC Simulator
Abstract: Simulators for the RISC-V instruction set architecture (ISA) are useful for teaching assembly language and modern CPU architecture concepts. The Assembly/Simulation Platform for Illustration of RISC-V in Education (ASPIRE) is an integrated RISC-V assembler and simulator used to illustrate these concepts and evaluate algorithms that generate machine language code. In this article, ASPIRE is introduced, selected features of the simulator that interactively explain the RISC-V ISA as teaching aids are presented, and two assembly algorithms are evaluated. Both assembly algorithms run in real time as code is being edited in the simulator. The optimized algorithm performs incremental assembly limited to only the portion of the program that has changed. Both algorithms are then evaluated based on overall run-time performance. | [] | Train |
42,367 | 27 | Title: Toward Fine Contact Interactions: Learning to Control Normal Contact Force with Limited Information
Abstract: Dexterous manipulation of objects through fine control of physical contacts is essential for many important tasks of daily living. A fundamental ability underlying fine contact control is compliant control, i.e., controlling the contact forces while moving. For robots, the most widely explored approaches heavily depend on models of manipulated objects and expensive sensors to gather contact location and force information needed for real-time control. The models are difficult to obtain, and the sensors are costly, hindering personal robots' adoption in our homes and businesses. This study performs model-free reinforcement learning of a normal contact force controller on a robotic manipulation system built with a low-cost, information-poor tactile sensor. Despite the limited sensing capability, our force controller can be combined with a motion controller to enable fine contact interactions during object manipulation. Promising results are demonstrated in non-prehensile, dexterous manipulation experiments. | [] | Test |
42,368 | 16 | Title: Shape Reconstruction from Thoracoscopic Images using Self-supervised Virtual Learning
Abstract: Intraoperative shape reconstruction of organs from endoscopic camera images is a complex yet indispensable technique for image-guided surgery. To address the uncertainty in reconstructing entire shapes from single-viewpoint occluded images, we propose a framework for generative virtual learning of shape reconstruction using image translation with common latent variables between simulated and real images. As it is difficult to prepare a sufficient amount of data to learn the relationship between endoscopic images and organ shapes, self-supervised virtual learning is performed using simulated images generated from statistical shape models. However, small differences between virtual and real images can degrade estimation performance even if the simulated images are regarded as equivalent by humans. To address this issue, a Variational Autoencoder is used to convert real and simulated images into identical synthetic images. In this study, we targeted the shape reconstruction of collapsed lungs from thoracoscopic images and confirmed that virtual learning could improve the similarity between real and simulated images. Furthermore, the shape reconstruction error was reduced by 16.9%. | [] | Train |
42,369 | 16 | Title: Hidden Knowledge: Mathematical Methods for the Extraction of the Fingerprint of Medieval Paper from Digital Images
Abstract: Medieval paper, a handmade product, is made with a mould which leaves an indelible imprint on the sheet of paper. This imprint includes chain lines, laid lines and watermarks which are often visible on the sheet. Extracting these features allows the identification of paper stock and gives information about chronology, localisation and movement of books and people. Most computational work for feature extraction of paper analysis has so far focused on radiography or transmitted light images. While these imaging methods provide clear visualisation for the features of interest, they are expensive and time-consuming in their acquisition and not feasible for smaller institutions. However, reflected light images of medieval paper manuscripts are abundant and possibly cheaper in their acquisition. In this paper, we propose algorithms to detect and extract the laid and chain lines from reflected light images. We tackle the main drawback of reflected light images, that is, the low contrast attenuation of lines and intensity jumps due to noise and degradation, by employing the spectral total variation decomposition and develop methods for subsequent line extraction. Our results clearly demonstrate the feasibility of using reflected light images in paper analysis. This work enables feature extraction for paper manuscripts that have otherwise not been analysed due to a lack of appropriate images. We also open the door for paper stock identification at scale. | [] | Train |
42,370 | 24 | Title: CoarsenConf: Equivariant Coarsening with Aggregated Attention for Molecular Conformer Generation
Abstract: Molecular conformer generation (MCG) is an important task in cheminformatics and drug discovery. The ability to efficiently generate low-energy 3D structures can avoid expensive quantum mechanical simulations, leading to accelerated screenings and enhanced structural exploration. Several generative models have been developed for MCG, but many struggle to consistently produce high-quality conformers. To address these issues, we introduce CoarsenConf, which coarse-grains molecular graphs based on torsional angles and integrates them into an SE(3)-equivariant hierarchical variational autoencoder. Through equivariant coarse-graining, we aggregate the fine-grained atomic coordinates of subgraphs connected via rotatable bonds, creating a variable-length coarse-grained latent representation. Our model uses a novel aggregated attention mechanism to restore fine-grained coordinates from the coarse-grained latent representation, enabling efficient autoregressive generation of large molecules. Furthermore, our work expands current conformer generation benchmarks and introduces new metrics to better evaluate the quality and viability of generated conformers. We demonstrate that CoarsenConf generates more accurate conformer ensembles compared to prior generative models and traditional cheminformatics methods. | [22897, 15045] | Train |
42,371 | 34 | Title: Approximation Algorithms for Directed Weighted Spanners
Abstract: In the pairwise weighted spanner problem, the input consists of an $n$-vertex-directed graph, where each edge is assigned a cost and a length. Given $k$ vertex pairs and a distance constraint for each pair, the goal is to find a minimum-cost subgraph in which the distance constraints are satisfied. This formulation captures many well-studied connectivity problems, including spanners, distance preservers, and Steiner forests. In the offline setting, we show: 1. An $\tilde{O}(n^{4/5 + \epsilon})$-approximation algorithm for pairwise weighted spanners. When the edges have unit costs and lengths, the best previous algorithm gives an $\tilde{O}(n^{3/5 + \epsilon})$-approximation, due to Chlamt\'a\v{c}, Dinitz, Kortsarz, and Laekhanukit (TALG, 2020). 2. An $\tilde{O}(n^{1/2+\epsilon})$-approximation algorithm for all-pair weighted distance preservers. When the edges have unit costs and arbitrary lengths, the best previous algorithm gives an $\tilde{O}(n^{1/2})$-approximation for all-pair spanners, due to Berman, Bhattacharyya, Makarychev, Raskhodnikova, and Yaroslavtsev (Information and Computation, 2013). In the online setting, we show: 1. An $\tilde{O}(k^{1/2 + \epsilon})$-competitive algorithm for pairwise weighted spanners. The state-of-the-art results are $\tilde{O}(n^{4/5})$-competitive when edges have unit costs and arbitrary lengths, and $\min\{\tilde{O}(k^{1/2 + \epsilon}), \tilde{O}(n^{2/3 + \epsilon})\}$-competitive when edges have unit costs and lengths, due to Grigorescu, Lin, and Quanrud (APPROX, 2021). 2. An $\tilde{O}(k^{\epsilon})$-competitive algorithm for single-source weighted spanners. Without distance constraints, this problem is equivalent to the directed Steiner tree problem. The best previous algorithm for online directed Steiner trees is $\tilde{O}(k^{\epsilon})$-competitive, due to Chakrabarty, Ene, Krishnaswamy, and Panigrahi (SICOMP, 2018). | [] | Train |
42,372 | 24 | Title: Performance Evaluation of Regression Models in Predicting the Cost of Medical Insurance
Abstract: The study aimed to evaluate the performance of regression models in predicting the cost of medical insurance. Three machine learning regression models, namely Linear Regression, Gradient Boosting, and Support Vector Machine, were used. Performance was evaluated using the metrics RMSE (root mean square error), R2 (R-squared), and K-fold cross-validation. The study also sought to pinpoint the feature that is most important in predicting the cost of medical insurance. The study is anchored in the knowledge discovery in databases (KDD) process, which refers to the overall process of discovering useful knowledge from data. The performance evaluation results reveal that, among the three regression models, Gradient Boosting received the highest R2 (0.892) and the lowest RMSE (1336.594). Furthermore, the 10-fold cross-validation weighted mean findings are not significantly different from the R2 results of the three regression models. In addition, exploratory data analysis (EDA) using box plots of descriptive statistics observed that, for the charges and smoker features, the median of one group lies outside the box of the other group, so there is a difference between the two groups. The study concludes that Gradient Boosting performs best among the three regression models and that K-fold cross-validation rates all three models as good. Moreover, the EDA box plots indicate that the highest charges are driven by the smoker feature. (A code sketch of this evaluation protocol appears after the table.) | [] | Train |
42,373 | 33 | Title: On the Succinctness of Good-for-MDPs Automata
Abstract: Good-for-MDPs and good-for-games automata are two recent classes of nondeterministic automata that reside between general nondeterministic and deterministic automata. Deterministic automata are good-for-games, and good-for-games automata are good-for-MDPs, but not vice versa. One of the questions this raises is how these classes relate in terms of succinctness. Good-for-games automata are known to be exponentially more succinct than deterministic automata, but the gap between good-for-MDPs and good-for-games automata, as well as the gap between ordinary nondeterministic automata and those that are good-for-MDPs, have been open. We establish that these gaps are exponential, and sharpen this result by showing that the latter gap remains exponential when restricting the nondeterministic automata to separating safety or unambiguous reachability automata. | [] | Validation |
42,374 | 28 | Title: Improved List Decoding for Polar-Coded Probabilistic Shaping
Abstract: A modified successive cancellation list (SCL) decoder is proposed for polar-coded probabilistic shaping. The decoder exploits the deterministic encoding rule for shaping bits to rule out candidate code words that the encoder would not generate. This provides error detection and decreases error rates compared to standard SCL decoding while at the same time reducing the length of the outer cyclic redundancy check code. | [] | Validation |
42,375 | 16 | Title: PDL: Regularizing Multiple Instance Learning with Progressive Dropout Layers
Abstract: Multiple instance learning (MIL) is a weakly supervised learning approach that seeks to assign binary class labels to collections of instances known as bags. However, due to their weakly supervised nature, MIL methods are susceptible to overfitting and struggle to develop comprehensive representations of target instances. While regularization typically combats overfitting effectively, its integration with MIL models has frequently been overlooked in prior studies. Meanwhile, current regularization methods for MIL have shown limitations in their capacity to uncover a diverse array of representations. In this study, we delve into the realm of regularization within the MIL model, presenting a novel approach in the form of a Progressive Dropout Layer (PDL). We aim not only to address overfitting but also to empower the MIL model to uncover intricate and impactful feature representations. The proposed method is orthogonal to existing MIL methods and can be easily integrated into them to boost performance. Our extensive evaluation across a range of MIL benchmark datasets demonstrates that incorporating the PDL into multiple MIL methods not only elevates their classification performance but also augments their potential for weakly supervised feature localization. | [] | Validation |
42,376 | 4 | Title: Confidential Truth Finding with Multi-Party Computation (Extended Version)
Abstract: Federated knowledge discovery and data mining are challenged to assess the trustworthiness of data originating from autonomous sources while protecting confidentiality and privacy. Truth-finding algorithms help corroborate data from disagreeing sources. For each query it receives, a truth-finding algorithm predicts a truth value of the answer, possibly updating the trustworthiness factor of each source. Few works, however, address the issues of confidentiality and privacy. We devise and present a secure secret-sharing-based multi-party computation protocol for the pseudo-equality tests that are used in truth-finding algorithms to compute additions that depend on a condition. The protocol guarantees the confidentiality of the data and the privacy of the sources. We also present variants of truth-finding algorithms that make the computation faster when executed using secure multi-party computation. We empirically evaluate the performance of the proposed protocol on two state-of-the-art truth-finding algorithms, Cosine and 3-Estimates, and compare them with the baseline plain algorithms. The results confirm that the secret-sharing-based secure multi-party algorithms are as accurate as the corresponding baselines, except for the proposed numerical approximations, which significantly reduce the efficiency loss incurred. | [] | Train |
42,377 | 4 | Title: A First Look at Digital Rights Management Systems for Secure Mobile Content Delivery
Abstract: Digital rights management (DRM) solutions aim to prevent the copying or distribution of copyrighted material. On mobile devices, a variety of DRM technologies have become widely deployed. However, a detailed security study comparing their internal workings, and their strengths and weaknesses, remains missing in the existing literature. In this paper, we present the first detailed security analysis of mobile DRM systems, addressing the modern paradigm of cloud-based content delivery followed by major platforms, such as Netflix, Disney+, and Amazon Prime. We extensively analyse the security of three widely used DRM solutions -- Google Widevine, Apple FairPlay, and Microsoft PlayReady -- deployed on billions of devices worldwide. We then consolidate their features and capabilities, deriving common features and security properties for their evaluation. Furthermore, we identify some design-level shortcomings that render them vulnerable to emerging attacks within the state of the art, including micro-architectural side-channel vulnerabilities and an absence of post-quantum security. Lastly, we propose mitigations and suggest future directions of research. | [] | Validation |
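The evaluation protocol summarized in record 42,372 above (three regression models compared on RMSE, R2, and 10-fold cross-validation) is concrete enough to illustrate in code. The following is a minimal sketch, not code from that paper: the file name `insurance.csv`, the column names, and the feature subset are assumptions based on the typical medical-insurance-cost dataset, and feature scaling is omitted for brevity.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical schema: age, bmi, smoker (yes/no), charges (target).
df = pd.read_csv("insurance.csv")
df["smoker"] = (df["smoker"] == "yes").astype(int)  # encode the binary feature
X = df[["age", "bmi", "smoker"]]
y = df["charges"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Linear Regression": LinearRegression(),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
    "Support Vector Machine": SVR(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    rmse = np.sqrt(mean_squared_error(y_test, pred))  # held-out RMSE
    r2 = r2_score(y_test, pred)                       # held-out R2
    cv_r2 = cross_val_score(model, X, y, cv=10, scoring="r2").mean()  # 10-fold CV
    print(f"{name}: RMSE={rmse:.1f}  R2={r2:.3f}  10-fold CV R2={cv_r2:.3f}")
```

Under these assumptions, Gradient Boosting would typically post the lowest RMSE and highest R2 of the three, which is consistent with the figures reported in that abstract.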