| node_id (int64, 0-76.9k) | label (int64, 0-39) | text (string, length 13-124k) | neighbors (list, length 0-3.32k) | mask (4 classes) |
|---|---|---|---|---|
44,378 | 30 | Title: Joint Modelling of Spoken Language Understanding Tasks with Integrated Dialog History
Abstract: Most human interactions occur in the form of spoken conversations where the semantic meaning of a given utterance depends on the context. Each utterance in spoken conversation can be represented by many semantic and speaker attributes, and there has been an interest in building Spoken Language Understanding (SLU) systems for automatically predicting these attributes. Recent work has shown that incorporating dialogue history can help advance SLU performance. However, separate models are used for each SLU task, leading to an increase in inference time and computation cost. Motivated by this, we aim to ask: can we jointly model all the SLU tasks while incorporating context to facilitate low-latency and lightweight inference? To answer this, we propose a novel model architecture that learns dialog context to jointly predict the intent, dialog act, speaker role, and emotion for the spoken utterance. Note that our joint prediction is based on an autoregressive model and we need to decide the prediction order of dialog attributes, which is not trivial. To mitigate the issue, we also propose an order agnostic training method. Our experiments show that our joint model achieves similar results to task-specific classifiers and can effectively integrate dialog context to further improve the SLU performance. | [] | Train |
44,379 | 36 | Title: Algorithmic Solutions for Maximizing Shareable Costs
Abstract: This paper addresses the optimization problem to maximize the total costs that can be shared among a group of agents, while maintaining stability in the sense of the core constraints of a cooperative transferable utility game, or TU game. When maximizing total shareable costs, the cost shares must satisfy all constraints that define the core of a TU game, except for being budget balanced. The paper first gives a fairly complete picture of the computational complexity of this optimization problem, its relation to optimization over the core itself, and its equivalence to other, minimal core relaxations that have been proposed earlier. We then address minimum cost spanning tree (MST) games as an example of a class of cost sharing games with non-empty core. While submodular cost functions yield efficient algorithms to maximize shareable costs, MST games have cost functions that are subadditive, but generally not submodular. Nevertheless, it is well known that cost shares in the core of MST games can be found efficiently. In contrast, we show that the maximization of shareable costs is NP-hard for MST games and derive a 2-approximation algorithm. Our work opens several directions for future research. | [] | Train
44,380 | 23 | Title: Vulnerability Propagation in Package Managers Used in iOS Development
Abstract: Although using third-party libraries is common practice when writing software, vulnerabilities may be found even in well-known libraries. Detected vulnerabilities are often fixed quickly in the library code. The easiest way to include these fixes in a dependent software application is to update the used library version. Package managers provide automated solutions for updating library dependencies, which make this process relatively easy. However, library dependencies can have dependencies on other libraries, resulting in a dependency network with several levels of indirection. Assessing vulnerability risks induced by dependency networks is a non-trivial task for software developers. The library dependency network in the Swift ecosystem encompasses libraries from CocoaPods, Carthage and Swift Package Manager. These three package managers are used while developing, for example, iOS or Mac OS applications in Swift or Objective-C. We analysed how vulnerabilities propagate in the library dependency network of the Swift ecosystem, how vulnerable dependencies could be fixed via dependency upgrades, and if third party vulnerability analysis could be made more precise given public information on these vulnerabilities. We found that only 5.9% of connected libraries had a direct or transitive dependency to a vulnerable library. Although we found that most libraries with publicly reported vulnerabilities are written in C, the highest impact of publicly reported vulnerabilities originated from libraries written in native iOS languages, i.e., Objective-C and Swift. We found that around 30% of vulnerable dependencies could have been fixed via upgrading the library dependency. In the case of critical vulnerabilities and the latest library versions, over 70% of vulnerable dependencies would have been fixed via a dependency upgrade. Lastly, we checked whether the analysis of vulnerable dependency use could be refined using publicly available information on the code location (method or class) of a reported vulnerability. We found that such information is not available most of the time. | [] | Train
44,381 | 4 | Title: DIVAS: An LLM-based End-to-End Framework for SoC Security Analysis and Policy-based Protection
Abstract: Securing critical assets in a bus-based System-On-Chip (SoC) is imperative to mitigate potential vulnerabilities and prevent unauthorized access, ensuring the integrity, availability, and confidentiality of the system. Ensuring security throughout the SoC design process is a formidable task owing to the inherent intricacies in SoC designs and the dispersion of assets across diverse IPs. Large Language Models (LLMs), exemplified by ChatGPT (OpenAI) and BARD (Google), have showcased remarkable proficiency across various domains, including security vulnerability detection and prevention in SoC designs. In this work, we propose DIVAS, a novel framework that leverages the knowledge base of LLMs to identify security vulnerabilities from user-defined SoC specifications, map them to the relevant Common Weakness Enumerations (CWEs), followed by the generation of equivalent assertions, and employ security measures through enforcement of security policies. The proposed framework is implemented using multiple ChatGPT and BARD models, and their performance was analyzed while generating relevant CWEs from the SoC specifications provided. The experimental results obtained from open-source SoC benchmarks demonstrate the efficacy of our proposed framework. | [
17800, 26766, 29781, 13838] | Test
44,382 | 24 | Title: Operator theory, kernels, and Feedforward Neural Networks
Abstract: In this paper we show how specific families of positive definite kernels serve as powerful tools in analyses of iteration algorithms for multiple layer feedforward Neural Network models. Our focus is on particular kernels that adapt well to learning algorithms for data-sets/features which display intrinsic self-similarities at feedforward iterations of scaling. | [] | Train |
44,383 | 4 | Title: LemonLDAP::NG A Full AAA Free Open Source WebSSO Solution
Abstract: Nowadays, security is becoming a major issue and concern. More and more organizations like hospitals, metropolises or banks are under cyberattacks and have to improve their network infrastructure security. The first prerequisites are to authenticate users, to provide identity and to grant just the needed and useful accesses. These requirements can be solved by implementing a Single Sign-On (SSO) solution. It is an authentication scheme that permits a user to log in with a single identity to any of several related, yet independent, systems. It allows users to log in once and to access services without authenticating again. SSO solutions are classified depending on Authentication, Authorization, and Accounting features. The ’AAA’ acronym defines a framework for intelligently controlling access to resources, enforcing security policies, auditing usage, and providing the information necessary to bill for services. These combined processes are considered important for effective network management and cybersecurity. LemonLDAP::NG (LL::NG) is a full AAA WebSSO solution. It implements all standard authentication and identity federation (IdF) protocols. LL::NG’s main advantages compared to other products are its plug-in engine and its advanced handler-based protection mechanism that can be employed to protect Server2Server exchanges or to offer the SSO as a Service, a solution to implement a full DevOps architecture. LL::NG is a community and professional project mainly employed by the French government to secure the IT infrastructures of the Police, Finance and Justice Ministries and of a French mobile operator since 2010. But for several years, contributions have come from all around the world and LL::NG is becoming more and more popular. | [] | Test
44,384 | 16 | Title: METransformer: Radiology Report Generation by Transformer with Multiple Learnable Expert Tokens
Abstract: In clinical scenarios, multi-specialist consultation could significantly benefit the diagnosis, especially for intricate cases. This inspires us to explore a “multi-expert joint diagnosis” mechanism to upgrade the existing “single expert” framework commonly seen in the current literature. To this end, we propose METransformer, a method to realize this idea with a transformer-based backbone. The key design of our method is the introduction of multiple learnable “expert” tokens into both the transformer encoder and decoder. In the encoder, each expert token interacts with both vision tokens and other expert tokens to learn to attend to different image regions for image representation. These expert tokens are encouraged to capture complementary information by an orthogonal loss that minimizes their overlap. In the decoder, each attended expert token guides the cross-attention between input words and visual tokens, thus influencing the generated report. A metrics-based expert voting strategy is further developed to generate the final report. Through the multi-expert concept, our model enjoys the merits of an ensemble-based approach, but in a manner that is computationally more efficient and supports more sophisticated interactions among experts. Experimental results demonstrate the promising performance of our proposed model on two widely used benchmarks. Last but not least, the framework-level innovation makes our work ready to incorporate advances on existing “single-expert” models to further improve its performance. | [
18368, 23618] | Train
44,385 | 13 | Title: Runtime Analyses of Multi-Objective Evolutionary Algorithms in the Presence of Noise
Abstract: In single-objective optimization, it is well known that evolutionary algorithms also without further adjustments can stand a certain amount of noise in the evaluation of the objective function. In contrast, this question is not at all understood for multi-objective optimization.
In this work, we conduct the first mathematical runtime analysis of a simple multi-objective evolutionary algorithm (MOEA) on a classic benchmark in the presence of noise in the objective function.
We prove that when bit-wise prior noise with rate p <= alpha/n, alpha a suitable constant, is present, the simple evolutionary multi-objective optimizer (SEMO) without any adjustments to cope with noise finds the Pareto front of the OneMinMax benchmark in time O(n^2 log n), just as in the case without noise. Given that the problem here is to arrive at a population consisting of n+1 individuals witnessing the Pareto front, this is a surprisingly strong robustness to noise (comparably simple evolutionary algorithms cannot optimize the single-objective OneMax problem in polynomial time when p = omega(log(n)/n)). Our proofs suggest that the strong robustness of the MOEA stems from its implicit diversity mechanism designed to enable it to compute a population covering the whole Pareto front.
Interestingly this result only holds when the objective value of a solution is determined only once and the algorithm from that point on works with this, possibly noisy, objective value. We prove that when all solutions are reevaluated in each iteration, then any noise rate p = omega(log(n)/n^2) leads to a super-polynomial runtime. This is very different from single-objective optimization, where it is generally preferred to reevaluate solutions whenever their fitness is important and where examples are known such that not reevaluating solutions can lead to catastrophic performance losses. | [
28340, 39607] | Train
44,386 | 24 | Title: Dynamic Early Exiting Predictive Coding Neural Networks
Abstract: Internet of Things (IoT) sensors are nowadays heavily utilized in various real-world applications ranging from wearables to smart buildings passing by agrotechnology and health monitoring. With the huge amounts of data generated by these tiny devices, Deep Learning (DL) models have been extensively used to enhance them with intelligent processing. However, with the urge for smaller and more accurate devices, DL models became too heavy to deploy. It is thus necessary to incorporate the hardware's limited resources in the design process. Therefore, inspired by the human brain known for its efficiency and low power consumption, we propose a shallow bidirectional network based on predictive coding theory and dynamic early exiting for halting further computations when a performance threshold is surpassed. We achieve comparable accuracy to VGG-16 in image classification on CIFAR-10 with fewer parameters and less computational complexity. | [
27753] | Train
44,387 | 31 | Title: Cross-Element Combinatorial Selection for Multi-Element Creative in Display Advertising
Abstract: The effectiveness of ad creatives is greatly influenced by their visual appearance. Advertising platforms can generate ad creatives with different appearances by combining creative elements provided by advertisers. However, with the increasing number of ad creative elements, it becomes challenging to select a suitable combination from the countless possibilities. The industry's mainstream approach is to select individual creative elements independently, which often overlooks the importance of interaction between creative elements during the modeling process. In response, this paper proposes a Cross-Element Combinatorial Selection framework for multiple creative elements, termed CECS. In the encoder process, a cross-element interaction is adopted to dynamically adjust the expression of a single creative element based on the current candidate creatives. In the decoder process, the creative combination problem is transformed into a cascade selection problem of multiple creative elements. A pointer mechanism with a cascade design is used to model the associations among candidates. Comprehensive experiments on real-world datasets show that CECS achieved the SOTA score on offline metrics. Moreover, the CECS algorithm has been deployed in our industrial application, resulting in a significant 6.02% CTR and 10.37% GMV lift, which is beneficial to the business. | [
40743] | Validation
44,388 | 8 | Title: Network-Calculus Service Curves of the Interleaved Regulator
Abstract: The interleaved regulator (implemented by IEEE TSN Asynchronous Traffic Shaping) is used in time-sensitive networks for reshaping the flows with per-flow contracts. When applied to an aggregate of flows that come from a FIFO system, an interleaved regulator that reshapes the flows with their initial contracts does not increase the worst-case delay of the aggregate. This shaping-for-free property supports the computation of end-to-end latency bounds and the validation of the network's timing requirements. A common method to establish the properties of a network element is to obtain a network-calculus service-curve model. The existence of such a model for the interleaved regulator remains an open question. If a service-curve model were found for the interleaved regulator, then the analysis of this mechanism would no longer be limited to the situations where the shaping-for-free holds, which would widen its use in time-sensitive networks. In this paper, we investigate if network-calculus service curves can capture the behavior of the interleaved regulator. We find that an interleaved regulator placed outside of the shaping-for-free requirements (after a non-FIFO system) can yield unbounded latencies. Consequently, we prove that no network-calculus service curve exists to explain the interleaved regulator's behavior. It is still possible to find non-trivial service curves for the interleaved regulator. However, their long-term rate cannot be large enough to provide any guarantee (specifically, we prove that for the regulators that process at least four flows with the same contract, the long-term rate of any service curve is upper bounded by three times the rate of the per-flow contract). | [] | Train |
44,389 | 16 | Title: Audio-Visual Glance Network for Efficient Video Recognition
Abstract: Deep learning has made significant strides in video understanding tasks, but the computation required to classify lengthy and massive videos using clip-level video classifiers remains impractical and prohibitively expensive. To address this issue, we propose Audio-Visual Glance Network (AVGN), which leverages the commonly available audio and visual modalities to efficiently process the spatio-temporally important parts of a video. AVGN first divides the video into snippets of image-audio clip pairs and employs lightweight unimodal encoders to extract global visual features and audio features. To identify the important temporal segments, we use an Audio-Visual Temporal Saliency Transformer (AV-TeST) that estimates the saliency scores of each frame. To further increase efficiency in the spatial dimension, AVGN processes only the important patches instead of the whole images. We use an Audio-Enhanced Spatial Patch Attention (AESPA) module to produce a set of enhanced coarse visual features, which are fed to a policy network that produces the coordinates of the important patches. This approach enables us to focus only on the most important spatio-temporal parts of the video, leading to more efficient video recognition. Moreover, we incorporate various training techniques and multi-modal feature fusion to enhance the robustness and effectiveness of our AVGN. By combining these strategies, our AVGN sets new state-of-the-art performance on multiple video recognition benchmarks while achieving faster processing speed. | [] | Test
44,390 | 3 | Title: Socio-economic landscape of digital transformation & public NLP systems: A critical review
Abstract: The current wave of digital transformation has spurred digitisation reforms and has led to prodigious development of AI&NLP systems, with several of them entering the public domain. There is a perception that these systems have a non-trivial impact on society, but there is a dearth of literature in critical AI exploring what kinds of systems exist and how they operate. This paper constructs a broad taxonomy of NLP systems which impact or are impacted by the "public" and provides a concrete analysis via various instrumental and normative lenses on the socio-technical nature of these systems. This paper categorises thirty examples of these systems into seven families, namely: finance, customer service, policy making, education, healthcare, law, and security, based on their public use cases. It then critically analyses these applications, first the priors and assumptions they are based on, then their mechanisms, possible methods of data collection, the models and error functions used, etc. This paper further delves into exploring the socio-economic and political contexts in which these families of systems are generally used and their potential impact on the same, and the function creep of these systems. It provides commentary on the potential long-term downstream impact of these systems on communities which use them. Aside from providing a bird's-eye view of what exists, our in-depth analysis provides insights on what is lacking in the current discourse on NLP in particular and critical AI in general, proposes additions to the current framework of analysis, provides recommendations for future research directions, and highlights the importance of exploring the social in this socio-technical system. | [] | Validation
44,391 | 27 | Title: BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration
Abstract: Human bimanual manipulation can perform more complex tasks than a simple combination of two single arms, which is credited to the spatio-temporal coordination between the arms. However, the description of bimanual coordination is still an open topic in robotics. This makes it difficult to give an explainable coordination paradigm, let alone applied to robotics. In this work, we divide the main bimanual tasks in human daily activities into two types: leader-follower and synergistic coordination. Then we propose a relative parameterization method to learn these types of coordination from human demonstration. It represents coordination as Gaussian mixture models from bimanual demonstration to describe the change in the importance of coordination throughout the motions by probability. The learned coordinated representation can be generalized to new task parameters while ensuring spatio-temporal coordination. We demonstrate the method using synthetic motions and human demonstration data and deploy it to a humanoid robot to perform a generalized bimanual coordination motion. We believe that this easy-to-use bimanual learning from demonstration (LfD) method has the potential to be used as a data augmentation plugin for robot large manipulation model training. The corresponding codes are open-sourced in https://github.com/Skylark0924/Rofunc. | [
23220] | Test
44,392 | 27 | Title: Active Robot Vision for Distant Object Change Detection: A Lightweight Training Simulator Inspired by Multi-Armed Bandits
Abstract: In ground-view object change detection, the recently emerging map-less navigation has great potential as a means of navigating a robot to distantly detected objects and identifying their changing states (appear/disappear/no-change) with high resolution imagery. However, the brute-force naive action strategy of navigating to every distant object requires huge sense/plan/action costs proportional to the number of objects. In this work, we study this new problem of "Which distant objects should be prioritized for map-less navigation?" and, in order to speed up the R&D cycle, propose a highly-simplified approach that is easy to implement and easy to extend. In our approach, a new layer called map-based navigation is added on top of the map-less navigation, which constitutes a hierarchical planner. First, a dataset consisting of $N$ view sequences is acquired by a real robot via map-less navigation. Then, an environment simulator was built to simulate a simple action planning problem: "Which view sequence should the robot select next?". Then, a solver was built inspired by the analogy to the multi-armed bandit problem: "Which arm should the player select next?". Finally, the effectiveness of the proposed framework was verified using the semantically non-trivial scenario "sofa as bookshelf". | [] | Validation
44,393 | 10 | Title: Explaining CLIP through Co-Creative Drawings and Interaction
Abstract: This paper analyses a visual archive of drawings produced by an interactive robotic art installation where audience members narrated their dreams into a system powered by the CLIPdraw deep learning (DL) model that interpreted and transformed their dreams into images. The resulting archive of prompt-image pairs was examined and clustered based on concept representation accuracy. As a result of the analysis, the paper proposes four groupings for describing and explaining CLIP-generated results: clear concept, text-to-text as image, indeterminacy and confusion, and lost in translation. This article offers a glimpse into a collection of dreams interpreted, mediated and given form by Artificial Intelligence (AI), showcasing the oftentimes unexpected, visually compelling, or, indeed, dream-like output of the system, with the emphasis on processes and results of translations between languages, sign-systems and various modules of the installation. In the end, the paper argues that the proposed clusters support better understanding of the neural model. | [] | Train
44,394 | 16 | Title: AdaOPC: A Self-Adaptive Mask Optimization Framework For Real Design Patterns
Abstract: Optical proximity correction (OPC) is a widely-used resolution enhancement technique (RET) for printability optimization. Recently, rigorous numerical optimization and fast machine learning are the research focus of OPC in both academia and industry, each of which complements the other in terms of robustness or efficiency. We inspect the pattern distribution on a design layer and find that different sub-regions have different pattern complexity. Besides, we also find that many patterns repetitively appear in the design layout, and these patterns may possibly share optimized masks. We exploit these properties and propose a self-adaptive OPC framework to improve efficiency. Firstly we choose different OPC solvers adaptively for patterns of different complexity from an extensible solver pool to reach a speed/accuracy co-optimization. Apart from that, we prove the feasibility of reusing optimized masks for repeated patterns and hence, build a graph-based dynamic pattern library reusing stored masks to further speed up the OPC flow. Experimental results show that our framework achieves substantial improvement in both performance and efficiency. | [
24000, 24472] | Train
44,395 | 7 | Title: Finding the Optimum Design of Large Gas Engines Prechambers Using CFD and Bayesian Optimization
Abstract: The turbulent jet ignition concept using prechambers is a promising solution to achieve stable combustion at lean conditions in large gas engines, leading to high efficiency at low emission levels. Due to the wide range of design and operating parameters for large gas engine prechambers, the preferred method for evaluating different designs is computational fluid dynamics (CFD), as testing in test bed measurement campaigns is time-consuming and expensive. However, the significant computational time required for detailed CFD simulations due to the complexity of solving the underlying physics also limits its applicability. In optimization settings similar to the present case, i.e., where the evaluation of the objective function(s) is computationally costly, Bayesian optimization has largely replaced classical design-of-experiment. Thus, the present study deals with the computationally efficient Bayesian optimization of large gas engine prechambers design using CFD simulation. Reynolds-averaged-Navier-Stokes simulations are used to determine the target values as a function of the selected prechamber design parameters. The results indicate that the chosen strategy is effective to find a prechamber design that achieves the desired target values. | [] | Test |
44,396 | 16 | Title: Full or Weak Annotations? An Adaptive Strategy for Budget-Constrained Annotation Campaigns
Abstract: Annotating new datasets for machine learning tasks is tedious, time-consuming, and costly. For segmentation applications, the burden is particularly high as manual delineations of relevant image content are often extremely expensive or can only be done by experts with domain-specific knowledge. Thanks to developments in transfer learning and training with weak supervision, segmentation models can now also greatly benefit from annotations of different kinds. However, for any new domain application looking to use weak supervision, the dataset builder still needs to define a strategy to distribute full segmentation and other weak annotations. Doing so is challenging, however, as it is a priori unknown how to distribute an annotation budget for a given new dataset. To this end, we propose a novel approach to determine annotation strategies for segmentation datasets, whereby estimating what proportion of segmentation and classification annotations should be collected given a fixed budget. To do so, our method sequentially determines proportions of segmentation and classification annotations to collect for budget-fractions by modeling the expected improvement of the final segmentation model. We show in our experiments that our approach yields annotations that perform very close to the optimal for a number of different annotation budgets and datasets. | [] | Validation
44,397 | 23 | Title: Invited: Automated Code generation for Information Technology Tasks in YAML through Large Language Models
Abstract: The recent improvement in code generation capabilities due to the use of large language models has mainly benefited general purpose programming languages. Domain specific languages, such as the ones used for IT Automation, received far less attention, despite involving many active developers and being an essential component of modern cloud platforms. This work focuses on the generation of Ansible YAML, a widely used markup language for IT Automation. We present Ansible Wisdom, a natural-language to Ansible YAML code generation tool, aimed at improving IT automation productivity. Results show that Ansible Wisdom can accurately generate Ansible script from natural language prompts with performance comparable to or better than existing state-of-the-art code generation models. | [
13700, 33220] | Train
44,398 | 16 | Title: Learning Trustworthy Model from Noisy Labels based on Rough Set for Surface Defect Detection
Abstract: In surface defect detection, there are some suspicious regions that cannot be uniquely classified as abnormal or normal. The annotation of suspicious regions is easily affected by factors such as workers’ emotional fluctuations and judgment standards, resulting in noisy labels, which in turn leads to missing and false detections, and ultimately leads to inconsistent judgments of product quality. Unlike the usual noisy labels, the ones used for surface defect detection appear to be inconsistent rather than mislabeled. The noise occurs in almost every label and is difficult to correct or evaluate. In this paper, we propose a framework that learns trustworthy models from noisy labels for surface defect detection. First, to avoid the negative impact of noisy labels on the model, we represent the suspicious regions with consistent and precise elements at the pixel-level and redesign the loss function. Second, without changing the network structure or adding any extra labels, a pluggable spatially correlated Bayesian module is proposed. Finally, the defect discrimination confidence is proposed to measure the uncertainty, with which anomalies can be identified as defects. Our results indicate not only the effectiveness of the proposed method in learning from noisy labels, but also its robustness and real-time performance. | [] | Train
44,399 | 16 | Title: Semi-supervised Contrastive Regression for Estimation of Eye Gaze
Abstract: With the escalating demand for human-machine interfaces in intelligent systems, the development of gaze-controlled systems has become a necessity. Gaze, being a non-intrusive form of human interaction, is one of the best-suited approaches. Appearance-based deep learning models are the most widely used for gaze estimation. However, the performance of these models is heavily influenced by the size of the labeled gaze dataset, which in effect limits their generalization performance. This paper aims to develop a semi-supervised contrastive learning framework for estimation of gaze direction. With a small labeled gaze dataset, the framework is able to find a generalized solution even for unseen face images. In this paper, we have proposed a new contrastive loss paradigm that maximizes the similarity agreement between similar images and at the same time reduces the redundancy in embedding representations. Our contrastive regression framework shows good performance in comparison to several state-of-the-art contrastive learning techniques used for gaze estimation. | [] | Test
44,400 | 23 | Title: Is It Enough to Recommend Tasks to Newcomers? Understanding Mentoring on Good First Issues
Abstract: Newcomers are critical for the success and continuity of open source software (OSS) projects. To attract newcomers and facilitate their onboarding, many OSS projects recommend tasks for newcomers, such as good first issues (GFIs). Previous studies have preliminarily investigated the effects of GFIs and techniques to identify suitable GFIs. However, it is still unclear whether just recommending tasks is enough and how significant mentoring is for newcomers. To better understand mentoring in OSS communities, we analyze the resolution process of 48,402 GFIs from 964 repositories through a mixed-methods approach. We investigate the extent, the mentorship structures, the discussed topics, and the relevance of expert involvement. We find that ~70% of GFIs have expert participation, with each GFI usually having one expert who makes two comments. Half of GFIs will receive their first expert comment within 8.5 hours after a newcomer comment. Through analysis of the collaboration networks of newcomers and experts, we observe that community mentorship presents four types of structure: centralized mentoring, decentralized mentoring, collaborative mentoring, and distributed mentoring. As for discussed topics, we identify 14 newcomer challenges and 18 expert mentoring content. By fitting generalized linear models, we find that expert involvement positively correlates with newcomers' successful contributions but negatively correlates with newcomers' retention. Our study manifests the status and significance of mentoring in OSS projects, which provides rich practical implications for optimizing the mentoring process and helping newcomers contribute smoothly and successfully. | [
33773, 31654] | Train
44,401 | 20 | Title: Dual-Quaternion Interpolation
Abstract: Transformations in the field of computer graphics and geometry are one of the most important concepts for efficient manipulation and control of objects in 2-dimensional and 3-dimensional space. Transformations take many forms, each with its advantages and disadvantages. A particularly powerful tool for representing transforms in a unified form is the dual-quaternion. A benefit of this unified form is its interpolation properties, which address a range of limitations (a compact form that allows the rotational and translational components to be coupled). In this article, we examine various dual-quaternion interpolation options that achieve different trade-offs between computational cost, aesthetic factors and coupling dependency. Surprisingly, despite dual-quaternions being a common tool in graphics libraries, there are few published details on their interpolation. Here we attempt to explain the interpolation concept, elaborating on the underpinning theories and presenting bespoke modifications for added control. | [] | Validation
44,402 | 24 | Title: ZeroQuant-FP: A Leap Forward in LLMs Post-Training W4A8 Quantization Using Floating-Point Formats
Abstract: In the complex domain of large language models (LLMs), striking a balance between computational efficiency and maintaining model quality is a formidable challenge. Navigating the inherent limitations of uniform quantization, particularly when dealing with outliers, and motivated by the launch of NVIDIA's H100 hardware, this study delves into the viability of floating-point (FP) quantization, particularly focusing on FP8 and FP4, as a potential solution. Our comprehensive investigation reveals that for LLMs, FP8 activation consistently outshines its integer (INT8) equivalent, with the performance edge becoming more noticeable in models possessing parameters beyond one billion. For weight quantization, our findings indicate that FP4 exhibits comparable, if not superior, performance to INT4, simplifying deployment on FP-supported hardware like H100. To mitigate the overhead from precision alignment caused by the disparity between weights and activations, we propose two scaling constraints for weight quantization that negligibly impact the performance compared to the standard W4A8 model. We additionally enhance our quantization methods by integrating the Low Rank Compensation (LoRC) strategy, yielding improvements especially in smaller models. The results of our investigation emphasize the immense potential of FP quantization for LLMs, paving the way for high-efficiency deployment in resource-limited settings. | [
16866, 6979, 13700, 21157, 5649, 1234, 20467, 36887, 1141, 5815] | Train
44,403 | 24 | Title: Decentralized Randomly Distributed Multi-agent Multi-armed Bandit with Heterogeneous Rewards
Abstract: We study a decentralized multi-agent multi-armed bandit problem in which multiple clients are connected by time dependent random graphs provided by an environment. The reward distributions of each arm vary across clients and rewards are generated independently over time by an environment based on distributions that include both sub-exponential and sub-gaussian distributions. Each client pulls an arm and communicates with neighbors based on the graph provided by the environment. The goal is to minimize the overall regret of the entire system through collaborations. To this end, we introduce a novel algorithmic framework, which first provides robust simulation methods for generating random graphs using rapidly mixing Markov chains or the random graph model, and then combines an averaging-based consensus approach with a newly proposed weighting technique and the upper confidence bound to deliver a UCB-type solution. Our algorithms account for the randomness in the graphs, removing the conventional doubly stochasticity assumption, and only require the knowledge of the number of clients at initialization. We derive optimal instance-dependent regret upper bounds of order $\log{T}$ in both sub-gaussian and sub-exponential environments, and a nearly optimal mean-gap independent regret upper bound of order $\sqrt{T}\log T$ up to a $\log T$ factor. Importantly, our regret bounds hold with high probability and capture graph randomness, whereas prior works consider expected regret under assumptions and require more stringent reward distributions. | [
17089] | Train
44,404 | 23 | Title: Characterizing Bugs in Python and R Data Analytics Programs
Abstract: R and Python are among the most popular languages used in many critical data analytics tasks. However, we still do not fully understand the capabilities of these two languages w.r.t. bugs encountered in data analytics tasks. What types of bugs are common? What are the main root causes? What is the relation between bugs and root causes? How to mitigate these bugs? We present a comprehensive study of 5,068 Stack Overflow posts, 1,800 bug fix commits from GitHub repositories, and several GitHub issues of the most used libraries to understand bugs in R and Python. Our key findings include: while both R and Python have bugs due to inexperience with data analysis, Python sees significantly more data preprocessing bugs than R. Developers experience significantly more data flow bugs in R because intermediate results are often implicit. We also found that changes and bugs in packages and libraries cause more bugs in R than in Python, while package or library misselection and conflicts cause more bugs in Python than in R. While R has a slightly higher readability barrier for data analysts, the statistical power of R leads to a smaller number of performance bugs. In terms of data visualization, R packages have significantly more bugs than Python libraries. We also identified a strong correlation between comparable packages in R and Python despite their linguistic and methodological differences. Lastly, we contribute a large dataset of manually verified R and Python bugs. | [] | Train
44,405 | 28 | Title: Efficient Algorithms for Constructing Minimum-Weight Codewords in Some Extended Binary BCH Codes
Abstract: We present $O(m^3)$ algorithms for specifying the support of minimum-weight words of extended binary BCH codes of length $n=2^m$ and designed distance $d(m,s,i):=2^{m-1-s}-2^{m-1-i-s}$ for some values of $m,i,s$, where $m$ may grow to infinity. The support is specified as the sum of two sets: a set of $2^{2i-1}-2^{i-1}$ elements, and a subspace of dimension $m-2i-s$, specified by a basis. In some detail, for designed distance $6\cdot 2^j$, we have a deterministic algorithm for even $m\geq 4$, and a probabilistic algorithm with success probability $1-O(2^{-m})$ for odd $m>4$. For designed distance $28\cdot 2^j$, we have a probabilistic algorithm with success probability $\geq 1/3-O(2^{-m/2})$ for even $m\geq 6$. Finally, for designed distance $120\cdot 2^j$, we have a deterministic algorithm for $m\geq 8$ divisible by $4$. We also present a construction via Gold functions when $2i|m$. Our construction builds on results of Kasami and Lin (IEEE T-IT, 1972), who proved that for extended binary BCH codes of designed distance $d(m,s,i)$, the minimum distance equals the designed distance. Their proof makes use of a non-constructive result of Berlekamp (Inform. Contrl., 1970), and a constructive "down-conversion theorem" that converts some words in BCH codes to lower-weight words in BCH codes of lower designed distance. Our main contribution is in replacing the non-constructive argument of Berlekamp by a low-complexity algorithm. In one aspect, we extend the results of Grigorescu and Kaufman (IEEE T-IT, 2012), who presented explicit minimum-weight words for designed distance $6$ (and hence also for designed distance $6\cdot 2^j$, by a well-known "up-conversion theorem"), as we cover more cases of the minimum distance. However, the minimum-weight words we construct are not affine generators for designed distance $>6$. | [] | Test
44,406 | 24 | Title: Graph Exploration for Effective Multi-agent Q-Learning
Abstract: This paper proposes an exploration technique for multi-agent reinforcement learning (MARL) with graph-based communication among agents. We assume the individual rewards received by the agents are independent of the actions by the other agents, while their policies are coupled. In the proposed framework, neighbouring agents collaborate to estimate the uncertainty about the state-action space in order to execute more efficient explorative behaviour. Different from existing works, the proposed algorithm does not require counting mechanisms and can be applied to continuous-state environments without requiring complex conversion techniques. Moreover, the proposed scheme allows agents to communicate in a fully decentralized manner with minimal information exchange. And for continuous-state scenarios, each agent needs to exchange only a single parameter vector. The performance of the algorithm is verified with theoretical results for discrete-state scenarios and with experiments for continuous ones. | [] | Train |
44,407 | 16 | Title: Rail Detection: An Efficient Row-based Network and a New Benchmark
Abstract: Rail detection, essential for railroad anomaly detection, aims to identify the railroad region in video frames. Although various studies on rail detection exist, neither an open benchmark nor a high-speed network is available in the community, making algorithm comparison and development difficult. Inspired by the growth of lane detection, we propose a rail database and a row-based rail detection method. In detail, we make several contributions: (i) We present a real-world railway dataset, Rail-DB, with 7432 pairs of images and annotations. The images are collected from different situations in lighting, road structures, and views. The rails are labeled with polylines, and the images are categorized into nine scenes. The Rail-DB is expected to facilitate the improvement of rail detection algorithms. (ii) We present an efficient row-based rail detection method, Rail-Net, containing a lightweight convolutional backbone and an anchor classifier. Specifically, we formulate the process of rail detection as a row-based selecting problem. This strategy reduces the computational cost compared to alternative segmentation methods. (iii) We evaluate the Rail-Net on Rail-DB with extensive experiments, including cross-scene settings and network backbones ranging from ResNet to Vision Transformers. Our method achieves promising performance in terms of both speed and accuracy. Notably, a lightweight version could achieve 92.77% accuracy and 312 frames per second. The Rail-Net outperforms the traditional method by 50.65% and the segmentation one by 5.86%. The database and code are available at: https://github.com/Sampson-Lee/Rail-Detection. | [
9409] | Train
44,408 | 24 | Title: SeFNet: Bridging Tabular Datasets with Semantic Feature Nets
Abstract: Machine learning applications cover a wide range of predictive tasks in which tabular datasets play a significant role. However, although they often address similar problems, tabular datasets are typically treated as standalone tasks. The possibilities of using previously solved problems are limited due to the lack of structured contextual information about their features and the lack of understanding of the relations between them. To overcome this limitation, we propose a new approach called Semantic Feature Net (SeFNet), capturing the semantic meaning of the analyzed tabular features. By leveraging existing ontologies and domain knowledge, SeFNet opens up new opportunities for sharing insights between diverse predictive tasks. One such opportunity is the Dataset Ontology-based Semantic Similarity (DOSS) measure, which quantifies the similarity between datasets using relations across their features. In this paper, we present an example of SeFNet prepared for a collection of predictive tasks in healthcare, with the features' relations derived from the SNOMED-CT ontology. The proposed SeFNet framework and the accompanying DOSS measure address the issue of limited contextual information in tabular datasets. By incorporating domain knowledge and establishing semantic relations between features, we enhance the potential for meta-learning and enable valuable insights to be shared across different predictive tasks. | [
24473] | Train
44,409 | 10 | Title: Decision-Making Under Uncertainty: Beyond Probabilities
Abstract: This position paper reflects on the state-of-the-art in decision-making under uncertainty. A classical assumption is that probabilities can sufficiently capture all uncertainty in a system. In this paper, the focus is on the uncertainty that goes beyond this classical interpretation, particularly by employing a clear distinction between aleatoric and epistemic uncertainty. The paper features an overview of Markov decision processes (MDPs) and extensions to account for partial observability and adversarial behavior. These models sufficiently capture aleatoric uncertainty but fail to account for epistemic uncertainty robustly. Consequently, we present a thorough overview of so-called uncertainty models that exhibit uncertainty in a more robust interpretation. We show several solution techniques for both discrete and continuous models, ranging from formal verification, over control-based abstractions, to reinforcement learning. As an integral part of this paper, we list and discuss several key challenges that arise when dealing with rich types of uncertainty in a model-based fashion. | [
25722, 13979] | Test
44,410 | 26 | Title: Friction Interventions to Curb the Spread of Misinformation on Social Media
Abstract: Social media has enabled the spread of information at unprecedented speeds and scales, and with it the proliferation of high-engagement, low-quality content. *Friction* -- behavioral design measures that make the sharing of content more cumbersome -- might be a way to raise the quality of what is spread online. Here, we study the effects of friction with and without quality-recognition learning. Experiments from an agent-based model suggest that friction alone decreases the number of posts without improving their quality. A small amount of friction combined with learning, however, increases the average quality of posts significantly. Based on this preliminary evidence, we propose a friction intervention with a learning component about the platform's community standards, to be tested via a field experiment. The proposed intervention would have minimal effects on engagement and may easily be deployed at scale. | [
39680] | Train
44,411 | 16 | Title: Automatic Signboard Recognition in Low Quality Night Images
Abstract: An essential requirement for driver assistance systems and autonomous driving technology is implementing a robust system for detecting and recognizing traffic signs. This system enables the vehicle to autonomously analyze the environment and make appropriate decisions regarding its movement, even when operating at higher frame rates. However, traffic sign images captured in inadequate lighting and adverse weather conditions are poorly visible, blurred, faded, and damaged. Consequently, the recognition of traffic signs in such circumstances becomes inherently difficult. This paper addresses the challenges of recognizing traffic signs in images affected by low light, noise, and blur. To achieve this goal, a two-step methodology has been employed. The first step involves enhancing traffic sign images by applying a modified MIRNet model and producing enhanced images. In the second step, the Yolov4 model recognizes the traffic signs in an unconstrained environment. The proposed method achieves a 5.40% improvement in mAP@0.5 for low-quality images with Yolov4. An overall mAP@0.5 of 96.75% has been achieved on the GTSRB dataset. It also attains an mAP@0.5 of 100% on the GTSDB dataset for the broad categories, comparable with the state-of-the-art work. | [] | Validation
44,412 | 24 | Title: Characteristics of networks generated by kernel growing neural gas
Abstract: This research aims to develop kernel GNG, a kernelized version of the growing neural gas (GNG) algorithm, and to investigate the features of the networks generated by the kernel GNG. The GNG is an unsupervised artificial neural network that can transform a dataset into an undirected graph, thereby extracting the features of the dataset as a graph. The GNG is widely used in vector quantization, clustering, and 3D graphics. Kernel methods are often used to map a dataset to feature space, with support vector machines being the most prominent application. This paper introduces the kernel GNG approach and explores the characteristics of the networks generated by kernel GNG. Five kernels, including Gaussian, Laplacian, Cauchy, inverse multiquadric, and log kernels, are used in this study. The results of this study show that the average degree and the average clustering coefficient decrease as the kernel parameter increases for Gaussian, Laplacian, Cauchy, and IMQ kernels. If we avoid more edges and a higher clustering coefficient (or more triangles), the kernel GNG with a larger value of the parameter will be more appropriate. | [] | Train |
44,413 | 30 | Title: Vocab-Expander: A System for Creating Domain-Specific Vocabularies Based on Word Embeddings
Abstract: In this paper, we propose Vocab-Expander at https://vocab-expander.com, an online tool that enables end-users (e.g., technology scouts) to create and expand a vocabulary of their domain of interest. It utilizes an ensemble of state-of-the-art word embedding techniques based on web text and ConceptNet, a common-sense knowledge base, to suggest related terms for already given terms. The system has an easy-to-use interface that allows users to quickly confirm or reject term suggestions. Vocab-Expander offers a variety of potential use cases, such as improving concept-based information retrieval in technology and innovation management, enhancing communication and collaboration within organizations or interdisciplinary projects, and creating vocabularies for specific courses in education. | [] | Train |
44,414 | 3 | Title: Socialbots and the Challenges of Cyberspace Awareness
Abstract: As security communities brace for the emerging social automation based threats, we examine the mechanisms of developing situation awareness in cyberspace and the governance issues that socialbots bring into this existing paradigm of cyber situation awareness. We point out that an organisation's situation awareness in cyberspace is a phenomena fundamentally distinct from the original conception of situation awareness, requiring continuous data exchange and knowledge management where the standard implementation mechanisms require significant policy attention in light of threats like malicious social automation. We conceptualise Cyberspace Awareness as a socio-technical phenomena with Syntactic, Semantic, and Operatic dimensions - each subject to a number of stressors which are exacerbated under social automation based threats. The paper contributes to the ideas of situational awareness in cyberspace, and characterises the challenges therein around tackling the increasingly social and often pervasive, automation in cyber threat environments. | [] | Test |
44,415 | 6 | Title: UX Heuristics and Checklist for Deep Learning powered Mobile Applications with Image Classification
Abstract: Advances in mobile applications providing image classification enabled by Deep Learning require innovative User Experience solutions in order to assure their adequate use by users. To aid the design process, usability heuristics are typically customized for a specific kind of application. Therefore, based on a literature review and an analysis of existing mobile applications with image classification, we propose an initial set of AIX heuristics for Deep Learning powered mobile applications with image classification, decomposed into a checklist. To facilitate the usage of the checklist, we also developed an online course presenting the concepts and heuristics, as well as a web-based tool to support an evaluation using these heuristics. The results of this research can be used to guide the design of the interfaces of such applications, as well as to support the conduct of heuristic evaluations, helping practitioners develop image classification apps that people can understand, trust, and engage with effectively. | [] | Train
44,416 | 24 | Title: Zero-Shot Robustification of Zero-Shot Models With Foundation Models
Abstract: Zero-shot inference is a powerful paradigm that enables the use of large pretrained models for downstream classification tasks without further training. However, these models are vulnerable to inherited biases that can impact their performance. The traditional solution is fine-tuning, but this undermines the key advantage of pretrained models, which is their ability to be used out-of-the-box. We propose RoboShot, a method that improves the robustness of pretrained model embeddings in a fully zero-shot fashion. First, we use zero-shot language models (LMs) to obtain useful insights from task descriptions. These insights are embedded and used to remove harmful and boost useful components in embeddings -- without any supervision. Theoretically, we provide a simple and tractable model for biases in zero-shot embeddings and give a result characterizing under what conditions our approach can boost performance. Empirically, we evaluate RoboShot on nine image and NLP classification tasks and show an average improvement of 15.98% over several zero-shot baselines. Additionally, we demonstrate that RoboShot is compatible with a variety of pretrained and language models. | [
3777, 20562, 13700, 5852] | Train
44,417 | 39 | Title: Adaptive large neighborhood search for a personnel task scheduling problem with task selection and parallel task assignments
Abstract: Motivated by a real-world application, we model and solve a complex staff scheduling problem. Tasks are to be assigned to workers for supervision. Multiple tasks can be covered in parallel by a single worker, with worker shifts being flexible within availabilities. Each worker has a different skill set, enabling them to cover different tasks. Tasks require assignment according to priority and skill requirements. The objective is to maximize the number of assigned tasks weighted by their priorities, while minimizing assignment penalties. We develop an adaptive large neighborhood search (ALNS) algorithm, relying on tailored destroy and repair operators. It is tested on benchmark instances derived from real-world data and compared to optimal results obtained by means of a commercial MIP-solver. Furthermore, we analyze the impact of considering three additional alternative objective functions. When applied to large-scale company data, the developed ALNS outperforms the previously applied solution approach. | [] | Validation |
44,418 | 24 | Title: Bandit Multi-linear DR-Submodular Maximization and Its Applications on Adversarial Submodular Bandits
Abstract: We investigate the online bandit learning of the monotone multi-linear DR-submodular functions, designing the algorithm $\mathtt{BanditMLSM}$ that attains $O(T^{2/3}\log T)$ of $(1-1/e)$-regret. Then we reduce submodular bandit with partition matroid constraint and bandit sequential monotone maximization to the online bandit learning of the monotone multi-linear DR-submodular functions, attaining $O(T^{2/3}\log T)$ of $(1-1/e)$-regret in both problems, which improve the existing results. To the best of our knowledge, we are the first to give a sublinear regret algorithm for the submodular bandit with partition matroid constraint. A special case of this problem is studied by Streeter et al.(2009). They prove a $O(T^{4/5})$ $(1-1/e)$-regret upper bound. For the bandit sequential submodular maximization, the existing work proves an $O(T^{2/3})$ regret with a suboptimal $1/2$ approximation ratio (Niazadeh et al. 2021). | [
29958,
7327
] | Test |
44,419 | 30 | Title: Consistency Analysis of ChatGPT
Abstract: ChatGPT, a question-and-answer dialogue system based on a large language model, has gained huge popularity since its introduction. Its positive aspects have been reported through many media platforms, and some analyses even showed that ChatGPT achieved a decent grade in professional exams, including the law, medical, and finance domains, adding extra support to the claim that AI now can assist and, even, replace humans in industrial fields. Others, however, doubt its reliability and trustworthiness. In this paper, we investigate ChatGPT's trustworthiness regarding logically consistent behaviours. Our findings suggest that, although ChatGPT seems to achieve an improved language understanding ability, it still fails to generate logically correct predictions frequently. Hence, while it is true that ChatGPT is an impressive and promising new technique, we conclude that its usage in real-world applications without thorough human inspection requires further consideration, especially for risk-sensitive areas. | [
40192,
9569,
38849,
387,
28580,
33699,
20881,
25973,
26357,
30775,
44246,
13278
] | Validation |
44,420 | 37 | Title: The Fast and the Private: Task-based Dataset Search
Abstract: Modern dataset search platforms employ ML task-based utility metrics instead of relying on metadata-based keywords to comb through extensive dataset repositories. In this setup, requesters provide an initial dataset, and the platform identifies complementary datasets to augment (join or union) the requester's dataset such that the ML model (e.g., linear regression) performance is improved most. Although effective, current task-based data searches are stymied by (1) high latency which deters users, (2) privacy concerns for regulatory standards, and (3) low data quality which provides low utility. We introduce Mileena, a fast, private, and high-quality task-based dataset search platform. At its heart, Mileena is built on pre-computed semi-ring sketches for efficient ML training and evaluation. Based on semi-ring, we develop a novel Factorized Privacy Mechanism that makes the search differentially private and scales to arbitrary corpus sizes and numbers of requests without major quality degradation. We also demonstrate the early promise in using LLM-based agents for automatic data transformation and applying semi-rings to support causal discovery and treatment effect estimation. | [
10368,
39873,
3757,
39282,
33942,
31518
] | Train |
44,421 | 24 | Title: [Re] Double Sampling Randomized Smoothing
Abstract: This paper is a contribution to the reproducibility challenge in the field of machine learning, specifically addressing the issue of certifying the robustness of neural networks (NNs) against adversarial perturbations. The proposed Double Sampling Randomized Smoothing (DSRS) framework overcomes the limitations of existing methods by using an additional smoothing distribution to improve the robustness certification. The paper provides a clear manifestation of DSRS for a generalized family of Gaussian smoothing and a computationally efficient method for implementation. The experiments on MNIST and CIFAR-10 demonstrate the effectiveness of DSRS, consistently certifying larger robust radii compared to other methods. Various ablation studies are also conducted to further analyze the hyperparameters and the effect of adversarial training methods on the radius certified by the proposed framework. | [] | Train |
44,422 | 10 | Title: Variations on the Reinforcement Learning performance of Blackjack
Abstract: Blackjack or "21" is a popular card-based game of chance and skill. The objective of the game is to win by obtaining a hand total higher than the dealer's without exceeding 21. The ideal blackjack strategy will maximize financial return in the long run while avoiding gambler's ruin. The stochastic environment and inherent reward structure of blackjack present an appealing problem to better understand reinforcement learning agents in the presence of environment variations. Here we consider a q-learning solution for optimal play and investigate the rate of learning convergence of the algorithm as a function of deck size. A blackjack simulator allowing for universal blackjack rules is also implemented to demonstrate the extent to which a card counter perfectly using the basic strategy and hi-lo system can bring the house to bankruptcy and how environment variations impact this outcome. The novelty of our work is to place this conceptual understanding of the impact of deck size in the context of learning agent convergence. | [] | Test |
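The blackjack study above relies on a standard tabular Q-learning agent. A minimal sketch of the epsilon-greedy action choice and Q-update such an agent would use (the toy state encoding and hyperparameters are illustrative, not values from the paper):

```python
import random
from collections import defaultdict

Q = defaultdict(float)              # Q[(state, action)] -> value estimate
alpha, gamma, eps = 0.1, 1.0, 0.1   # learning rate, discount, exploration rate
ACTIONS = (0, 1)                    # 0 = stick, 1 = hit

def choose_action(state):
    """Epsilon-greedy policy over the current Q estimates."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, done):
    """One temporal-difference backup after observing a transition."""
    target = reward if done else reward + gamma * max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (target - Q[(state, action)])
```

A state could be encoded, for instance, as (player_total, dealer_upcard, usable_ace); how quickly such updates converge as the deck size varies is what the study measures.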
44,423 | 27 | Title: Euclidean and non-Euclidean Trajectory Optimization Approaches for Quadrotor Racing
Abstract: We present two approaches to compute raceline trajectories for quadrotors by solving an optimal control problem. The approaches involve expressing quadrotor pose in either a Euclidean or non-Euclidean frame of reference and are both based on collocation. The compute times of both approaches are over 100x faster than published methods. Additionally, both approaches compute trajectories with faster lap time and show improved numerical convergence. In the last part of the paper we devise a novel method to compute racelines in dense obstacle fields using the non-Euclidean approach. | [
6663
] | Train |
44,424 | 4 | Title: FedZKP: Federated Model Ownership Verification with Zero-knowledge Proof
Abstract: Federated learning (FL) allows multiple parties to cooperatively learn a federated model without sharing private data with each other. The need of protecting such federated models from being plagiarized or misused, therefore, motivates us to propose a provable secure model ownership verification scheme using zero-knowledge proof, named FedZKP. It is shown that the FedZKP scheme without disclosing credentials is guaranteed to defeat a variety of existing and potential attacks. Both theoretical analysis and empirical studies demonstrate the security of FedZKP in the sense that the probability for attackers to breach the proposed FedZKP is negligible. Moreover, extensive experimental results confirm the fidelity and robustness of our scheme. | [
12468
] | Train |
44,425 | 31 | Title: E Pluribus Unum: Guidelines on Multi-Objective Evaluation of Recommender Systems
Abstract: Recommender Systems today are still mostly evaluated in terms of accuracy, with other aspects beyond the immediate relevance of recommendations, such as diversity, long-term user retention and fairness, often taking a back seat. Moreover, reconciling multiple performance perspectives is by definition indeterminate, presenting a stumbling block to those in the pursuit of rounded evaluation of Recommender Systems. EvalRS 2022 -- a data challenge designed around Multi-Objective Evaluation -- was a first practical endeavour, providing many insights into the requirements and challenges of balancing multiple objectives in evaluation. In this work, we reflect on EvalRS 2022 and expound upon crucial learnings to formulate a first-principles approach toward Multi-Objective model selection, and outline a set of guidelines for carrying out a Multi-Objective Evaluation challenge, with potential applicability to the problem of rounded evaluation of competing models in real-world deployments. | [] | Train |
44,426 | 24 | Title: Composing Task Knowledge with Modular Successor Feature Approximators
Abstract: Recently, the Successor Features and Generalized Policy Improvement (SF&GPI) framework has been proposed as a method for learning, composing, and transferring predictive knowledge and behavior. SF&GPI works by having an agent learn predictive representations (SFs) that can be combined for transfer to new tasks with GPI. However, to be effective this approach requires state features that are useful to predict, and these state features are typically hand-designed. In this work, we present a novel neural network architecture, "Modular Successor Feature Approximators" (MSFA), where modules both discover what is useful to predict, and learn their own predictive representations. We show that MSFA is able to better generalize compared to baseline architectures for learning SFs and modular architectures. | [
36343
] | Train |
44,427 | 16 | Title: Unleash the Potential of 3D Point Cloud Modeling with A Calibrated Local Geometry-driven Distance Metric
Abstract: Quantifying the dissimilarity between two unstructured 3D point clouds is a challenging task, with existing metrics often relying on measuring the distance between corresponding points that can be either inefficient or ineffective. In this paper, we propose a novel distance metric called Calibrated Local Geometry Distance (CLGD), which computes the difference between the underlying 3D surfaces calibrated and induced by a set of reference points. By associating each reference point with two given point clouds through computing its directional distances to them, the difference in directional distances of an identical reference point characterizes the geometric difference between a typical local region of the two point clouds. Finally, CLGD is obtained by averaging the directional distance differences of all reference points. We evaluate CLGD on various optimization and unsupervised learning-based tasks, including shape reconstruction, rigid registration, scene flow estimation, and feature representation. Extensive experiments show that CLGD achieves significantly higher accuracy under all tasks in a memory and computationally efficient manner, compared with existing metrics. As a generic metric, CLGD has the potential to advance 3D point cloud modeling. The source code is publicly available at https://github.com/rsy6318/CLGD. | [] | Train |
44,428 | 16 | Title: Multi-Sample Consensus Driven Unsupervised Normal Estimation for 3D Point Clouds
Abstract: Deep normal estimators have made great strides on synthetic benchmarks. Unfortunately, their performance dramatically drops on the real scan data since they are supervised only on synthetic datasets. The point-wise annotation of ground truth normals is vulnerable to inefficiency and inaccuracies, which totally makes it impossible to build perfect real datasets for supervised deep learning. To overcome the challenge, we propose a multi-sample consensus paradigm for unsupervised normal estimation. The paradigm consists of multi-candidate sampling, candidate rejection, and mode determination. The latter two are driven by neighbor point consensus and candidate consensus respectively. Two primary implementations of the paradigm, MSUNE and MSUNE-Net, are proposed. MSUNE minimizes a candidate consensus loss in mode determination. As a robust optimization method, it outperforms the cutting-edge supervised deep learning methods on real data at the cost of longer runtime for sampling enough candidate normals for each query point. MSUNE-Net, the first unsupervised deep normal estimator as far as we know, significantly promotes the multi-sample consensus further. It transfers the three online stages of MSUNE to offline training. Thereby its inference time is 100 times faster. Besides that, more accurate inference is achieved, since the candidates of query points from similar patches can form a sufficiently large candidate set implicitly in MSUNE-Net. Comprehensive experiments demonstrate that the two proposed unsupervised methods are noticeably superior to some supervised deep normal estimators on the most common synthetic dataset. More importantly, they show better generalization ability and outperform all the SOTA conventional and deep methods on three real datasets: NYUV2, KITTI, and a dataset from PCV [1]. | [] | Test |
44,429 | 15 | Title: Study on the Data Storage Technology of Mini-Airborne Radar Based on Machine Learning
Abstract: The data rate of airborne radar is much higher than the wireless data transfer rate in many detection applications, so the onboard data storage systems are usually used to store the radar data. Data storage systems with good seismic performance usually use NAND Flash as storage medium, and there is a widespread problem of long file management time, which seriously affects the data storage speed, especially under the limitation of platform miniaturization. To solve this problem, a data storage method based on machine learning is proposed for mini-airborne radar. The storage training model is established based on machine learning, and could process various kinds of radar data. The file management methods are classified and determined using the model, and then are applied to the storage of radar data. To verify the performance of the proposed method, a test was carried out on the data storage system of a mini-airborne radar. The experimental results show that the method based on machine learning can form various data storage methods adapted to different data rates and application scenarios. The ratio of the file management time to the actual data writing time is extremely low. | [] | Train |
44,430 | 30 | Title: On the Calibration and Uncertainty with Pólya-Gamma Augmentation for Dialog Retrieval Models
Abstract: Deep neural retrieval models have amply demonstrated their power, but estimating the reliability of their predictions remains challenging. Most dialog response retrieval models output a single score for a response on how relevant it is to a given question. However, the poor calibration of deep neural networks results in varying uncertainty in this single score, such that unreliable predictions misinform user decisions. To investigate these issues, we present an efficient calibration and uncertainty estimation framework PG-DRR for dialog response retrieval models, which adds a Gaussian Process layer to a deterministic deep neural network and recovers conjugacy for tractable posterior inference by Pólya-Gamma augmentation. Finally, PG-DRR achieves the lowest empirical calibration error (ECE) on the in-domain datasets and the distributional shift task while keeping R10@1 and MAP performance. | [] | Test |
44,431 | 10 | Title: Uncertainty Calibration for Counterfactual Propensity Estimation in Recommendation
Abstract: In recommendation systems, a large portion of the ratings are missing due to selection bias, which is known as Missing Not At Random. Counterfactual inverse propensity scoring (IPS) has been used to weight the imputation error of every observed rating. Although it is effective in multiple scenarios, we argue that the performance of IPS estimation is limited due to the uncertainty miscalibration of propensity estimation. In this paper, we propose uncertainty calibration for propensity estimation in recommendation systems using multiple representative uncertainty calibration techniques. Theoretical analysis of the bias and generalization bound shows the superiority of the calibrated IPS estimator over the uncalibrated one. Experimental results on the Coat and Yahoo datasets show that uncertainty calibration is improved and hence yields better recommendation results. | [] | Test |
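For background, the IPS estimator referred to above reweights each observed error by the inverse of its (ideally well-calibrated) propensity. A minimal sketch of the standard IPS and self-normalized (SNIPS) estimators follows; this is generic background, not the paper's calibration procedure:

```python
import numpy as np

def ips_error(errors, propensities, clip=0.05):
    """Inverse-propensity-scored estimate of the average error over all ratings.

    errors       : (n,) error of each observed rating
    propensities : (n,) estimated probabilities that each rating was observed
    clip         : lower bound that keeps the weights from exploding
    """
    p = np.clip(propensities, clip, 1.0)
    return float(np.mean(errors / p))

def snips_error(errors, propensities, clip=0.05):
    """Self-normalized variant, usually less sensitive to miscalibrated propensities."""
    p = np.clip(propensities, clip, 1.0)
    w = 1.0 / p
    return float(np.sum(w * errors) / np.sum(w))
```

Miscalibrated propensities distort these weights, which is the failure mode the calibrated estimator is meant to address.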
44,432 | 24 | Title: Iterative Partial Fulfillment of Counterfactual Explanations: Benefits and Risks
Abstract: Counterfactual (CF) explanations, also known as contrastive explanations and algorithmic recourses, are popular for explaining machine learning models in high-stakes domains. For a subject that receives a negative model prediction (e.g., mortgage application denial), the CF explanations are similar instances but with positive predictions, which informs the subject of ways to improve. While their various properties have been studied, such as validity and stability, we contribute a novel one: their behaviors under iterative partial fulfillment (IPF). Specifically, upon receiving a CF explanation, the subject may only partially fulfill it before requesting a new prediction with a new explanation, and repeat until the prediction is positive. Such partial fulfillment could be due to the subject’s limited capability (e.g., can only pay down two out of four credit card accounts at this moment) or an attempt to take the chance (e.g., betting that a monthly salary increase of $800 is enough even though $1,000 is recommended). Does such iterative partial fulfillment increase or decrease the total cost of improvement incurred by the subject? We mathematically formalize IPF and demonstrate, both theoretically and empirically, that different CF algorithms exhibit vastly different behaviors under IPF. We discuss implications of our observations, advocate for this factor to be carefully considered in the development and study of CF algorithms, and give several directions for future work. | [
33345
] | Train |
44,433 | 33 | Title: Complexity of Membership and Non-Emptiness Problems in Unbounded Memory Automata
Abstract: We study the complexity relationship between three models of unbounded memory automata: nu-automata ($\nu$-A), Layered Memory Automata (LaMA) and History-Register Automata (HRA). These are all extensions of finite state automata with unbounded memory over infinite alphabets. We prove that the membership problem is NP-complete for all of them, while they fall into different classes as far as non-emptiness is concerned. The non-emptiness problem is known to be Ackermann-complete for HRA; we prove that it is PSPACE-complete for $\nu$-A. | [] | Train |
44,434 | 30 | Title: Structured Voronoi Sampling
Abstract: Recently, there has been a growing interest in the development of gradient-based sampling algorithms for text generation, especially in the context of controlled generation. However, there exists a lack of theoretically grounded and principled approaches for this task. In this paper, we take an important step toward building a principled approach for sampling from language models with gradient-based methods. We use discrete distributions given by language models to define densities and develop an algorithm based on Hamiltonian Monte Carlo to sample from them. We name our gradient-based technique Structured Voronoi Sampling (SVS). In an experimental setup where the reference distribution is known, we show that the empirical distribution of SVS samples is closer to the reference distribution compared to alternative sampling schemes. Furthermore, in a controlled generation task, SVS is able to generate fluent and diverse samples while following the control targets significantly better than other methods. | [
32252
] | Train |
44,435 | 6 | Title: M2LADS: A System for Generating MultiModal Learning Analytics Dashboards
Abstract: In this article, we present a Web-based System called M2LADS, which supports the integration and visualization of multimodal data recorded in learning sessions in a MOOC in the form of Web-based Dashboards. Based on the edBB platform, the multimodal data gathered contains biometric and behavioral signals including electroencephalogram data to measure learners’ cognitive attention, heart rate for affective measures, visual attention from the video recordings. Additionally, learners’ static background data and their learning performance measures are tracked using LOGCE and MOOC tracking logs respectively, and both are included in the Web-based System. M2LADS provides opportunities to capture learners’ holistic experience during their interactions with the MOOC, which can in turn be used to improve their learning outcomes through feedback visualizations and interventions, as well as to enhance learning analytics models and improve the open content of the MOOC. | [
40658,
30844
] | Test |
44,436 | 4 | Title: The Hitchhiker's Guide to Malicious Third-Party Dependencies
Abstract: The increasing popularity of certain programming languages has spurred the creation of ecosystem-specific package repositories and package managers. Such repositories (e.g., NPM, PyPI) serve as public databases that users can query to retrieve packages for various functionalities, whereas package managers automatically handle dependency resolution and package installation on the client side. These mechanisms enhance software modularization and accelerate implementation. However, they have become a target for malicious actors seeking to propagate malware on a large scale. In this work, we show how attackers can leverage capabilities of popular package managers and languages to achieve arbitrary code execution on victim machines, thereby realizing open-source software supply chain attacks. Based on the analysis of 7 ecosystems, we identify 3 install-time and 5 runtime techniques, and we provide recommendations describing how to reduce the risk when consuming third-party dependencies. We will provide proof-of-concepts that demonstrate the identified techniques. Furthermore, we describe evasion strategies employed by attackers to circumvent detection mechanisms. | [] | Validation |
44,437 | 16 | Title: HMDO: Markerless Multi-view Hand Manipulation Capture with Deformable Objects
Abstract: We construct the first markerless deformable interaction dataset recording interactive motions of the hands and deformable objects, called HMDO (Hand Manipulation with Deformable Objects). With our built multi-view capture system, it captures the deformable interactions with multiple perspectives, various object shapes, and diverse interactive forms. Our motivation is the current lack of hand and deformable object interaction datasets, as 3D hand and deformable object reconstruction is challenging. Mainly due to mutual occlusion, the interaction area is difficult to observe, the visual features between the hand and the object are entangled, and the reconstruction of the interaction area deformation is difficult. To tackle this challenge, we propose a method to annotate our captured data. Our key idea is to collaborate with estimated hand features to guide the object global pose estimation, and then optimize the deformation process of the object by analyzing the relationship between the hand and the object. Through comprehensive evaluation, the proposed method can reconstruct interactive motions of hands and deformable objects with high quality. HMDO currently consists of 21600 frames over 12 sequences. In the future, this dataset could boost the research of learning-based reconstruction of deformable interaction scenes. | [
24705
] | Test |
44,438 | 24 | Title: Weight Freezing: A Regularization Approach for Fully Connected Layers with an Application in EEG Classification
Abstract: In the realm of EEG decoding, enhancing the performance of artificial neural networks (ANNs) carries significant potential. This study introduces a novel approach, termed"weight freezing", that is anchored on the principles of ANN regularization and neuroscience prior knowledge. The concept of weight freezing revolves around the idea of reducing certain neurons' influence on the decision-making process for a specific EEG task by freezing specific weights in the fully connected layer during the backpropagation process. This is actualized through the use of a mask matrix and a threshold to determine the proportion of weights to be frozen during backpropagation. Moreover, by setting the masked weights to zero, weight freezing can not only realize sparse connections in networks with a fully connected layer as the classifier but also function as an efficacious regularization method for fully connected layers. Through experiments involving three distinct ANN architectures and three widely recognized EEG datasets, we validate the potency of weight freezing. Our method significantly surpasses previous peak performances in classification accuracy across all examined datasets. Supplementary control experiments offer insights into performance differences pre and post weight freezing implementation and scrutinize the influence of the threshold in the weight freezing process. Our study underscores the superior efficacy of weight freezing compared to traditional fully connected networks for EEG feature classification tasks. With its proven effectiveness, this innovative approach holds substantial promise for contributing to future strides in EEG decoding research. | [] | Test |
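The weight-freezing idea above amounts to masking part of a fully connected layer's weights and blocking their gradients during backpropagation. A small PyTorch sketch under assumed choices (magnitude-based thresholding and a 50% freeze ratio; the paper's actual selection rule may differ):

```python
import torch
import torch.nn as nn

fc = nn.Linear(64, 4)        # stand-in fully connected classifier layer
freeze_fraction = 0.5

with torch.no_grad():
    flat = fc.weight.abs().flatten()
    k = int(freeze_fraction * flat.numel())
    thresh = flat.kthvalue(k).values            # k-th smallest weight magnitude
    mask = (fc.weight.abs() > thresh).float()   # 1 = trainable, 0 = frozen
    fc.weight.mul_(mask)                        # zero the frozen weights -> sparse connections

# Block gradient flow to the frozen weights during backpropagation.
fc.weight.register_hook(lambda grad: grad * mask)
```

With plain SGD and no weight decay, optimizer steps then leave the masked entries at zero, giving the sparse fully connected classifier that the abstract describes.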
44,439 | 16 | Title: OT-Net: A Reusable Neural Optimal Transport Solver
Abstract: With the widespread application of optimal transport (OT), its calculation becomes essential, and various algorithms have emerged. However, the existing methods either have low efficiency or cannot represent discontinuous maps. A novel reusable neural OT solver OT-Net is thus presented, which first learns Brenier's height representation via the neural network to obtain its potential, and then gained the OT map by computing the gradient of the potential. The algorithm has two merits, 1) it can easily represent discontinuous maps, which allows it to match any target distribution with discontinuous supports and achieve sharp boundaries. This can well eliminate mode collapse in the generated models. 2) The OT map can be calculated straightly by the proposed algorithm when new target samples are added, which greatly improves the efficiency and reusability of the map. Moreover, the theoretical error bound of the algorithm is analyzed, and we have demonstrated the empirical success of our approach in image generation, color transfer, and domain adaptation. | [] | Train |
44,440 | 36 | Title: Persuading a Behavioral Agent: Approximately Best Responding and Learning
Abstract: The classic Bayesian persuasion model assumes a Bayesian and best-responding receiver. We study a relaxation of the Bayesian persuasion model where the receiver can approximately best respond to the sender's signaling scheme. We show that, under natural assumptions, (1) the sender can find a signaling scheme that guarantees itself an expected utility almost as good as its optimal utility in the classic model, no matter what approximately best-responding strategy the receiver uses; (2) on the other hand, there is no signaling scheme that gives the sender much more utility than its optimal utility in the classic model, even if the receiver uses the approximately best-responding strategy that is best for the sender. Together, (1) and (2) imply that the approximately best-responding behavior of the receiver does not affect the sender's maximal achievable utility a lot in the Bayesian persuasion problem. The proofs of both results rely on the idea of robustification of a Bayesian persuasion scheme: given a pair of the sender's signaling scheme and the receiver's strategy, we can construct another signaling scheme such that the receiver prefers to use that strategy in the new scheme more than in the original scheme, and the two schemes give the sender similar utilities. As an application of our main result (1), we show that, in a repeated Bayesian persuasion model where the receiver learns to respond to the sender by some algorithms, the sender can do almost as well as in the classic model. Interestingly, unlike (2), with a learning receiver the sender can sometimes do much better than in the classic model. | [] | Train |
44,441 | 30 | Title: Do All Languages Cost the Same? Tokenization in the Era of Commercial Language Models
Abstract: Language models have graduated from being research prototypes to commercialized products offered as web APIs, and recent works have highlighted the multilingual capabilities of these products. The API vendors charge their users based on usage, more specifically on the number of ``tokens'' processed or generated by the underlying language models. What constitutes a token, however, is training data and model dependent with a large variance in the number of tokens required to convey the same information in different languages. In this work, we analyze the effect of this non-uniformity on the fairness of an API's pricing policy across languages. We conduct a systematic analysis of the cost and utility of OpenAI's language model API on multilingual benchmarks in 22 typologically diverse languages. We show evidence that speakers of a large number of the supported languages are overcharged while obtaining poorer results. These speakers tend to also come from regions where the APIs are less affordable to begin with. Through these analyses, we aim to increase transparency around language model APIs' pricing policies and encourage the vendors to make them more equitable. | [
12128,
38849,
2347,
2764,
19083,
23598,
39156,
5077,
23898,
35580,
15358
] | Train |
44,442 | 16 | Title: Fair GANs through model rebalancing with synthetic data
Abstract: Deep generative models require large amounts of training data. This often poses a problem as the collection of datasets can be expensive and difficult, in particular datasets that are representative of the appropriate underlying distribution (e.g. demographic). This introduces biases in datasets which are further propagated in the models. We present an approach to mitigate biases in an existing generative adversarial network by rebalancing the model distribution. We do so by generating balanced data from an existing unbalanced deep generative model using latent space exploration and using this data to train a balanced generative model. Further, we propose a bias mitigation loss function that shows improvements in the fairness metric even when trained with unbalanced datasets. We show results for the Stylegan2 models while training on the FFHQ dataset for racial fairness and see that the proposed approach improves on the fairness metric by almost 5 times, whilst maintaining image quality. We further validate our approach by applying it to an imbalanced Cifar-10 dataset. Lastly, we argue that the traditionally used image quality metrics such as Frechet inception distance (FID) are unsuitable for bias mitigation problems. | [
34068,
14133
] | Validation |
44,443 | 27 | Title: Assessment of Vehicular Vision Obstruction Due to Driver-Side B-Pillar and Remediation with Blind Spot Eliminator
Abstract: Blind spots created by the driver-side B-pillar impair the ability of the driver to assess their surroundings accurately, significantly contributing to the frequency and severity of vehicular accidents. Vehicle manufacturers cannot readily eliminate the B-pillar due to regulatory guidelines intended to protect vehicular occupants in the event of side collisions and rollover incidents. Furthermore, assistance implements utilized to counteract the adverse effects of blind spots remain ineffective due to technological limitations and optical impediments. This paper introduces mechanisms to quantify the obstruction caused by the B-pillar when the head of the driver is facing forward and turning 90°, typical of an over-the-shoulder blind spot check. It uses the metrics developed to demonstrate the relationship between B-pillar width and the obstruction angle. The paper then creates a methodology to determine the movement required of the driver to eliminate blind spots. Ultimately, this paper proposes a solution, the Blind Spot Eliminator, and demonstrates that it successfully decreases both the obstruction angle and, consequently, the required driver movement. The Blind Spot Eliminator is a lens on the rear-most section of the left driver’s side window that utilizes refraction to display objects in the surrounding areas. A prototype of the Blind Spot Eliminator was constructed and experimented with using a mannequin to model human vision in a typical passenger vehicle. The results of this experiment illustrated a substantial improvement in viewing ability, as predicted by earlier calculations. This paper concludes that the proposed Blind Spot Eliminator has excellent potential to improve driver safety and reduce vehicular accidents. | [] | Test |
44,444 | 30 | Title: Streamlining Social Media Information Retrieval for Public Health Research with Deep Learning
Abstract: The utilization of social media in epidemic surveillance has been well established. Nonetheless, bias is often introduced when pre-defined lexicons are used to retrieve relevant corpus. This study introduces a framework aimed at curating extensive dictionaries of medical colloquialisms and Unified Medical Language System (UMLS) concepts. The framework comprises three modules: a BERT-based Named Entity Recognition (NER) model that identifies medical entities from social media content, a deep-learning powered normalization module that standardizes the extracted entities, and a semi-supervised clustering module that assigns the most probable UMLS concept to each standardized entity. We applied this framework to COVID-19-related tweets from February 1, 2020, to April 30, 2022, generating a symptom dictionary (available at https://github.com/ningkko/UMLS_colloquialism/) composed of 9,249 standardized entities mapped to 876 UMLS concepts and 38,175 colloquial expressions. This framework demonstrates encouraging potential in addressing the constraints of keyword matching information retrieval in social media-based public health research. | [
42288
] | Train |
44,445 | 30 | Title: ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases
Abstract: Large Language Models (LLMs) have shown the potential to revolutionize natural language processing tasks in various domains, sparking great interest in vertical-specific large models. However, unlike proprietary models such as BloombergGPT and FinGPT, which have leveraged their unique data accumulations to make strides in the finance domain, there have not been many similar large language models in the Chinese legal domain to facilitate its digital transformation. In this paper, we propose an open-source legal large language model named ChatLaw. Due to the importance of data quality, we carefully designed a legal domain fine-tuning dataset. Additionally, to overcome the problem of model hallucinations in legal data screening during reference data retrieval, we introduce a method that combines vector database retrieval with keyword retrieval to effectively reduce the inaccuracy of relying solely on vector database retrieval. Furthermore, we propose a self-attention method to enhance the ability of large models to overcome errors present in reference data, further mitigating the issue of model hallucinations at the model level and improving the problem-solving capabilities of large models. We also open-sourced our model and part of the data at https://github.com/PKU-YuanGroup/ChatLaw. | [
12192,
24706,
44162,
13700,
27140,
29894,
33477,
31218,
18227,
24308,
1270,
984,
18777,
9403,
12413,
42747
] | Train |
44,446 | 24 | Title: Towards Last-layer Retraining for Group Robustness with Fewer Annotations
Abstract: Empirical risk minimization (ERM) of neural networks is prone to over-reliance on spurious correlations and poor generalization on minority groups. The recent deep feature reweighting (DFR) technique achieves state-of-the-art group robustness via simple last-layer retraining, but it requires held-out group and class annotations to construct a group-balanced reweighting dataset. In this work, we examine this impractical requirement and find that last-layer retraining can be surprisingly effective with no group annotations (other than for model selection) and only a handful of class annotations. We first show that last-layer retraining can greatly improve worst-group accuracy even when the reweighting dataset has only a small proportion of worst-group data. This implies a"free lunch"where holding out a subset of training data to retrain the last layer can substantially outperform ERM on the entire dataset with no additional data or annotations. To further improve group robustness, we introduce a lightweight method called selective last-layer finetuning (SELF), which constructs the reweighting dataset using misclassifications or disagreements. Our empirical and theoretical results present the first evidence that model disagreement upsamples worst-group data, enabling SELF to nearly match DFR on four well-established benchmarks across vision and language tasks with no group annotations and less than 3% of the held-out class annotations. Our code is available at https://github.com/tmlabonte/last-layer-retraining. | [
4104
] | Train |
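Last-layer retraining, as discussed above, freezes a pretrained backbone and refits only the classifier head on a held-out (possibly reweighted) split. A minimal, self-contained PyTorch sketch; the backbone here is a stand-in MLP and the data loader is a placeholder:

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained feature extractor; in practice this would be a trained network.
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
head = nn.Linear(32, 2)                       # the only part that gets retrained

for p in backbone.parameters():
    p.requires_grad = False                   # freeze the pretrained features

opt = torch.optim.SGD(head.parameters(), lr=1e-2, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

def retrain_last_layer(loader, epochs=10):
    """loader yields (inputs, labels) from the held-out / reweighting split."""
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            with torch.no_grad():
                feats = backbone(x)           # frozen embedding
            loss = loss_fn(head(feats), y)
            loss.backward()
            opt.step()
```

The group-robustness gains described in the abstract come from how that retraining split is chosen (misclassifications or disagreements), not from the mechanics shown here.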
44,447 | 16 | Title: Single Motion Diffusion
Abstract: Synthesizing realistic animations of humans, animals, and even imaginary creatures, has long been a goal for artists and computer graphics professionals. Compared to the imaging domain, which is rich with large available datasets, the number of data instances for the motion domain is limited, particularly for the animation of animals and exotic creatures (e.g., dragons), which have unique skeletons and motion patterns. In this work, we present a Single Motion Diffusion Model, dubbed SinMDM, a model designed to learn the internal motifs of a single motion sequence with arbitrary topology and synthesize motions of arbitrary length that are faithful to them. We harness the power of diffusion models and present a denoising network explicitly designed for the task of learning from a single input motion. SinMDM is designed to be a lightweight architecture, which avoids overfitting by using a shallow network with local attention layers that narrow the receptive field and encourage motion diversity. SinMDM can be applied in various contexts, including spatial and temporal in-betweening, motion expansion, style transfer, and crowd animation. Our results show that SinMDM outperforms existing methods both in quality and time-space efficiency. Moreover, while current approaches require additional training for different applications, our work facilitates these applications at inference time. Our code and trained models are available at https://sinmdm.github.io/SinMDM-page. | [
10529,
38181,
34000,
26097,
38993,
37779,
15030,
36286
] | Train |
44,448 | 4 | Title: Needle in the Haystack: Analyzing the Right of Access According to GDPR Article 15 Five Years after the Implementation
Abstract: The General Data Protection Regulation (GDPR) was implemented in 2018 to strengthen and harmonize the data protection of individuals within the European Union. One key aspect is Article 15, which gives individuals the right to access their personal data in an understandable format. Organizations offering services to Europeans had five years’ time to optimize their processes and functions to comply with Article 15. This study aims to explore the process of submitting and receiving the responses of organizations to GDPR Article 15 requests. A quantitative analysis obtains data from various websites to understand the level of conformity, the data received, and the challenges faced by individuals who request their data. The study differentiates organizations operating worldwide and in Germany, browser website- and app-based usage, and different types of websites. Thereby, we conclude that some websites still compile the data manually, resulting in longer waiting times. A few exceptions did not respond with any data or deliver machine-readable data (GDRP Article 20). The findings of the study additionally reveal ten patterns individuals face when requesting and accessing their data. | [] | Test |
44,449 | 10 | Title: OntoMathPRO 2.0 Ontology: Updates of the Formal Model
Abstract: nan | [] | Train |
44,450 | 3 | Title: Mental Health Coping Stories on Social Media: A Causal-Inference Study of Papageno Effect
Abstract: The Papageno effect concerns how media can play a positive role in preventing and mitigating suicidal ideation and behaviors. With the increasing ubiquity and widespread use of social media, individuals often express and share lived experiences and struggles with mental health. However, there is a gap in our understanding about the existence and effectiveness of the Papageno effect in social media, which we study in this paper. In particular, we adopt a causal-inference framework to examine the impact of exposure to mental health coping stories on individuals on Twitter. We obtain a Twitter dataset with ∼ 2M posts by ∼ 10K individuals. We consider engaging with coping stories as the Treatment intervention, and adopt a stratified propensity score approach to find matched cohorts of Treatment and Control individuals. We measure the psychosocial shifts in affective, behavioral, and cognitive outcomes in longitudinal Twitter data before and after engaging with the coping stories. Our findings reveal that, engaging with coping stories leads to decreased stress and depression, and improved expressive writing, diversity, and interactivity. Our work discusses the practical and platform design implications in supporting mental wellbeing. | [] | Train |
44,451 | 25 | Title: Speech Intelligibility Assessment of Dysarthric Speech by using Goodness of Pronunciation with Uncertainty Quantification
Abstract: This paper proposes an improved Goodness of Pronunciation (GoP) that utilizes Uncertainty Quantification (UQ) for automatic speech intelligibility assessment for dysarthric speech. Current GoP methods rely heavily on neural network-driven overconfident predictions, which is unsuitable for assessing dysarthric speech due to its significant acoustic differences from healthy speech. To alleviate the problem, UQ techniques were used on GoP by 1) normalizing the phoneme prediction (entropy, margin, maxlogit, logit-margin) and 2) modifying the scoring function (scaling, prior normalization). As a result, prior-normalized maxlogit GoP achieves the best performance, with a relative increase of 5.66%, 3.91%, and 23.65% compared to the baseline GoP for English, Korean, and Tamil, respectively. Furthermore, phoneme analysis is conducted to identify which phoneme scores significantly correlate with intelligibility scores in each language. | [] | Train |
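Goodness of Pronunciation scores are commonly computed as a margin between the canonical phoneme's score and the strongest competing score over the frames of a segment. The sketch below shows a maxlogit-style margin on toy frame-level logits; the prior normalization and the other UQ variants from the abstract are not reproduced here:

```python
import numpy as np

def gop_maxlogit(logits, canonical_idx):
    """Toy Goodness-of-Pronunciation score from frame-level phoneme logits.

    logits        : (T, P) array of acoustic-model logits for T frames over P phoneme classes
    canonical_idx : index of the phoneme that should have been produced
    Higher values mean the realized frames favor the canonical phoneme.
    """
    canonical = logits[:, canonical_idx]
    best = logits.max(axis=1)
    return float(np.mean(canonical - best))
```

Replacing softmax posteriors with such logit-based margins is one way to reduce the overconfident predictions the abstract points out.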
44,452 | 23 | Title: Can Large Language Models Find And Fix Vulnerable Software?
Abstract: In this study, we evaluated the capability of Large Language Models (LLMs), particularly OpenAI's GPT-4, in detecting software vulnerabilities, comparing their performance against traditional static code analyzers like Snyk and Fortify. Our analysis covered numerous repositories, including those from NASA and the Department of Defense. GPT-4 identified approximately four times the vulnerabilities than its counterparts. Furthermore, it provided viable fixes for each vulnerability, demonstrating a low rate of false positives. Our tests encompassed 129 code samples across eight programming languages, revealing the highest vulnerabilities in PHP and JavaScript. GPT-4's code corrections led to a 90% reduction in vulnerabilities, requiring only an 11% increase in code lines. A critical insight was LLMs' ability to self-audit, suggesting fixes for their identified vulnerabilities and underscoring their precision. Future research should explore system-level vulnerabilities and integrate multiple static code analyzers for a holistic perspective on LLMs' potential. | [
19970,
29956,
33220,
9542,
11305,
15852,
3181,
9038,
16917,
20892,
2845
] | Train |
44,453 | 33 | Title: Matching Patterns with Variables Under Simon's Congruence
Abstract: We introduce and investigate a series of matching problems for patterns with variables under Simon's congruence. Our results provide a thorough picture of these problems' computational complexity. | [
10722,
24396
] | Train |
44,454 | 28 | Title: Age of Incorrect Information in Random Access Channels without Feedback
Abstract: We focus on a system in which a set of two-state Markov sources report status update to a common receiver over a shared wireless channel. Inspired by practical IoT networks, we consider three variations of ALOHA as medium access protocol: i) a random approach in which a source transmits regardless of its status, ii) a reactive scheme in which updates are sent only when a source changes state, and iii) a hybrid solution which blends the two possibilities. We consider different criteria to capture the ability of the receiver to maintain an accurate perception of the tracked processes: average age of incorrect information (AoII), probability of missed detection (i.e., of not detecting a source transition), and average duration of intervals over which the receiver lingers with erroneous knowledge. We provide closed form analytical expressions for all the metrics, highlighting non-trivial trade-offs and providing useful protocol design hints. | [
28953,
37771,
16575
] | Train |
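The AoII metric analyzed above grows by one in every slot in which the receiver's belief differs from the true source state and resets to zero once they agree. A small Monte-Carlo sketch for a single two-state Markov source with ALOHA-like random access and no feedback (all parameters are illustrative):

```python
import random

def simulate_aoii(p_flip=0.05, p_tx=0.2, p_success=0.9, slots=100_000, seed=0):
    """Estimate the time-average Age of Incorrect Information by simulation.

    p_flip    : per-slot probability that the source changes state
    p_tx      : per-slot probability that the source transmits (random access)
    p_success : probability that a transmission is decoded at the receiver
    """
    random.seed(seed)
    state, estimate, aoii, total = 0, 0, 0, 0
    for _ in range(slots):
        if random.random() < p_flip:
            state ^= 1                        # source transition
        if random.random() < p_tx and random.random() < p_success:
            estimate = state                  # successful update refreshes the receiver
        aoii = aoii + 1 if estimate != state else 0
        total += aoii
    return total / slots

print(simulate_aoii())
```

A reactive variant would attempt transmission only in slots where the state just flipped; comparing it to this random-access baseline mirrors the protocol trade-off that the closed-form analysis characterizes exactly.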
44,455 | 8 | Title: Adaptive beamforming for optical wireless communication via fiber modal control
Abstract: High-speed optical wireless communication can address the exponential growth in data traffic. Adaptive beamforming customized for the target location is crucial, but existing solutions such as liquidcrystal spatial light modulators and microelectromechanical systems require costly micro/nano manufacturing, delicate alignment, and a high degree of mechanical stability. These challenges reflect the fragility of integrating a fiber network with micro/nano mechanical or photonic systems. Here, we realize low-cost, low-loss, and fiber-compatible beamforming and continuous beam steering through controlled bending of a multi-mode fiber that modifies its modal coupling, and use it to enable flexible optical wireless communication at 10 Gb/s. By using the fiber modal coupling as degrees of freedom rather than an impediment, this approach offers a promising solution for flexible and cost-effective optical wireless communication networks. | [] | Train |
44,456 | 24 | Title: MARLIM: Multi-Agent Reinforcement Learning for Inventory Management
Abstract: Maintaining a balance between the supply and demand of products by optimizing replenishment decisions is one of the most important challenges in the supply chain industry. This paper presents a novel reinforcement learning framework called MARLIM, to address the inventory management problem for a single-echelon multi-products supply chain with stochastic demands and lead-times. Within this context, controllers are developed through single or multiple agents in a cooperative setting. Numerical experiments on real data demonstrate the benefits of reinforcement learning methods over traditional baselines. | [] | Train |
44,457 | 5 | Title: Bike Assisted Evacuation on a Line of Robots with Communication Faults
Abstract: Two autonomous mobile robots and a non-autonomous one, also called bike, are placed at the origin of an infinite line. The autonomous robots can travel with maximum speed $1$. When a robot rides the bike its speed increases to $v>1$, however only exactly one robot at a time can ride the bike and the bike is non-autonomous in that it cannot move on its own. An Exit is placed on the line at an unknown location and at distance $d$ from the origin. The robots have limited communication behavior; one robot is a sender (denoted by S) in that it can send information wirelessly at any distance and receive messages only in F2F (Face-to-Face), while the other robot is a receiver (denoted by R) in that it can receive information wirelessly but can send information only F2F. The bike has no communication capabilities of its own. We refer to the resulting communication model of the ensemble of the two autonomous robots and the bike as S/R. Our general goal is to understand the impact of the non-autonomous robot in assisting the evacuation of the two autonomous faulty robots. Our main contribution is to provide a new evacuation algorithm that enables both robots to evacuate from the unknown Exit in the S/R model. We also analyze the resulting evacuation time as a function of the bike's speed $v$ and give upper and lower bounds on the competitive ratio of the resulting algorithm for the entire range of possible values of $v$. | [] | Train |
44,458 | 16 | Title: Robust retrieval of material chemical states in X-ray microspectroscopy
Abstract: X-ray microspectroscopic techniques are essential for studying morphological and chemical changes in materials, providing high-resolution structural and spectroscopic information. However, its practical data analysis for reliably retrieving the chemical states remains a major obstacle to accelerating the fundamental understanding of materials in many research fields. In this work, we propose a novel data formulation model for X-ray microspectroscopy and develop a dedicated unmixing framework to solve this problem, which is robust to noise and spectral variability. Moreover, this framework is not limited to the analysis of two-state material chemistry, making it an effective alternative to conventional and widely-used methods. In addition, an alternative directional multiplier method with provable convergence is applied to obtain the solution efficiently. Our framework can accurately identify and characterize chemical states in complex and heterogeneous samples, even under challenging conditions such as low signal-to-noise ratios and overlapping spectral features. Extensive experimental results on simulated and real datasets demonstrate its effectiveness and reliability. | [] | Train |
44,459 | 16 | Title: MLANet: Multi-Level Attention Network with Sub-instruction for Continuous Vision-and-Language Navigation
Abstract: Vision-and-Language Navigation (VLN) aims to develop intelligent agents to navigate in unseen environments only through language and vision supervision. In the recently proposed continuous settings (continuous VLN), the agent must act in a free 3D space and faces tougher challenges like real-time execution, complex instruction understanding, and long action sequence prediction. For a better performance in continuous VLN, we design a multi-level instruction understanding procedure and propose a novel model, Multi-Level Attention Network (MLANet). The first step of MLANet is to generate sub-instructions efficiently. We design a Fast Sub-instruction Algorithm (FSA) to segment the raw instruction into sub-instructions and generate a new sub-instruction dataset named ``FSASub". FSA is annotation-free and faster than the current method by 70 times, thus fitting the real-time requirement in continuous VLN. To solve the complex instruction understanding problem, MLANet needs a global perception of the instruction and observations. We propose a Multi-Level Attention (MLA) module to fuse vision, low-level semantics, and high-level semantics, which produce features containing a dynamic and global comprehension of the task. MLA also mitigates the adverse effects of noise words, thus ensuring a robust understanding of the instruction. To correctly predict actions in long trajectories, MLANet needs to focus on what sub-instruction is being executed every step. We propose a Peak Attention Loss (PAL) to improve the flexible and adaptive selection of the current sub-instruction. PAL benefits the navigation agent by concentrating its attention on the local information, thus helping the agent predict the most appropriate actions. We train and test MLANet in the standard benchmark. Experiment results show MLANet outperforms baselines by a significant margin. | [
8622
] | Train |
44,460 | 28 | Title: Alamouti-Like Transmission Schemes in Distributed MIMO Networks
Abstract: The purpose of the study is to investigate potential benefits of using Alamouti-like orthogonal space-time-frequency block codes (STFBC) in distributed multiple-input multiple-output (D-MIMO) systems to increase the diversity at the UE side when instantaneous channel state information (CSI) is not available at radio units (RUs). Most of the existing transmission techniques require instantaneous CSI to form precoders which can only be realized together with accurate and up-to-date channel knowledge. STFBC can increase the diversity at UE side without estimating the downlink channel. Under challenging channel conditions, the network can switch to a robust mode where a certain data rate is maintained for users even without knowing the channel coefficients by means of STFBC. In this study, it will be mainly focused on clustering of RUs and user equipment, where each cluster adopts a possibly different orthogonal code, so that overall spectral efficiency is optimized. Potential performance gains over known techniques that can be used when the channel is not known will be shown and performance gaps to sophisticated precoders making use of channel estimates will be identified. | [] | Test |
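The transmission scheme above builds on Alamouti-like orthogonal blocks, which require no instantaneous CSI at the transmitter. For reference, the classic 2x2 Alamouti block (rows are time slots, columns are antennas) and its linear receiver combining, assuming a flat channel constant over both slots, can be written as:

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Space-time block for two symbols over two antennas and two time slots."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Combine the two received slot samples r1, r2 given channel gains h1, h2.

    Returns (|h1|^2 + |h2|^2) * s1 and (|h1|^2 + |h2|^2) * s2 plus noise terms,
    i.e. full transmit diversity with a simple linear operation.
    """
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat
```

The STFBC variants considered in the study extend the same orthogonal structure across space, time, and frequency resources within a cluster of RUs and users.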
44,461 | 30 | Title: Synthetic Cross-accent Data Augmentation for Automatic Speech Recognition
Abstract: The awareness for biased ASR datasets or models has increased notably in recent years. Even for English, despite a vast amount of available training data, systems perform worse for non-native speakers. In this work, we improve an accent-conversion model (ACM) which transforms native US-English speech into accented pronunciation. We include phonetic knowledge in the ACM training to provide accurate feedback about how well certain pronunciation patterns were recovered in the synthesized waveform. Furthermore, we investigate the feasibility of learned accent representations instead of static embeddings. Generated data was then used to train two state-of-the-art ASR systems. We evaluated our approach on native and non-native English datasets and found that synthetically accented data helped the ASR to better understand speech from seen accents. This observation did not translate to unseen accents, and it was not observed for a model that had been pre-trained exclusively with native speech. | [] | Train |
44,462 | 16 | Title: Towards Writer Retrieval for Historical Datasets
Abstract: nan | [] | Train |
44,463 | 16 | Title: The IMPTC Dataset: An Infrastructural Multi-Person Trajectory and Context Dataset
Abstract: Inner-city intersections are among the most critical traffic areas for injury and fatal accidents. Automated vehicles struggle with the complex and hectic everyday life within those areas. Sensor-equipped smart infrastructures, which can cooperate with vehicles, can benefit automated traffic by extending the perception capabilities of drivers and vehicle perception systems. Additionally, they offer the opportunity to gather reproducible and precise data of a holistic scene understanding, including context information as a basis for training algorithms for various applications in automated traffic. Therefore, we introduce the Infrastructural Multi-Person Trajectory and Context Dataset (IMPTC). We use an intelligent public inner-city intersection in Germany with visual sensor technology. A multi-view camera and LiDAR system perceives traffic situations and road users’ behavior. Additional sensors monitor contextual information like weather, lighting, and traffic light signal status. The data acquisition system focuses on Vulnerable Road Users (VRUs) and multi-agent interaction. The resulting dataset consists of eight hours of measurement data. It contains over 2,500 VRU trajectories, including pedestrians, cyclists, e-scooter riders, strollers, and wheelchair users, and over 20,000 vehicle trajectories at different day times, weather conditions, and seasons. In addition, to enable the entire stack of research capabilities, the dataset includes all data, starting from the sensor-, calibration- and detection data until trajectory and context data. The dataset is continuously expanded and is available online for non-commercial research at https://github.com/kav-institute/imptc-dataset. | [] | Train |
44,464 | 27 | Title: V2V-based Collision-avoidance Decision Strategy for Autonomous Vehicles Interacting with Fully Occluded Pedestrians at Midblock on Multilane Roadways
Abstract: Pedestrian occlusion is challenging for autonomous vehicles (AVs) at midblock locations on multilane roadways because an AV cannot detect crossing pedestrians that are fully occluded by downstream vehicles in adjacent lanes. This paper tests the capability of vehicle-to-vehicle (V2V) communication between an AV and its downstream vehicles to share midblock pedestrian crossings information. The researchers developed a V2V-based collision-avoidance decision strategy and compared it to a base scenario (i.e., decision strategy without the utilization of V2V). Simulation results showed that for the base scenario, the near-zero time-to-collision (TTC) indicated no time for the AV to take appropriate action and resulted in dramatic braking followed by collisions. But the V2V-based collision-avoidance decision strategy allowed for a proportional braking approach to increase the TTC allowing the pedestrian to cross safely. To conclude, the V2V-based collision-avoidance decision strategy has higher safety benefits for an AV interacting with fully occluded pedestrians at midblock locations on multilane roadways. | [] | Train |
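The comparison above hinges on time-to-collision (TTC) and a proportional braking response once a crossing pedestrian is reported over V2V. A deliberately simplified sketch of that logic (the thresholds and the linear braking law are assumptions for illustration, not the paper's calibrated values):

```python
def time_to_collision(gap_m, v_vehicle_mps, v_obstacle_mps=0.0):
    """Longitudinal TTC: remaining gap divided by closing speed (inf if not closing)."""
    closing = v_vehicle_mps - v_obstacle_mps
    return gap_m / closing if closing > 0 else float("inf")

def brake_command(ttc, ttc_comfort=4.0, ttc_critical=1.5):
    """Proportional braking: 0 above ttc_comfort, 1 at/below ttc_critical, linear between."""
    if ttc >= ttc_comfort:
        return 0.0
    if ttc <= ttc_critical:
        return 1.0
    return (ttc_comfort - ttc) / (ttc_comfort - ttc_critical)
```

Without the V2V message the AV only starts computing TTC once the occluded pedestrian becomes visible, which is why the base scenario ends up with near-zero TTC and abrupt braking.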
44,465 | 28 | Title: Shaping a Smarter Electromagnetic Landscape: An Overview of IAB, NCR, and RIS in 5G Standardization and Beyond
Abstract: The main objective of 5G and beyond networks is to provide an optimal user experience in terms of throughput and reliability, irrespective of location and time. To achieve this, traditional fixed macro base station deployments are being replaced by more innovative and flexible solutions, such as wireless backhaul and relays. This article focuses on the evolution and standardization of these advancements, which are shaping the electromagnetic (EM) landscape. Specifically, we explore Integrated Access and Backhaul (IAB) nodes, offering a cost-efficient and agile alternative to fiber backhaul. We also discuss Network-Controlled Repeaters (NCRs) and the emergence of Reconfigurable Intelligent Surfaces (RIS) actively adapting the wireless environment. The article provides an overview of the 5G features and ongoing developments in 3GPP Release 18 related to these intelligent EM entities, highlighting the expected evolution of future wireless networks in terms of architecture, operations, and control signals. | [
30114
] | Test |
44,466 | 16 | Title: X-Avatar: Expressive Human Avatars
Abstract: We present X-Avatar, a novel avatar model that captures the full expressiveness of digital humans to bring about life-like experiences in telepresence, AR/VR and beyond. Our method models bodies, hands, facial expressions and appearance in a holistic fashion and can be learned from either full 3D scans or RGB-D data. To achieve this, we propose a part-aware learned forward skinning module that can be driven by the parameter space of SMPL-X, allowing for expressive animation of X-Avatars. To efficiently learn the neural shape and deformation fields, we propose novel part-aware sampling and initialization strategies. This leads to higher-fidelity results, especially for smaller body parts, while maintaining efficient training despite the increased number of articulated bones. To capture the appearance of the avatar with high-frequency details, we extend the geometry and deformation fields with a texture network that is conditioned on pose, facial expression, geometry and the normals of the deformed surface. We show experimentally that our method outperforms strong baselines both quantitatively and qualitatively on the animation task. To facilitate future research on expressive avatars, we contribute a new dataset, called X-Humans, containing 233 sequences of high-quality textured scans from 20 participants, totalling 35,500 data frames. Project page: https://ait.ethz.ch/X-Avatar. | [
3747
] | Train |
44,467 | 24 | Title: Uncertainty-Aware Vehicle Energy Efficiency Prediction Using an Ensemble of Neural Networks
Abstract: The transportation sector accounts for about 25% of global greenhouse gas emissions. Therefore, an improvement of energy efficiency in the traffic sector is crucial to reduce the carbon footprint. Efficiency is typically measured in terms of energy use per traveled distance, e.g., liters of fuel per kilometer. Leading factors that impact the energy efficiency are the type of vehicle, environment, driver behavior, and weather conditions. These varying factors introduce uncertainty in estimating vehicles’ energy efficiency. We propose in this article an ensemble learning approach based on deep neural networks that is designed to reduce the predictive uncertainty and to output measures of such uncertainty. We evaluated it using the publicly available Vehicle Energy Dataset and compared it with several baselines per vehicle and energy type. The results showed a high predictive performance, and they allowed us to output a measure of predictive uncertainty. | [] | Train |
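The ensemble idea in the abstract above (several networks whose disagreement serves as an uncertainty estimate) can be sketched as a generic deep ensemble. This is a minimal illustration, assuming bootstrap-resampled scikit-learn MLPs rather than the authors' actual architecture or the Vehicle Energy Dataset:

```python
# Illustrative deep-ensemble sketch (not the paper's exact model): each member is
# trained on a bootstrap resample; the ensemble mean is the prediction and the
# spread across members serves as a predictive-uncertainty measure.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_ensemble(X, y, n_members=5, seed=0):
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))      # bootstrap resample
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                             random_state=int(rng.integers(1_000_000)))
        members.append(model.fit(X[idx], y[idx]))
    return members

def predict_with_uncertainty(members, X):
    preds = np.stack([m.predict(X) for m in members])   # shape: (n_members, n_samples)
    return preds.mean(axis=0), preds.std(axis=0)        # prediction, uncertainty
```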
44,468 | 3 | Title: Goal oriented indicators for food systems based on FAIR data
Abstract: Throughout the food supply chain, between production, transportation, packaging, and green employment, a plethora of indicators cover the environmental footprint and resource use. By defining and tracking the more inefficient practices of the food supply chain and their effects, we can better understand how to improve agricultural performance, track nutrition values, and focus on the reduction of a major risk to the environment while contributing to food security. Our aim is to propose a framework for a food supply chain, devoted to the vision of zero waste and zero emissions, and at the same time, fulfilling the broad commitment on inclusive green economy within the climate action. To set the groundwork for a smart city solution which achieves this vision, main indicators and evaluation frameworks are introduced, followed by the drill down into most crucial problems, both globally and locally, in a case study in north Italy. Methane is on the rise in the climate agenda, and specifically in Italy emission mitigation is difficult to achieve in the farming sector. Accordingly, going from the generic frameworks towards a federation deployment, we provide the reasoning for a cost-effective use case in the domain of food, to create a valuable digital twin. A Bayesian approach to assess use cases and select preferred scenarios is proposed, realizing the potential of the digital twin flexibility with FAIR data, while understanding and acting to achieve environmental and social goals, i.e., coping uncertainties, and combining green employment and food security. The proposed framework can be adjusted to organizational, financial, and political considerations in different locations worldwide, rethinking the value of information in the context of FAIR data in digital twins. | [] | Train |
44,469 | 36 | Title: Dynamic generation and attribution of revenues in a video platform
Abstract: The consumption of online videos on the Internet grows every year, making it a market that generates an increasingly large volume of income. This paper deals with a problem of great interest in this context: the allocation of the revenues generated on a video website between the website and the video creators. For this, we consider a dynamic model of revenue generation, in which revenue can come from two sources: the pay-per-view system and the insertion of advertisements in the videos. Then, to study how to divide the revenues in a reasonable and fair way between the two parties, we consider a dynamic cooperative game that reflects the importance of each party in generating revenue. From this game, we characterize its Shapley value and introduce other allocation rules derived from it. We provide an algorithmic scheme to calculate the Shapley value and its derived rules and show that the computational complexity of the algorithms is polynomial. Finally, we provide illustrative examples and simulations to show how the proposed allocation rules perform. | [] | Train |
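To make the allocation idea in the abstract above concrete, here is a brute-force Shapley-value computation over player permutations for a toy website/creator game. The characteristic-function values are invented for illustration, and this exponential enumeration is not the polynomial-time scheme the paper proposes:

```python
# Generic Shapley value via permutation enumeration: each player receives its
# average marginal contribution over all arrival orders.
from itertools import permutations
from math import factorial

def shapley_value(players, v):
    """players: list of ids; v: characteristic function mapping frozenset -> revenue."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before   # marginal contribution of p
    n_fact = factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}

# Toy example: a website (w) and two creators (a, b) whose videos only earn
# revenue when hosted on the website; the numbers are made up.
revenue = {frozenset(): 0, frozenset("w"): 0, frozenset("a"): 0, frozenset("b"): 0,
           frozenset("wa"): 60, frozenset("wb"): 40, frozenset("ab"): 0,
           frozenset("wab"): 100}
print(shapley_value(list("wab"), lambda s: revenue[s]))
```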
44,470 | 6 | Title: Parachute: Evaluating Interactive Human-LM Co-writing Systems
Abstract: A surge of advances in language models (LMs) has led to significant interest in using LMs to build co-writing systems, in which humans and LMs interactively contribute to a shared writing artifact. However, there is a lack of studies assessing co-writing systems in interactive settings. We propose a human-centered evaluation framework, Parachute, for interactive co-writing systems. Parachute showcases an integrative view of interaction evaluation, where each evaluation aspect consists of categorized practical metrics. Furthermore, we present Parachute with a use case to demonstrate how to evaluate and compare co-writing systems using Parachute. | [] | Validation |
44,471 | 31 | Title: Integrating Commercial and Social Determinants of Health: A Unified Ontology for Non-Clinical Determinants of Health
Abstract: The objectives of this research are 1) to develop an ontology for commercial determinants of health (CDoH) by utilizing PubMed articles and ChatGPT; 2) to foster ontology reuse by integrating CDoH with an existing social determinants of health (SDoH) ontology into a unified structure; 3) to devise an overarching conception of all non-clinical determinants of health and to create an initial ontology, called N-CODH, for them; and 4) to validate the degree of correspondence between the concepts provided by ChatGPT and the existing SDoH ontology. | [] | Validation |
44,472 | 24 | Title: Feature Space Sketching for Logistic Regression
Abstract: We present novel bounds for coreset construction, feature selection, and dimensionality reduction for logistic regression. All three approaches can be thought of as sketching the logistic regression inputs. On the coreset construction front, we resolve open problems from prior work and present novel bounds for the complexity of coreset construction methods. On the feature selection and dimensionality reduction front, we initiate the study of forward error bounds for logistic regression. Our bounds are tight up to constant factors and our forward error bounds can be extended to Generalized Linear Models. | [] | Train |
44,473 | 30 | Title: PRESTO: A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs
Abstract: Research interest in task-oriented dialogs has increased as systems such as Google Assistant, Alexa and Siri have become ubiquitous in everyday life. However, the impact of academic research in this area has been limited by the lack of datasets that realistically capture the wide array of user pain points. To enable research on some of the more challenging aspects of parsing realistic conversations, we introduce PRESTO, a public dataset of over 550K contextual multilingual conversations between humans and virtual assistants. PRESTO contains a diverse array of challenges that occur in real-world NLU tasks, such as disfluencies, code-switching, and revisions. It is the only large-scale, human-generated conversational parsing dataset that provides structured context, such as a user's contacts and lists, for each example. Our mT5-based baselines demonstrate that the conversational phenomena present in PRESTO are challenging to model, and this is further pronounced in a low-resource setup. | [
38904,
39948
] | Train |
44,474 | 16 | Title: SegPrompt: Using Segmentation Map as a Better Prompt to Finetune Deep Models for Kidney Stone Classification
Abstract: Recently, deep learning has produced encouraging results for kidney stone classification using endoscope images. However, the shortage of annotated training data poses a severe problem in improving the performance and generalization ability of the trained model. It is thus crucial to fully exploit the limited data at hand. In this paper, we propose SegPrompt to alleviate the data shortage problems by exploiting segmentation maps from two aspects. First, SegPrompt integrates segmentation maps to facilitate classification training so that the classification model is aware of the regions of interest. The proposed method allows the image and segmentation tokens to interact with each other to fully utilize the segmentation map information. Second, we use the segmentation maps as prompts to tune the pretrained deep model, resulting in much fewer trainable parameters than vanilla finetuning. We perform extensive experiments on the collected kidney stone dataset. The results show that SegPrompt can achieve an advantageous balance between the model fitting ability and the generalization ability, eventually leading to an effective model with limited training data. | [
29929,
5454
] | Train |
44,475 | 28 | Title: Kolam Simulation using Angles at Lattice Points
Abstract: Kolam is a ritual art form practised by people in South India and consists of rule-bound geometric patterns of dots and lines. Single loop Kolams are mathematical closed loop patterns drawn over a grid of dots and conforming to certain heuristics. In this work, we propose a novel encoding scheme where we map the angular movements of Kolam at lattice points into sequences containing $4$ distinct symbols. This is then used to simulate single loop Kolam procedure via turtle moves in accordance with the desired angular direction at specific points. We thus obtain sequential codes for Kolams, unique up to cyclic permutations. We specify the requirements for the algorithm and indicate the general methodology. We demonstrate a sample of Kolams using our algorithm with a software implementation in Python. | [] | Train |
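A minimal sketch of the turtle-move simulation described above, assuming a hypothetical 4-symbol alphabet (left, right, straight, u-turn) and an arbitrary step length; the paper's actual symbol set, heuristics, and lattice handling are not reproduced:

```python
# Hypothetical turtle-move sketch of the 4-symbol encoding idea: each symbol is
# mapped to an angular change at a lattice point, then the turtle advances one step.
import turtle

ANGLE = {"L": 90, "R": -90, "S": 0, "U": 180}   # illustrative symbol-to-angle mapping

def draw_kolam(code, step=40):
    t = turtle.Turtle()
    for symbol in code:
        t.left(ANGLE[symbol])   # rotate according to the encoded angular move
        t.forward(step)         # advance to the next lattice point
    turtle.done()

draw_kolam("LRLRSLRLRS")        # a toy sequence, not a real Kolam pattern
```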
44,476 | 24 | Title: Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning
Abstract: Recent works have shown that self-supervised learning can achieve remarkable robustness when integrated with adversarial training (AT). However, the robustness gap between supervised AT (sup-AT) and self-supervised AT (self-AT) remains significant. Motivated by this observation, we revisit existing self-AT methods and discover an inherent dilemma that affects self-AT robustness: either strong or weak data augmentations are harmful to self-AT, and a medium strength is insufficient to bridge the gap. To resolve this dilemma, we propose a simple remedy named DYNACL (Dynamic Adversarial Contrastive Learning). In particular, we propose an augmentation schedule that gradually anneals from a strong augmentation to a weak one to benefit from both extreme cases. Besides, we adopt a fast post-processing stage for adapting it to downstream tasks. Through extensive experiments, we show that DYNACL can improve state-of-the-art self-AT robustness by 8.84% under Auto-Attack on the CIFAR-10 dataset, and can even outperform vanilla supervised adversarial training for the first time. Our code is available at \url{https://github.com/PKU-ML/DYNACL}. | [
13858,
5508
] | Train |
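The DYNACL abstract above centers on an augmentation schedule that anneals from strong to weak. A hedged sketch of such a schedule with torchvision transforms follows; the linear decay and the specific transform strengths are assumptions made for illustration, not the authors' exact recipe (their code is at the linked repository):

```python
# Sketch of a strength-annealing augmentation schedule in the spirit of DYNACL:
# the strength parameter decays linearly from 1.0 (strong) to 0.0 (weak) over training
# and scales the aggressiveness of each transform.
from torchvision import transforms

def annealed_augmentation(epoch, total_epochs, image_size=32):
    strength = max(0.0, 1.0 - epoch / total_epochs)      # 1.0 -> 0.0 over training
    return transforms.Compose([
        transforms.RandomResizedCrop(image_size, scale=(1.0 - 0.9 * strength, 1.0)),
        transforms.RandomHorizontalFlip(),
        transforms.RandomApply(
            [transforms.ColorJitter(0.4 * strength, 0.4 * strength,
                                    0.4 * strength, 0.1 * strength)],
            p=0.8 * strength),
        transforms.RandomGrayscale(p=0.2 * strength),
        transforms.ToTensor(),
    ])
```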
44,477 | 13 | Title: Towards Zero Memory Footprint Spiking Neural Network Training
Abstract: Biologically-inspired Spiking Neural Networks (SNNs), processing information using discrete-time events known as spikes rather than continuous values, have garnered significant attention due to their hardware-friendly and energy-efficient characteristics. However, the training of SNNs necessitates a considerably large memory footprint, given the additional storage requirements for spikes or events, leading to a complex structure and dynamic setup. In this paper, to address memory constraint in SNN training, we introduce an innovative framework, characterized by a remarkably low memory footprint. We \textbf{(i)} design a reversible SNN node that retains a high level of accuracy. Our design is able to achieve a $\mathbf{58.65\times}$ reduction in memory usage compared to the current SNN node. We \textbf{(ii)} propose a unique algorithm to streamline the backpropagation process of our reversible SNN node. This significantly trims the backward Floating Point Operations Per Second (FLOPs), thereby accelerating the training process in comparison to current reversible layer backpropagation method. By using our algorithm, the training time is able to be curtailed by $\mathbf{23.8\%}$ relative to existing reversible layer architectures. | [
13513,
11884,
37372
] | Train |
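The memory saving described in the abstract above comes from a reversible node whose inputs can be recomputed from its outputs instead of being cached for backpropagation. The PyTorch sketch below shows only the generic reversible-coupling pattern under that assumption; the paper's spiking dynamics and custom backward pass are not included:

```python
# Minimal reversible-coupling sketch (RevNet-style): forward outputs determine the
# inputs exactly, so activations need not be stored during training.
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
        self.g = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        x2 = y2 - self.g(y1)      # recompute inputs instead of caching them
        x1 = y1 - self.f(x2)
        return x1, x2

block = ReversibleBlock(8)
a, b = torch.randn(4, 8), torch.randn(4, 8)
y1, y2 = block(a, b)
r1, r2 = block.inverse(y1, y2)
print(torch.allclose(a, r1, atol=1e-6), torch.allclose(b, r2, atol=1e-6))  # True True
```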